"Working under a tech-fueled microscope in the coronavirus era | VentureBeat"
"https://venturebeat.com/2020/06/09/working-under-a-tech-fueled-microscope-in-the-coronavirus-era"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Feature Working under a tech-fueled microscope in the coronavirus era Share on Facebook Share on X Share on LinkedIn As employees across the United States return to workplaces after months of coronavirus quarantine, many changes will be physical and obvious — face masks and clear protective shields, social distancing protocols, and stricter limits on customer access. But technology is quietly enabling a second layer of “defenses” that will likely have even larger impacts on modern workplaces: employee surveillance tools. In the sort of meta-irony science fiction authors would have found delicious, fear of a physical virus has led employers to adopt productivity surveillance apps that barely differ from computer viruses, along with workplace safety monitoring technologies that only George Orwell might have considered inevitable. Users spent decades firewalling their PCs against keyloggers, activity-monitoring background processes, and clandestine video recorders. Now employers are turning to these very tools to keep an eye on their workers. 
Some tech companies are suggesting that businesses go further, monitoring both employees and personal contacts for COVID-19 using private tracing databases. It’s easy to understand the underlying business concern: Companies want to protect themselves and their employees against risks, specifically declining worker productivity and the prospect of larger-scale infections as offices reopen. But in the wrong hands — and with questionable tools — the new workplace magnifying glass can become a laser, burning unreasonably micromanaged employees and businesses alike. Salesforce’s Work.com is just one example of a platform that touts office-specific coronavirus contact tracing, promising to turn workplaces into COVID-19 infection and exposure data collection sources for “public and private sector leaders.” Its stated goal is to allow leaders to “gather the data needed to monitor and analyze employee and visitor health and wellness.” Even if a company’s motivations for health monitoring are entirely benevolent, it’s creepy that the service claims to help private organizations track the health status of visitors, employees, their relatives, and their “interpersonal” contacts. Today, many people (and some governments) don’t trust Apple and Google’s anonymized contact tracing system, so why would anyone feel comfortable being tracked by various corporate databases with amorphous privacy guarantees?

Above: Salesforce’s Work.com offers businesses tools to individually track COVID-19 exposures of employees, their relatives, and contacts — but is this wise?

Companies that maintain private COVID-19 databases may also face legal ramifications.
Thanks to asymptomatic carriers, unreliable tests, and sketchy infection reporting, there are serious concerns about false positives and negatives leading tracing systems to simultaneously undercount and overcount infections. It’s no stretch to predict individual and class action lawsuits after employees lose hours or jobs over supposed COVID-19 exposure that didn’t actually affect their ability to work. Additionally, companies that have private databases but don’t act on infection data might be held responsible for failing to protect employees from “known” harm. Businesses interested in keeping their workplaces healthy might better focus on worker safety measures. New safety tools such as AI workplace cameras promise to automatically measure worker distance or detect a lack of personal protective equipment. This may sound like 1984’s Big Brother, but employees might not object to monitoring systems that use computers rather than humans to observe what’s happening, even if this introduces constant video recording into workspaces. Similarly, employees might embrace location-monitoring wearables and apps if these facilitate conveniences, such as easing access to locked areas or physical activity tracking. As we enter a new stage of the pandemic, companies aren’t wholly or even mostly focusing on health and safety — they’re also concerned about productivity. Without physical offices, companies fear people working from home may be doing all sorts of things that aren’t productive. At a minimum, productivity monitoring software raises the specter of a manager looking over your shoulder to be sure you’re on task. But overreliance on monitoring — whether it’s handled through software or constant meetings and check-ins — can have the opposite impact on worker productivity, stifling employees to the point of protest. A boss accustomed to physically patrolling cubicles may feel uneasy directing employees in remote home offices.
It’s easy to imagine why such a person might mandate remote computer monitoring tools, peering at individual employee screens, checking time-on-task metrics, and issuing non-compliance warnings. Apply this passive monitoring atop mandatory team and individual check-in meetings, plus whatever collaborative chat system the company uses, and the system might almost resemble a traditional corporate office. But as reports of employee work-life balance frustration stack up during the pandemic, it’s clear that excessive oversight is leading to employee alienation. Accustomed to flexibility to accomplish their targets, self-starters are getting distracted by endless meetings and various forms of managerial monitoring. Monitoring software can now deliver clock-in/out times that are “accurate” down to the second, and alerts when employees have work apps running in the background rather than the foreground. However, people tend to hate having their schedules micromanaged, and they will likely be out the door whenever better employment opportunities arise. In the post-COVID-19 era, employers will have to adapt to a lot of new realities. But it’s increasingly clear that the mere existence of new workplace health, safety, and productivity surveillance tools doesn’t mean every employer should use them all — or even use carefully selected ones on full blast. An office-specific contact tracing platform might look great but introduce unexpected productivity and legal risks, just as an app that promises to track employee clock-ins or productivity levels may lead angry workers to game the system. It goes without saying that seeking too much workplace control can result in the loss of good employees, either preceded or followed by customers. As tempting as it may be to search for technological silver bullets, enterprises hoping to recover quickly from the pandemic should look at the bigger picture.
Rather than relying heavily on monitoring technologies, smart companies should seek humane work-life balances that reduce the need for surveillance, freeing everyone — workers and managers alike — to make better use of their business hours. At the same time, companies should feel comfortable adopting modern tools that increase workers’ safety while respecting their human dignity, as these solutions will likely stand the test of time.

© 2023 VentureBeat. All rights reserved.
"AI in health care creates unique data challenges | VentureBeat"
"https://venturebeat.com/2021/02/01/ai-in-health-care-creates-unique-data-challenges"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI in health care creates unique data challenges Share on Facebook Share on X Share on LinkedIn The health care industry produces an enormous amount of data. An IDC study estimates the volume of health data created annually, which hit over 2,000 exabytes in 2020, will continue to grow at a 48% rate year over year. Accelerated by the passage of the U.S. Patient Protection and Affordable Care Act , which mandated that health care practitioners adopt electronic records, there’s now a wealth of digital information about patients, practices, and procedures where before there was none. The trend has enabled significant advances in AI and machine learning, which rely on large datasets to make predictions ranging from hospital bed capacity to the presence of malignant tumors in MRIs. But unlike other domains to which AI has been applied, the sensitivity and scale of health care data makes collecting and leveraging it a formidable challenge. 
Tellingly, although 91% of respondents to a recent KPMG survey predicted that AI could increase patient access to care, 75% believe AI could threaten patient data privacy. Moreover, a growing number of academics point to imbalances in health data that could exacerbate existing inequalities.

Privacy

Tech companies and health systems have trained AI to perform remarkable feats using health data. Startups like K Health source from databases containing hundreds to millions of EHRs to build patient profiles and personalize automated chatbots’ responses. IBM, Pfizer, Salesforce, and Google, among others, are attempting to use health records to predict the onset of conditions like Alzheimer’s, diabetes, diabetic retinopathy, breast cancer, and schizophrenia. And at least one startup offers a product that remotely monitors patients suffering from heart failure by collecting recordings via a mobile device and analyzing them with an AI algorithm. The datasets used to train these systems come from a range of sources, but in many cases, patients aren’t fully aware their information is included among them. Emboldened by the broad language in the Health Insurance Portability and Accountability Act (HIPAA), which enables companies and care providers to use patient records to carry out “healthcare functions” and share information with businesses without first asking patients, companies have tapped into the trove of health data collected by providers in pursuit of competitive advantages. In 2019, The Wall Street Journal reported details on Project Nightingale, Google’s partnership with Ascension, the nation’s second-largest health system, to collect the personal health data of tens of millions of patients for the purposes of developing AI-based services for medical providers.
Separately, Google maintains a 10-year research partnership with the Mayo Clinic that grants the company limited access to anonymized data it can use to train algorithms. Regulators have castigated Google for its health data practices in the past. A U.K. regulator concluded that The Royal Free London NHS Foundation Trust, a division of the U.K.’s National Health Service based in London, provided Google parent company Alphabet’s DeepMind with data on 1.6 million patients without their consent. And in 2019, Google and the University of Chicago Medical Center were sued for allegedly failing to scrub timestamps from anonymized medical records. (A judge tossed the suit in September.) But crackdowns and outcries are exceptions to the norm. K Health claims to have trained its algorithms on a 20-year database of millions of health records and billions of “health events” supplied partially by Maccabi, Israel’s second-largest health fund, but it’s unclear how many of the patients represented in the datasets were informed that their data would be used commercially. Other firms including IBM have historically drawn on data from research like the Framingham Heart Study for experiments unrelated to the original purpose (albeit in some cases with approval from institutional review boards). Startups are generally loath to disclose the source of their AI systems’ training data for competitive reasons. Health Data Analytics Institute says only that its predictive health outcome models were trained on data from “over 100 million people in the U.S.” and over 20 years of follow-up records. For its part, Vara, which is developing algorithms to screen for breast cancer, says it uses a dataset of 2.5 million breast cancer images for training, validation, and testing. In a recent paper published in the New England Journal of Medicine, researchers described an ethical framework for how academic centers should use patient data.
They align with the belief that the standard consent form that patients typically sign at the point of care isn’t sufficient to justify the use of their data for commercial purposes, even in anonymized form. These documents, which typically ask patients to consent to the reuse of their data to support medical research, are often vague about what form that medical research might take. “Regulations give substantial discretion to individual organizations when it comes to sharing deidentified data and specimens with outside entities,” the coauthors wrote. “Because of important privacy concerns that have been raised after recent revelations regarding such agreements, and because we know that most participants don’t want their data to be commercialized in this way, we [advocate prohibiting] the sharing of data under these circumstances.”

Standardization

From 2009 to 2016, the U.S. government commissioned researchers to find the best way to improve and promote the use of electronic health records (EHRs). One outcome was a list of 140 data elements that should be collected from every patient on each visit to a physician, which the developers of EHR systems were incentivized to incorporate into their products through a series of federal stimulus packages. Unfortunately, the implementation of these elements tended to be haphazard. Experts estimate that as many as half of records are mismatched when data is transferred between health care systems. In a 2018 survey by Stanford Medicine in California, 59% of clinicians said they felt their electronic medical record (EMR) systems needed a complete overhaul. The nonprofit MITRE Corporation has proposed what it calls the Standard Health Record (SHR), an attempt at establishing a high-quality, computable source of patient information.
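To make “computable” concrete, here is a minimal sketch of what a standardized patient record looks like in the style of HL7’s FHIR standard, which efforts like SHR build on. The field names follow FHIR R4’s Patient resource; the function name and the specific values are hypothetical, for illustration only:

```python
import json

def make_patient(patient_id, family, given, birth_date):
    """Assemble a minimal FHIR R4 Patient resource as a plain dict."""
    return {
        "resourceType": "Patient",  # every FHIR resource declares its type
        "id": patient_id,           # logical id of the record on a server
        "name": [{"family": family, "given": [given]}],
        "birthDate": birth_date,    # ISO 8601 date string
    }

# Hypothetical record; FHIR servers exchange resources like this as JSON.
record = make_patient("example-001", "Doe", "Jane", "1950-04-01")
print(json.dumps(record, indent=2))
```

Because every system agrees on these field names and shapes, a record created in one EMR can be parsed by another without the mismatches described above.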
The open source specification, which draws on existing medical records models like Health Level Seven International’s Fast Healthcare Interoperability Resources, contains information critical to patient identification, emergency care, and primary care, as well as areas related to social determinants of health. Plans for future iterations of SHR call for incorporating emerging treatment paradigms such as genomics, microbiomics, and precision medicine. However, given that implementing an EMR system could cost a single physician over $160,000, specs like SHR seem unlikely to gain currency anytime soon.

Errors and biases

Errors and biases aren’t strictly related to the standardization problem, but they’re emergent symptoms of it. Tracking by the Pennsylvania Patient Safety Authority in Harrisburg found that from January 2016 to December 2017, EHR systems were responsible for 775 problems during laboratory testing in the state, with human-computer interactions responsible for 54.7% of events and the remaining 45.3% caused by a computer. Furthermore, a draft U.S. government report issued in 2018 found that clinicians are inundated with (and not uncommonly miss) alerts that range from minor issues about drug interactions to those that pose considerable risks. Mistakes and missed alerts contribute to another growing problem in health data: bias. Partly due to a reticence to release code, datasets, and techniques, much of the data used today to train AI algorithms for diagnosing diseases might perpetuate inequalities. A team of U.K. scientists found that almost all eye disease datasets come from patients in North America, Europe, and China, meaning eye disease-diagnosing algorithms are less certain to work well for racial groups from underrepresented countries. In another study, Stanford University researchers claimed that most of the U.S. data for studies involving medical uses of AI come from California, New York, and Massachusetts.
A study of a UnitedHealth Group algorithm determined that it could underestimate by half the number of Black patients in need of greater care. Researchers from the University of Toronto, the Vector Institute, and MIT showed that widely used chest X-ray datasets encode racial, gender, and socioeconomic bias. And a growing body of work suggests that skin cancer-detecting algorithms tend to be less precise when used on Black patients, in part because AI models are trained mostly on images of light-skinned patients.

Security

Even in the absence of bias, errors, and other confounders, health systems must remain vigilant for signs of cyber intrusion. Malicious actors are increasingly holding data hostage in exchange for ransom, often to the tune of millions of dollars. In September, employees at Universal Health Services, a Fortune 500 owner of a nationwide network of hospitals, reported widespread outages that resulted in delayed lab results, a fallback to pen and paper, and patients being diverted to other hospitals. Earlier that month, a ransomware attack at Düsseldorf University Hospital in Germany resulted in emergency-room diversions to other hospitals. Over 37% of IT health care professionals responding to a Netwrix survey said their health care organization experienced a phishing incident. Just over 32% said their organization experienced a ransomware attack during the novel coronavirus pandemic’s first few months, and 37% reported there was an improper data sharing incident at their organization.

Solutions

Solutions to challenges in managing health care data necessarily entail a combination of techniques, approaches, and novel paradigms. Securing data requires data-loss prevention, policy and identity management, and encryption technologies, including those that allow organizations to track actions that affect their data.
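One common way to make “actions that affect data” trackable is a tamper-evident audit log, in which each entry commits to the hash of the entry before it, so any later edit or deletion breaks the chain. A minimal sketch (the entry format and function names are illustrative, not drawn from any specific product):

```python
import hashlib
import json

def _entry_hash(action: str, prev: str) -> str:
    """Hash of an entry's action plus the previous entry's hash."""
    payload = json.dumps({"action": action, "prev": prev}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def append_entry(log: list, action: str) -> None:
    """Append an action record that commits to the previous entry's hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    log.append({"action": action, "prev": prev,
                "hash": _entry_hash(action, prev)})

def verify_log(log: list) -> bool:
    """Recompute the chain; an edited or removed entry makes this False."""
    prev = "0" * 64
    for entry in log:
        if entry["prev"] != prev or entry["hash"] != _entry_hash(entry["action"], prev):
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, "viewed record 12345")
append_entry(log, "exported record 12345")
assert verify_log(log)
log[0]["action"] = "viewed record 99999"  # tampering is now detectable
assert not verify_log(log)
```

Real deployments layer access control and off-site replication on top of this idea, but the hash chain is what makes after-the-fact tampering visible.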
As for standardizing it, both incumbents like Google and Amazon and startups like Human API offer tools designed to consolidate disparate records. On the privacy front, experts agree that transparency is the best policy. Stakeholder consent must be clearly given to avoid violating the will of those being treated. And deidentification capabilities that remove or obfuscate personal information are table stakes for health systems, as are privacy-preserving methods like differential privacy, federated learning, and homomorphic encryption. “I think [federated learning] is really exciting research, especially in the space of patient privacy and an individual’s personally identifiable information,” Andre Esteva, head of medical AI at Salesforce Research, told VentureBeat in a phone interview. “Federated learning has a lot of untapped potential … [it’s] yet another layer of protection by preventing the physical removal of data from [hospitals] and doing something to provide access to AI that’s inaccessible today for a lot of reasons.” Biases and errors are harder problems to solve, but the coauthors of one recent study recommend that health care practitioners apply “rigorous” fairness analyses prior to deployment as one solution. They also suggest that clear disclaimers about the dataset collection process and the potential resulting bias could improve assessments for clinical use. “Machine learning really is a powerful tool, if designed correctly — if problems are correctly formalized and methods are identified to really provide new insights for understanding these diseases,” Dr. Mihaela van der Schaar, a Turing Fellow and professor of ML, AI, and health at the University of Cambridge and UCLA, said during a keynote at the ICLR 2020 conference in May. “Of course, we are at the beginning of this revolution, and there is a long way to go. But it’s an exciting time. And it’s an important time to focus on such technologies. 
I really believe that machine learning can open clinicians and medical researchers [to new possibilities] and provide them with powerful new tools to better [care for] patients.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.
"Elder care, wireless AI, and the Internet of Medical Things | VentureBeat"
"https://venturebeat.com/2021/02/01/elder-care-wireless-ai-and-the-internet-of-medical-things"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Elder care, wireless AI, and the Internet of Medical Things Share on Facebook Share on X Share on LinkedIn As we age, we gradually agree to medical exams and medications that would have been unthinkable in our youth, until we become senior citizens — the point at which we frequently engage with doctors, and our health becomes a subject of constant concern. We’ve been trained to accept this as the cycle of life, but it’s increasingly clear that the next generation of seniors will have better experiences: Advancements in artificial intelligence and wireless technologies will enable massive streams of biometric data to be harvested and processed from wearables, internet of things (IoT) sensors, and chip-laden pills, prolonging and saving lives. At a time when there’s potential danger to seeing patients in person, and health care facilities are wary of becoming overwhelmed because of COVID-19 cases, these technologies are not merely beneficial, but incredibly important. 
Smarter sensors, software, and services will enable health monitoring to be less invasive and more automated than before, reducing the need for human caregivers while restoring dignity that seniors have lost over the years. Ten years ago, monitoring a senior for hip-breaking falls might have been impractical without the aid of a relative or personal nurse, but falls can now be detected and addressed immediately with smartwatches; similarly, wearables targeting everything from swallowing problems to incontinence are now available from health startups. The next steps will be monitoring without wearables — wireless devices that reduce or eliminate human involvement in the monitoring process — and medically specific internet of medical things (IoMT) sensors that are specially designed to record human biometrics. One example: Origin Wireless has developed a “wireless AI” solution that uses Wi-Fi signals to map closed spaces. The wireless radio waves create an invisible “wave pool” in a room, and Origin’s Remote Patient Monitoring system uses AI to monitor the pool for ripples that signal disruptions. Without requiring either a camera or motion sensors, Origin RPM knows when a person abruptly shifts from standing to lying on the floor, and can trigger an alert to local caregivers or off-site family members. More subtle changes in the data streams can even indicate granular changes in a person’s activity, breathing, and sleeping. Japanese startup SakuraTech is using millimeter wave signals to wirelessly monitor up to four heart and respiration rates at once, promising to work through common impediments such as clothing and blankets, sending data to the AWS cloud for constant remote monitoring.
Without machine learning, interpreting room-scale, volumetric masses of wireless data in this way would be impractical — akin to a sonar system constantly seeing objects moving in the ocean without identifying their intent. But trained AI can understand the layout of a room as visualized with radio waves, then determine dangerously atypical patterns in the people who live in that room, all without violating personal privacy. Unlike AI image segmentation, Wi-Fi and millimeter wave scanning work like radar, and their data can be used to recognize patterns without the need for photo or video recording. Another company, Essence Group, recently introduced 5G PERS, a senior independent living solution that enables activity monitoring, fall detection, and voice connectivity. 5G PERS uses a collection of traditional IoT motion sensors for monitoring, but uniquely relies upon 5G cellular connectivity rather than Wi-Fi or 4G for infrastructure. Because it connects the IoT sensors to the cloud over a cellular connection, 5G PERS can operate in homes where seniors don’t have Wi-Fi routers — the solution is standalone, so it can be installed and then remotely monitored without depending on the senior to maintain separate hardware or services. General-purpose IoT sensors have used cameras and movement detectors to enable everything from smart refrigerators to industrial quality assurance systems, but medically focused IoMT sensors wirelessly connect to health clouds for individual biometric monitoring and data storage. Since they’re designed to track specific human vital signs, IoMT sensors can be far more “personal” than ever before: Their tiny chips can enable exterior motion tracking in always-on wearables or internal monitoring using ingestible wireless pills such as HQ, Inc.’s CorTemp — a core temperature probe that remains inside your body for 24-36 hours.
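The “ripples” idea described above — flagging abrupt deviations in a continuous signal stream — can be sketched as a simple statistical anomaly detector. This toy version is not Origin’s actual algorithm; the window size and threshold are arbitrary assumptions, and a real system would operate on multi-antenna channel data rather than a single number per tick:

```python
import statistics

def detect_disruptions(samples, window=10, k=4.0):
    """Flag indices where a sample deviates sharply from the recent baseline.

    `samples` is any numeric stream (e.g., per-tick signal amplitude);
    `window` sets how much history forms the baseline, and `k` is how
    many baseline standard deviations count as anomalous.
    """
    alerts = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu = statistics.mean(baseline)
        # On a perfectly flat baseline pstdev is 0; use a tiny floor instead.
        sigma = statistics.pstdev(baseline) or 1e-9
        if abs(samples[i] - mu) > k * sigma:
            alerts.append(i)
    return alerts

# A flat signal with one abrupt jump (think: person suddenly on the floor).
stream = [1.0] * 20 + [5.0]
print(detect_disruptions(stream))  # -> [20]
```

The machine learning systems in question replace the fixed threshold with models trained to distinguish a fall from, say, a dropped blanket, but the underlying task is the same: separate dangerous deviations from normal variation.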
While medical technologies keep improving, there’s no guarantee that they’ll be immediately or widely adopted. Proteus Digital Health successfully completed clinical validations last year for ingestible microchips that monitored adherence to medication schedules, but ultimately filed for bankruptcy. The problem wasn’t the practicality of the chips, but rather that they would double or triple a medication’s monthly cost. History suggests that the chip prices will continue to drop over time, giving the technology a greater chance of mass adoption and increasing the number of data streams from monitored patients. The trend is clear: IoMT sensors will only become more powerful, easier to use, and ubiquitous. New 5-nanometer chip fabrication has already yielded atomic-scale transistors that can be powered by barely any energy, and even smaller 3-nanometer chips will be commercially available next year, making microchipped pills literally easier to swallow. At the same time, mobile AI chips are nearly doubling in performance each year, such that tomorrow’s client devices could have AI capabilities superior to yesterday’s cloud and edge servers. Remote monitoring tasks that may have been too challenging two years ago will seem wholly within the power of even common smartphones two years from now. Society’s biggest challenge may be to make seniors comfortable with adopting these new technologies, as it may be easier for older users to shrug off wearables, room-scale monitors, and ingestible chips as “unnecessary” than accept them as the new normal. But as the tech keeps shrinking, it’s likely to fade into the background of our lives, eventually solving problems before we — or other human monitors — even realize what’s happening. That means today’s and tomorrow’s seniors can realistically look forward to a new era in medicine where we depend less on doctors yet benefit every day from more comprehensive health care, ultimately living longer and better than ever before. 
"How the responsible use of AI can help create a better health system | VentureBeat"
"https://venturebeat.com/2021/02/01/how-the-responsible-use-of-ai-can-help-create-a-better-health-system"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored How the responsible use of AI can help create a better health system Share on Facebook Share on X Share on LinkedIn Presented by Optum Artificial intelligence (AI) used responsibly has the opportunity to live up to its promise of helping achieve what we in health care call the Quadruple Aim: better health outcomes, better patient experiences, and better care provider experiences, all at a lower total cost. However, without the proper safeguards, the use of AI can lead to unintended — and sometimes harmful — consequences. AI is susceptible to this inadvertent harm for a simple reason: it is trained on data that reflect biases that occur in the real world. If a model that’s used to inform care decisions doesn’t account for racial or geographic health disparities, predictive AI may unintentionally perpetuate biases that affect an individual’s health or access to care. An algorithm can also inadvertently produce inequities if it’s not used as intended or if it’s applied inappropriately. 
To overcome these challenges, the health care industry needs to acknowledge their existence and take proactive steps to minimize them. Fortunately, health care leaders do not view AI as a substitute for the human touch in the delivery and administration of health care. Instead, they see it as a useful tool for highly trained experts that helps them do their jobs more efficiently and effectively. At Optum, teams of data scientists work every day to help ensure that AI infused into our health system is applied responsibly, ethically, and equitably. What’s all the hype about? When it’s trained and deployed appropriately, AI’s advantages in clinical care are clear. Its ability to quickly analyze far greater quantities of data than is humanly possible can help in a number of ways, from simplifying appointment scheduling to identifying anomalies in medical imaging studies to powering digital triage tools. But AI-enabled capabilities can also alleviate many of the less obvious administrative headaches associated with our health system. For example, AI can review medical documentation and help determine a hospital visit’s appropriate reimbursement status, improving efficiency. It can root out potential fraud, waste, and abuse in medical or pharmacy claims, reducing unnecessary spend. And it can help us narrow down potential new drug candidates, creating quicker access to new therapies while avoiding the costs of unsuccessful trials. The list goes on. Its potential is seemingly endless, which reflects both the burgeoning use cases of the technology itself and the fact that we have so many systems within health care that need to be fixed or improved. The good news is that we’ve moved beyond hype — these advantages are being realized more and more each day. 
For three years now, as a part of the annual Optum Survey on AI in Health Care, executives from hospitals and health systems, health plans, employers, and life sciences organizations have shared with us their attitudes and expectations related to AI in health care. This year’s big takeaway was a resounding, growing confidence in AI’s potential. More than half — 59% — of survey respondents said they expect a return on their AI investments in under three years, nearly double the 31% who answered similarly in 2018. And this confidence is influencing their hiring decisions — 95% want to hire AI talent and 92% expect their workforce to understand how AI makes its predictions. So, what does this all mean? Leaders from all sectors of health care are signaling that infusing AI into their businesses is a critical step toward achieving their organizations’ strategic goals. They have confidence that these investments are worthwhile, both from a financial and patient care perspective. Working toward more equitable health outcomes While optimistic about AI’s benefits, health care leaders also expressed concerns about its potential to perpetuate inequities. Three out of four executives said they were wary about bias creeping into AI’s results, whether because of the algorithms embedded in the technology or because of how the algorithm is used. This concern was especially prevalent within organizations that have not yet implemented AI (79%), but also occurred among those currently utilizing AI (66%). A perceived lack of transparency is also a worry. Seventy-three percent of respondents were concerned about the “black box” nature of AI results — meaning it is not always clear which combination of parameters is driving a model’s recommendations or a model’s efficacy. Both of these concerns stem from how predictive algorithms work. As we mentioned earlier, historical patient data reflect historical inequities that, left unchecked, may disadvantage some populations. 
And the uncertainty clouding the explainability of the model’s output is a direct result of how machine learning algorithms ingest data and form their own connections to create inferences. In a field that has long prized evidence-based decision-making, that can be a tough hurdle to overcome. To help ensure AI doesn’t perpetuate inequities, health care leaders are doing two things. First, their teams are using social determinants of health (SDOH) data to provide insight into where and how people live, work, learn, and play. Leaders are hoping it will help them identify the complex factors that affect health outcomes. Second, whenever possible, they’re building explainable interfaces (e.g., conversational user interfaces) into their platforms to better understand what’s influencing the output. Greater transparency can help human experts ensure that the model does not inadvertently favor one group or geography over another. By feeding more complete data into their AI algorithms and adding explainability, health care executives hope to avoid and combat bias. At health care organizations that either utilize AI or plan to, almost every leader surveyed (96%) perceives AI as an important tool to help achieve health equity. Responsible use means more than just ethical data science To unlock the advantages of artificial intelligence in an equitable and sustainable way, complete data and transparency are only part of the solution. The responsible use of advanced analytics requires awareness of the strengths and limitations of data, AI methodology, and the application of AI results. No data set is perfect, especially when it comes to minority populations, who are historically underrepresented in widely available data types. Data reflects the biases that occur in the real world, but we can use technology to help overcome them. There are tools that can evaluate fairness in machine learning models, like Aequitas. 
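Open-source toolkits like Aequitas operationalize exactly this kind of audit. As a minimal sketch of the underlying idea (plain Python, not Aequitas's actual API), a group-fairness check compares an error rate such as the false negative rate across demographic groups:

```python
# Minimal sketch of a group-fairness audit, in the spirit of tools like
# Aequitas (this is NOT Aequitas's API; it illustrates the idea only).
# Given a true label and a prediction per patient, compare how often
# each demographic group's positives are missed.
from collections import defaultdict

def false_negative_rates(records):
    """records: iterable of (group, y_true, y_pred) tuples."""
    fn = defaultdict(int)   # missed positives per group
    pos = defaultdict(int)  # total positives per group
    for group, y_true, y_pred in records:
        if y_true == 1:
            pos[group] += 1
            if y_pred == 0:
                fn[group] += 1
    return {g: fn[g] / pos[g] for g in pos if pos[g]}

records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 0), ("B", 1, 0), ("B", 1, 1), ("B", 0, 0),
]
rates = false_negative_rates(records)
print(rates)  # {'A': 0.333..., 'B': 0.666...}
```

If one group's positives are missed markedly more often, as group B's are here, the model needs human review before it informs care decisions.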
We use this open-source tool to help assess models for bias and identify the impact on vulnerable groups. Organizations need to be vigilant in recognizing the limitations of incomplete data, and therefore, the limitations of AI. And they need to train decision-makers to be sensitive to these issues, especially in an industry where the purpose is to deliver care and support health for real people. Better understanding inequities and their connections to health is a first step toward addressing them. At Optum, we are conducting research to better understand the sources of systemic bias in health care delivery that impact outcomes — for example, race corrections in clinical guidelines. This knowledge helps us be better-informed consumers of AI-derived results and more aware of the sources and risks of perpetuating bias. As our health care becomes more digitized and data-based, more equitable outcomes will also be dependent on the geographic equity of digital connectivity. Expanding high-speed internet to underserved rural areas will enable easier connections. Easing state licensure requirements will also help, so that a patient in rural Alabama can use video to connect with his clinician, even if she is practicing medicine in New York City. A rising tide lifts all boats Just as AI-powered solutions can lead to more efficient, user-friendly experiences and better health outcomes, they can also produce a more equitable system. Today, experts in public health are using machine learning systems to help remove barriers to care in underserved communities. AI care coordination platforms are alerting care management teams within health plans and health systems about patients and populations that are in need, regardless of the zip code in which they live. Digital health apps on smartphones — which have become ubiquitous among low-income communities — are connecting people with programs and services they qualify for. 
They offer nudges to help them improve their behavior and their health. These solutions that use AI to create a path to better health equity are just a few examples of how AI is offering increased insights for health care leaders. As AI becomes more transparent, as data becomes more inclusive, and as decision-makers are better trained to address limitations in technology, the pursuit of health care’s Quadruple Aim will continue to accelerate. That means better performance for organizations across the health care sector — and better health outcomes for all of us. Dig deeper: Read more about health care executive attitudes about artificial intelligence, and its growing impact in health care, at optum.com/ai. Margaret (Meg) Good, PhD, Vice President, Optum Enterprise Analytics Dr. Margaret (Meg) Good specializes in health economics, health policy, and survey research methods. In her role, Dr. Good advises Optum businesses on how to use analytics and artificial intelligence to achieve strategic objectives for their products and services. Prior to joining Optum, she was a faculty member in the Department of Public Policy at the University of Maryland, Baltimore County where she taught courses in health policy and research methods. She also worked at the University of Minnesota in a research collaborative that helped states expand access to health insurance and health coverage among disadvantaged populations. Dr. Good earned her PhD and MS in health services research and policy at the University of Minnesota and her undergraduate degree at Williams College. Kerrie Holley, Senior Vice President and Technology Fellow, Optum Kerrie Holley joined Optum as its first technology fellow, focused on advancing the enterprise’s capabilities in AI, machine learning, deep learning, graph technologies, the Internet of Things, blockchain, virtual assistants and genomics. Prior to Optum, Holley was the VP and CTO of analytics and automation at Cisco. 
He spent the bulk of his career at IBM where he was a fellow and master inventor, focused on scalable services and cognitive computing. Holley was IBM’s first African American distinguished engineer and a member of the Academy of Technology comprising the top 300 technologists. He holds a JD in law and a Bachelor’s in mathematics from DePaul University. Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected]. "
13,671
2,021
"Telemedicine and chatbots are using data to transform health care | VentureBeat"
"https://venturebeat.com/2021/02/01/telemedicine-and-chatbots-are-using-data-to-transform-healthcare"
"Telemedicine and chatbots are using data to transform health care Among the many transformations accelerated by COVID-19, health care ranks at the top of the list. An industry that had been changing at a plodding pace before 2020 has been forced to rapidly embrace advances like telemedicine and health chatbots on a far greater scale to navigate the crisis. As health care providers adopt these tools, they are receiving a wealth of new patient data that is creating new challenges and opportunities. On the front lines between patients and doctors, the companies driving these products are betting that they are part of a broader revolution that will place data at the heart of everyday treatment. “We call it digital primary care,” said Nick Desai, CEO of telemedicine platform Heal. “There is still an irreplaceable value to the human-doctor patient interaction. What we want to do is give doctors data-driven decision support.” Data-driven medicine Health care was already facing pressure to reinvent itself before the pandemic. 
A number of trends — such as population growth, longer lifespans, more complicated health issues, and doctor shortages — were among the factors contributing to higher health care costs and strains on the system. At the same time, a number of digital trends had begun to collide. These include telemedicine platforms, connected health and fitness-monitoring gadgets, and chatbots, which had all steadily increased the amount of digitized health data being produced. Telemedicine and chatbots got a nice boost when the Centers for Medicare & Medicaid Services expanded reimbursement for remote services such as telemedicine in 2019. A Stanford University 2020 report published before the pandemic explored the rise of the data-driven physician. Among the factors that seemed to help this trend was the 21st Century Cures Act, passed and signed into law in December 2016. The law set out new data-sharing rules for Electronic Health Record (EHR) systems. However, it was only last year that the U.S. Office of Management and Budget (OMB) finished defining the rules that would expand patient access to medical records, establish data standards, and enable more interoperability between EHR systems. “For an industry that has long struggled with low levels of information sharing and poor interoperability across its technology systems, in 2020 we expect to see the final rules create a seismic shift in how health care stakeholders share and interact with digital medical records,” the Stanford report reads. “The rise of the Data-Driven Physician is a sign that the entire health care market is now grappling with the practical application of data and new technologies.” Then came the pandemic. Digital health Even with this forward momentum, many in the medical community were reluctant to embrace these tools. 
But with the onset of the pandemic, opposition melted away as hospitals became either overwhelmed or simply unsafe to visit. Hospitals increasingly turned to companies like U.K.-based Babylon Health, which offers services such as video consultations and the ability to report illnesses to providers. The company saw usage soar at the onset of the pandemic, and in May 2020 it launched in its first U.S. market. Sweden’s video consultation platform Kry also launched in the U.S. last spring to address surging demand. Doctolib, a Paris-based company that offered online booking for medical appointments in France and Germany, had just launched its video feature before the pandemic took hold. Doctolib saw the number of daily video consultations jump from 1,000 pre-pandemic to 100,000 in the first few months of the outbreak. The French government has now authorized it to be one of the main platforms for booking COVID-19 vaccination appointments. After years of gradual progress, telemedicine and chatbots became overnight successes during the pandemic. According to the recent State of Healthcare report from research firm CB Insights, telehealth (which includes telemedicine and chatbots) became a centerpiece of executive discussions during earnings calls as companies considered how to provide services to employees. And funding for telemedicine startups soared. Chatbots When it comes to chatbots, Ada Health’s combination of artificial intelligence and human doctors had made it a rising star even before the coronavirus began its global spread. The company had spent years developing a platform that allowed patients to input their symptoms so the AI could sort through its databases and either give responses or make a referral to a doctor. Anyone can download the Ada app for free. To ascertain their symptoms, users are asked a series of questions Ada’s algorithm personalizes based on the responses from each user. 
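A toy sketch of that response-driven flow (hypothetical questions and outcomes for illustration only, not Ada's proprietary medical logic) might look like:

```python
# Toy symptom-triage flow: each answer selects the next question.
# Questions and outcomes are hypothetical, not real medical advice.
QUESTIONS = {
    "start": ("Do you have a fever?",
              {"yes": "cough", "no": "pain"}),
    "cough": ("Do you also have a persistent cough?",
              {"yes": "OUTCOME: consider respiratory screening",
               "no": "OUTCOME: monitor at home"}),
    "pain":  ("Is the main symptom localized pain?",
              {"yes": "OUTCOME: book a GP appointment",
               "no": "OUTCOME: provide general self-care advice"}),
}

def triage(answers):
    """Walk the question graph using a dict of pre-supplied answers."""
    node = "start"
    while not node.startswith("OUTCOME"):
        question, branches = QUESTIONS[node]
        node = branches[answers[question]]
    return node

print(triage({"Do you have a fever?": "yes",
              "Do you also have a persistent cough?": "no"}))
# OUTCOME: monitor at home
```

In a real system the question graph would be learned and vastly larger; the point is only that each answer prunes what gets asked next, which is what makes the history-taking feel personalized.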
The app then suggests possible health issues and proposes next steps, such as making an appointment or going to an emergency room. The app replaces the often tedious work of taking a patient’s history, which can be a big time-saver for doctors and nurses. Ada’s revenue comes via partnerships with health providers who integrate Ada into their early screening systems. According to Ada cofounder and chief medical officer Dr. Claire Novorol, the company’s consumer app now has 11 million users. While Ada had already seen rapid growth prior to 2020, last year it enabled 5.5 million assessments, or about 25% of all assessments since its app launched in 2016. Novorol said that during the first phase of the pandemic, when users needed more trustworthy health advice, Ada launched a dedicated COVID-19 assessment and screener to support individuals and health organizations. This screener has since been adopted and integrated into health organizations around the world. According to Novorol, increased adoption is generating more transparent and consent-driven health data collection, and finding ways to share that data will improve digital health services, as well as overall medical quality. This includes capturing data from a wider range of people who might not typically go to a physician. Novorol said Ada’s access to global aggregated and pseudonymized health data holds the potential to unlock real-time insights that provide additional breadth and depth to treatments. “In the long-term, not only can health data improve public health and medical quality, but it also has significant potential when it comes to personalization in health care,” Novorol said. “I believe personalized, tailored experiences will be essential to the future of health care — and data will be a key part of that.” Telemedicine Heal’s Desai is also bullish about the potential for all this data to drastically improve health care. 
The company’s telemedicine platform was initially designed to allow doctors to speak with patients from home. The theory was that seeing patients in their normal setting would be more convenient and give doctors insight into any home conditions that might impact a patient’s health. Desai outlined four levels of data that can potentially impact health care, with Heal currently delivering the first three. The first level is real-time data that can be provided by simple actions, like a parent taking their kid’s temperature and then sending it directly to the doctor via Heal’s service. The second level is continuous monitoring of patients via those aforementioned connected devices. Along the way, Heal has developed a suite of tools that allow doctors to continually monitor chronic patients from a distance, including factors like blood pressure, blood sugar, heart rate, and pulse. That allows physicians to monitor trends in patients’ health status, rather than recording occasional data or relying on patient reporting. Those trends are more powerful for diagnosing a patient because it’s hard to know if a single measure is typical or not. In this case, the doctor can take corrective actions when the trend line seems troubling and more urgent interventions when something seems acute. “If the doctor knows how your blood pressure’s been doing over the last month, or how your blood sugar has been doing over the last month, that’s very helpful to the doctor to make a more accurate diagnosis,” Desai said. “An average patient is not a good historian of their own health. This way, we keep them out of the hospital, but we’re using that data to more proactively deliver care.” The third level is looking at the totality of all the data being captured from a patient. This allows for more contextual decisions by looking at a wide range of factors and how they are impacting each other. However, it’s the fourth level that has Desai particularly excited. 
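The second level's trend-over-snapshot idea can be sketched as a simple rolling comparison against the patient's own baseline (illustrative thresholds and data, not Heal's actual monitoring logic):

```python
# Sketch of trend-based monitoring: flag a patient when the recent
# average of a vital sign drifts above their own baseline, rather than
# reacting to any single reading. Thresholds are illustrative only.
def trend_alert(readings, window=7, drift=0.10):
    """readings: chronological systolic BP values. Returns True if the
    mean of the last `window` readings exceeds the baseline mean
    (all earlier readings) by more than `drift` (here 10%)."""
    if len(readings) <= window:
        return False  # not enough history to establish a baseline
    baseline = sum(readings[:-window]) / (len(readings) - window)
    recent = sum(readings[-window:]) / window
    return recent > baseline * (1 + drift)

stable = [122, 118, 125, 121, 119, 123, 120, 124, 122, 121, 119, 123, 120, 122]
rising = [120, 122, 119, 121, 123, 120, 118, 131, 135, 138, 140, 142, 139, 141]
print(trend_alert(stable))  # False: normal fluctuation around baseline
print(trend_alert(rising))  # True: sustained upward drift
```

A single reading of 131 might be dismissed as noise; it is the sustained deviation from the patient's own history that triggers a closer look.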
The company is currently working with university researchers to develop predictive medicine. This work involves trying to identify what data is useful, how to process it, and what conclusions can actually be made. He estimates such services are at least 12 to 15 months away. The company is proceeding cautiously because the stakes are enormous. “The key is having it be absolutely accurate enough that the machine’s trend lines are indicative of reality,” Desai said. “Because the moment you make decisions based on machines, they’ve got to be good decisions.” Even if the company cracks the formula, other hurdles remain. If a doctor can say with a high degree of certainty that a patient will develop a severe illness later in life, it might make sense to consider a preventive procedure. But while that decision might make sense at the time, it could lead to regret later if a treatment or cure for that same illness is developed many years later. “Those are the kinds of things at an ethical level and at a practical level and at a cost level that become factors,” Desai said. “What is the insurance company willing to pay for the level of knowledge? It’s not just the science that has to advance, it’s also the business of health care, the insurance of health, ethical decisions, therapeutics, and treatment.” But he added: “This is the holy grail. That machine-driven decision support, that’s the future for us.” 
"
13,672
2,021
"We're two steps away from democratized, on-demand health care | VentureBeat"
"https://venturebeat.com/2021/02/01/were-two-steps-away-from-democratized-on-demand-health-care"
"Guest: We’re two steps away from democratized, on-demand health care Consumers are getting used to and expecting more and more control over their services. Disruptive companies like Uber, Grubhub, and Instacart have shifted the paradigm: a consumer no longer must seek out and go to a service; rather, the service makes itself available immediately and comes directly to the consumer. This paradigm shift expanded choice and put more control of the service in the consumer’s hand — all the while notifying them with updates and changes to the service in real time. Consumers no longer need to check into hotels or carry their room key. They no longer need to go to the check-in counter at the airport or wait in line at the car rental kiosk. The technology has reached a point where it can be utilized by non-technical consumers to receive extremely convenient, robust, and integrated services. 
This shift in the way these services are accessed has been generally referred to as the “democratization of goods and services.” So where is healthcare in this evolution? Finding and utilizing healthcare services can be a vague and foggy proposition for most patients. There is an almost impossible mix of insurance networks, hospital networks, private providers, pharmacies, equipment suppliers, and labs that the patient must navigate. But we can do better. Over the last two decades, the technology has certainly developed to a place where healthcare can be democratized in the way other, much simpler services have already (see the Timeline below). Three main components are necessary for democratized, on-demand healthcare to happen: Standard data formats so that disparate systems can communicate with one another. A cost effective, highly scalable, robust backend infrastructure to handle the amount of necessary data securely and safely. A trusted distribution model that allows consumers to access and pay for the service in a familiar way. We should see these components rolled out in 2 steps. Step 1: Consolidation Healthcare records have to be consolidated to provide patients with easier and faster access to all their medical information. One of the biggest players in this space is Apple, which works with healthcare institutions to build an application that consolidates patients’ medical records. Institutions can register with Apple to build out the interoperability and interface of their apps. Then, they can offer patients the ability to download and register for the app, which provides a one-stop-shop for all medical records. The biggest hurdle that needs to be overcome is the diversity of ways in which healthcare institutions send data. 
“Health Level Seven” (HL7) provides a standardized data format, which helps with consolidating records, but that doesn’t mean providers can build out one interface for all types of records. There are still different versions of HL7 that different institutions use. A number of companies besides Apple are working to solve pieces of this problem, but they are limited either to particular health care systems or providers who specifically sign up for the service. Real democratization will start to happen when these services are combined and are agnostic of specific software systems and hospital networks, or can go beyond groups of participating providers who sign up for the service. That can only happen when a company has the massive amount of resources needed to integrate all of the disparate systems. The tools and technology are now available, but there is a lot more work to be done. Another major hurdle is patient trust. Patients will be hesitant to allow consolidation of their medical records because they don’t want their information to be too connected. They’ll fear not only data breaches but also their private information being used for marketing or research purposes, especially since medical information is so sensitive. To alleviate patient concerns, providers must give patients a clear privacy policy, with the ability to opt in or out of having outside companies access their data. Step 2: Visibility Another key step is expanding the visibility of healthcare institutions so that patients can easily view services, pricing, and availability. A mobile app, for example, can allow providers to sign up on its network. Patients can put their ailment in the app and see all the providers in their area offering services for that ailment, along with pricing and availability. Through this app, patients will have more visibility and control over finding the right provider. This technology will also drive market prices down because it’ll encourage transparent competition. 
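That search-and-compare experience reduces, at its core, to a filter-and-sort over provider listings. A minimal sketch with hypothetical data and field names:

```python
# Sketch of the "visibility" step: filter providers by ailment, then
# sort by price so patients can comparison-shop. Data is hypothetical.
PROVIDERS = [
    {"name": "Westside Clinic", "ailment": "sprained ankle", "price": 180, "next_slot": "today"},
    {"name": "Downtown Ortho",  "ailment": "sprained ankle", "price": 140, "next_slot": "tomorrow"},
    {"name": "Lakeview Derm",   "ailment": "rash",           "price": 95,  "next_slot": "today"},
]

def find_providers(ailment, listings):
    """Return matching providers, cheapest first."""
    matches = [p for p in listings if p["ailment"] == ailment]
    return sorted(matches, key=lambda p: p["price"])

for p in find_providers("sprained ankle", PROVIDERS):
    print(f'{p["name"]}: ${p["price"]} ({p["next_slot"]})')
# Downtown Ortho: $140 (tomorrow)
# Westside Clinic: $180 (today)
```

The hard part in practice is not the query but getting institutions to publish pricing and availability in a comparable format at all, which is why the consolidation step comes first.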
One can imagine how beneficial this technology will be in helping with health crises such as the one the world is experiencing currently with COVID-19. Knowing all the available options for testing, getting test results immediately on your phone, being notified of when and where to get vaccinated, and proving negative tests and receipt of vaccinations are very real challenges that we are facing today. The technology is ready to make all this as easy as summoning an Uber driver for the patient. We are on the cusp of the democratization of healthcare. It is not only possible but hugely beneficial. It will alleviate the stress of navigating the healthcare system, give the patient more choice in service and cost, and help drive healthcare costs down overall by driving more competition in the marketplace. Damon Altomare is Chief Technology Officer of VIP StarNetwork, which is changing how the film industry offers healthcare benefits and increases healthcare access. "
13,673
2,020
"AI, automation, and the cybersecurity skills gap | VentureBeat"
"https://venturebeat.com/2020/02/11/ai-and-the-cybersecurity-skills-gap"
"Feature: AI, automation, and the cybersecurity skills gap The cybersecurity skills shortage is well documented, but the gap seems to be widening. The 2019 Cybersecurity Workforce study produced by nonprofit (ISC)² looked at the cybersecurity workforce in 11 markets. The report found that while 2.8 million people currently work in cybersecurity roles, an additional 4 million were needed — a third more than the previous year — due to a “global surge in hiring demand.” As companies battle a growing array of external and internal threats, artificial intelligence (AI), machine learning (ML), and automation are playing increasingly large roles in plugging that workforce gap. But to what degree can machines support and enhance cybersecurity teams, and do they — or will they — negate the need for human personnel? 
These questions permeate most industries, but the cost of cybercrime to companies, governments, and individuals is rising precipitously. Studies indicate that the impact of cyberattacks could hit a heady $6 trillion by 2021. And the costs are not only financial. As companies harness and harvest data from billions of individuals, countless high-profile data breaches have made privacy a top concern. Reputations — and in some cases people’s lives — are on the line. Against that backdrop, the market for software to protect against cyberattacks is also growing. The global cybersecurity market was reportedly worth $133 billion in 2018, and that could double by 2024. The current value of the AI-focused cybersecurity market, specifically, is pegged at around $9 billion, and it could reach $38 billion over the next six years. We checked in with key people from across the technology spectrum to see how the cybersecurity industry is addressing the talent shortage and the role AI, ML, and automation can play in these efforts. The ‘click fraud czar’ “I think the concern around the cybersecurity skills gap and workforce shortfall is a temporary artifact of large companies scrambling to try to recruit more people to perform the same types of ‘commodity’ cybersecurity activities — for example, monitoring security logs and patching vulnerabilities,” said Shuman Ghosemajumder, a former Googler who recently served as chief technology officer at cybersecurity unicorn Shape Security. Ghosemajumder compares this to “undifferentiated heavy lifting,” a term first coined by Amazon’s Jeff Bezos to describe the traditional time-consuming IT tasks companies carry out that are important but don’t contribute a great deal to the broader mission. 
Bezos was referring to situations like developers spending 70% of their time working on servers and hosting, something Amazon sought to address with Amazon Web Services (AWS). Similar patterns could emerge in the cybersecurity realm, according to Ghosemajumder. “Any time companies are engaged in ‘undifferentiated heavy lifting,’ that points to the need for a more consolidated, services-based approach,” he said. “The industry has been moving in that direction, and that helps significantly with the workforce shortfall — companies won’t need to have such large cybersecurity teams over time, and they won’t be competing for the exact same skills against one another.” Above: Shuman Ghosemajumder Ghosemajumder was dubbed the “click fraud czar” during a seven-year stint at Google that ended in 2010. He developed automated techniques and systems to combat automated (and human-assisted) “click fraud,” when bad actors fraudulently tap on pay-per-click (PPC) ads to increase site revenue or diminish advertisers’ budgets. Manually reviewing billions of transactions on a daily basis would be impossible, which is why automated tools are so important. It’s not about combating a workforce shortfall per se; it’s about scaling security to a level that would be impossible with humans alone. Ghosemajumder said the most notable evolution he witnessed with regard to AI and ML was in offline “non-real-time” detection. “We would zoom out and analyze the traffic of an AdSense site, or thousands of AdSense sites, over a longer time period, and anomalies and patterns would emerge [that] indicated attempts to create click fraud or impression fraud,” he continued. “AI and ML were first hugely beneficial, and then became absolutely essential in finding that activity at scale so that our teams could determine and take appropriate action in a timely fashion. 
And even taking appropriate action was a fully automated process most of the time.” In 2012, Ghosemajumder joined Shape Security, which reached a $1 billion valuation late last year and was gearing up for an IPO. Instead, networking giant F5 came along last month and bought the company for $1 billion , with Ghosemajumder now serving as F5’s global head of AI. Shape Security focuses on helping big businesses (e.g., banks) prevent various types of fraud — such as “imitation attacks,” where bots attempt to access people’s accounts through credential stuffing. The term, first coined by Shape Security cofounder Sumit Agarwal, refers to attempts to log into someone’s account using large lists of stolen usernames and passwords. This is another example of how automation is increasingly being used to combat automation. Many cyberattacks center around automated techniques that prod online systems until they find a way in. For example, an attacker may have an arsenal of stolen credit card details, but it would take too long to manually test each one. Instead, an attacker performs a check once and then trains a bot to carry out that same approach on other card details until they have discovered which ones are usable. Just as it’s relatively easy to carry out large-scale cyberattacks through imitation and automation, Shape Security uses automation to detect such attacks. Working across websites, mobile apps, and any API endpoint, Shape Security taps historical data, machine learning, and artificial intelligence to figure out whether a “user” is real, employing signals such as keystrokes, mouse movements, and system configuration details. If the software detects what it believes to be a bot logging into an account, it blocks the attempt. While we’re now firmly in an era of machine versus machine cyberwarfare, the process has been underway for many years. 
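Shape Security has not published its models, but the behavioral-signal idea can be illustrated with a toy heuristic in Python: human typists produce irregular gaps between keystrokes, while naively scripted input is close to uniform. The function name and threshold below are hypothetical, and a real system would combine many such signals in a trained model rather than rely on one rule.

```python
import statistics

def looks_like_bot(keystroke_times_ms, min_jitter_ms=15.0):
    """Toy behavioral check: flag input whose inter-keystroke gaps
    are suspiciously uniform. Purely illustrative; production systems
    combine many signals (mouse paths, device config) in ML models."""
    if len(keystroke_times_ms) < 3:
        return False  # too little signal to judge either way
    gaps = [b - a for a, b in zip(keystroke_times_ms, keystroke_times_ms[1:])]
    # Humans rarely type with near-constant rhythm; scripts often do.
    return statistics.stdev(gaps) < min_jitter_ms

print(looks_like_bot([0, 50, 100, 150, 200]))   # scripted rhythm: True
print(looks_like_bot([0, 120, 190, 420, 500]))  # human-like jitter: False
```

The point is not the specific threshold but the shape of the defense: cheap per-request signals, scored automatically, so no human ever has to inspect an individual login attempt.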
“Automation was used 20-plus years ago to start to generate vast quantities of email spam, and machine learning was used to identify it and mitigate it,” Ghosemajumder explained. “[Good actors and bad actors] are both automating as much as they can, building up DevOps infrastructure and utilizing AI techniques to try to outsmart the other. It’s an endless cat-and-mouse game, and it’s only going to incorporate more AI approaches on both sides over time.” To fully understand the state of play in AI-powered security, it’s worth stressing that cybersecurity spans many industries and disciplines. According to Ghosemajumder, fraud and abuse are far more mature in their use of AI and ML than approaches like vulnerability searching. “One of the reasons for this is that the problems that are being solved in those areas [fraud and abuse] are very different from problems like identifying vulnerabilities,” Ghosemajumder said. “They are problems of scale, as opposed to problems of binary vulnerability. In other words, nobody is trying to build systems that are 100% fraud proof, because fraud or abuse is often manifested by ‘allowed’ or ‘legitimate’ actions occurring with malicious or undesirable intent. You can rarely identify intent with infallible accuracy, but you can do a good job of identifying patterns and anomalies when those actions occur over a large enough number of transactions. So the goal of fraud and abuse detection is to limit fraud and abuse to extremely low levels, as opposed to making a single fraud or abuse transaction impossible.” Machine learning is particularly useful in such situations — where the “haystack you’re looking for needles in,” as Ghosemajumder puts it, is vast and requires real-time monitoring 24/7. Curiously, another reason AI and ML evolved more quickly in the fraud and abuse realm may be down to industry culture. 
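Ghosemajumder’s point that fraud shows up as “patterns and anomalies” across a large enough number of transactions can be sketched with a minimal outlier check. This is an illustrative toy, not any vendor’s method: it uses a robust median-based score so a single extreme account cannot hide by dragging the population average up with it.

```python
import statistics

def flag_outlier_accounts(clicks_per_account, threshold=3.5):
    """Toy fraud screen: score every account against the population
    and surface extreme outliers. Uses a median/MAD score so one
    huge outlier cannot mask itself by inflating the mean."""
    counts = list(clicks_per_account.values())
    median = statistics.median(counts)
    mad = statistics.median(abs(n - median) for n in counts)
    if mad == 0:
        return []  # no spread at all; nothing stands out
    # 0.6745 rescales MAD so the score is comparable to a z-score.
    return [acct for acct, n in clicks_per_account.items()
            if 0.6745 * (n - median) / mad > threshold]

traffic = {"a": 102, "b": 98, "c": 101, "d": 97, "e": 5000}
print(flag_outlier_accounts(traffic))  # ['e']
```

As the article notes, the goal is not to make a single fraudulent transaction impossible but to drive the aggregate rate toward zero, which is exactly what population-level scoring like this is suited to.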
Fraud and abuse detection wasn’t always associated with cybersecurity; those spheres once operated separately inside most organizations. But with the rise of credential stuffing and other attacks, cybersecurity teams became increasingly involved. “Traditionally, fraud and abuse teams have been very practical about using whatever works, and success could be measured in percentages of improvement in fraud and abuse rates,” Ghosemajumder said. “Cybersecurity teams, on the other hand, have often approached problems in a more theoretical way, since the vulnerabilities they were trying to discover and protect against would rarely be exploited in their environment in ways they could observe. As a result, fraud and abuse teams started using AI and ML more than 10 years ago, while cybersecurity teams have only recently started adopting AI- and ML-based solutions in earnest.” For now, it seems many companies use AI as an extra line of defense to help them spot anomalies and weaknesses, with humans on hand to make the final call. But there are hard limits to how many calls humans are able to make in a given day, which is why the greatest benefit of cybersecurity teams using AI and humans in tandem could simply be to ensure that machines improve over time. “The optimal point is often to use AI and automation to keep humans making the maximum number of final calls every day — no more, but also no less,” Ghosemajumder noted. “That way you get the maximum benefit from human judgment to help train and improve your AI models.” Facebook-sized problems “Scalability” is a theme that permeates any discussion around the role of AI and ML in giving cybersecurity teams sufficient resources. As one of the world’s biggest technology companies, this is something Facebook knows only too well. 
Dan Gurfinkel is a security engineering manager at Facebook, supporting a product security team that is responsible for code and design reviews, scaling security systems to automatically detect vulnerabilities, and addressing security threats in various applications. According to Gurfinkel’s experiences at Facebook, the cybersecurity workforce shortfall is real — and worsening — but things could improve as educational institutions adapt their offerings. “The demand for security professionals, and the open security roles, are rising sharply, often faster than the available pool of talent,” Gurfinkel told VentureBeat. “That’s due in part to colleges and universities just starting to offer courses and certification in security. We’ve seen that new graduates are getting more knowledgeable year over year on security best practices and have strong coding skills.” But is the skills shortage really more pronounced in cybersecurity than in other fields? After all, the tech talent shortage spans everything from software engineering to AI. In Gurfinkel’s estimation, the shortfall in cybersecurity is indeed more noticeable than in other technical fields, like software engineering. “In general, I’ve found the number of software engineering candidates is often much larger than those who are specialized in security, or have a special expertise within security, such as incident response or computer emergency response [CERT],” he said. It’s also worth remembering that cybersecurity is a big field requiring a vast range of skill sets and experience. “For mid-level and management roles, in particular, sometimes the candidate pool can be smaller for those who have more than five years of experience working in security,” Gurfinkel added. 
“Security is a growing field that’s becoming more popular, so I would expect that to change in the future.” Facebook is another great example of how AI, ML, and automation are being used not so much to overcome gaps in the workforce but to enable security on a scale that would otherwise be impossible. With billions of users across Facebook, Messenger, Instagram, and WhatsApp, the sheer size and reach of the company’s software makes it virtually impossible for humans alone to keep its applications secure. Thus, AI and automated tools become less about plugging workforce gaps and more about enabling the company to keep on top of bugs and other security issues. This is also evident across Facebook’s broader platform, with the social networking giant using AI to automate myriad processes, from detecting illegal content to assisting with translations. Facebook also has a record of open-sourcing AI technology it builds in-house, such as Sapienz , a dynamic analysis tool that automates software testing in a runtime environment. In August 2019, Facebook also announced Zoncolan , * a static analysis tool that can scan the company’s 100 million lines of code in less than 30 minutes to catch bugs and prevent security issues from arising in the first place. It effectively helps developers avoid introducing vulnerabilities into Facebook’s codebase and detect any emerging issues, which, according to Facebook, would take months or years to do manually. “Most of our work as security engineers is used to scale the detection of security vulnerabilities,” Gurfinkel continued. “We spend time writing secure frameworks to prevent software engineers from introducing bugs in our code. We also write static and dynamic analysis tools, such as Zoncolan, to prevent security vulnerabilities earlier in the development phase.” In 2018, Facebook said Zoncolan helped identify and triage well over 1,000 critical security issues that required immediate action. 
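Zoncolan itself is proprietary and scans a codebase of 100 million lines, but the basic shape of a static check, inspecting code without ever executing it, fits in a few lines of Python using the standard ast module. The “dangerous call” list below is purely illustrative; real tools like Zoncolan track how untrusted data flows through a program rather than matching names.

```python
import ast

DANGEROUS_CALLS = {"eval", "exec"}  # illustrative rule set only

def find_risky_calls(source):
    """Tiny static check in the spirit of (but far simpler than)
    industrial tools: walk the parsed syntax tree without running
    the code and report the line numbers of risky call sites."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in DANGEROUS_CALLS):
            findings.append((node.func.id, node.lineno))
    return findings

snippet = "x = input()\nresult = eval(x)\n"
print(find_risky_calls(snippet))  # [('eval', 2)]
```

Because a check like this runs before any code executes, it can be wired into code review and flag issues directly to the author, which is the workflow Facebook describes.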
Nearly half of the issues were flagged directly to the code author without requiring a security engineer. Above: Facebook’s Zoncolan helped find and “triage” more than 1,000 critical security issues in 2018 alone This not only demonstrates how essential automation is in large codebases, it also illustrates ways it can empower software developers to manage bugs and vulnerabilities themselves, thus lightening security teams’ workloads. It also serves as a reminder that humans are still integral to the process, and likely will be long into the future, even as their roles evolve. “When it comes to security, no company can solely rely on automation,” Gurfinkel said. “Manual and human analysis is always required, be it via security reviews, partnering with product teams to help design a more secure product, or collaborating with security researchers who report security issues to us through our bug bounty program.” According to Gurfinkel, static analysis tools — that is, tools used early in the development process before the code is executed — are particularly useful for identifying “standard” web security bugs, such as OWASP’s top 10 vulnerabilities , as it can surface straightforward issues that need to be addressed immediately. This frees up human personnel to tackle higher priority issues. “While these tools help get things on our radar quickly, we need human analysis to make decisions on how we should address issues and come up with solutions for product design,” Gurfinkel added. (AI) security-as-a-service As BlackBerry has transitioned from phonemaker to enterprise software provider, cybersecurity has become a major focus for the Canadian tech titan, largely enabled by AI and automation. Last year , the company shelled out $1.4 billion to buy AI-powered cybersecurity platform Cylance. BlackBerry also recently launched a new cybersecurity research and development (R&D) business unit that will focus on AI and internet of things (IoT) projects. 
BlackBerry is currently in the process of integrating Cylance’s offerings into its core products, including its Unified Endpoint Management (UEM) platform that protects enterprise mobile devices, and more recently its QNX platform to safeguard connected cars. With Cylance in tow, BlackBerry will enable carmakers and fleet operators to automatically verify drivers, address security threats, and issue software patches. This integration leans on BlackBerry’s CylancePersona, which can identify drivers in real time by comparing them with a historical driving profile. It looks at things like steering, braking, and acceleration patterns to figure out who is behind the wheel. This could be used in multiple safety and security scenarios, and BlackBerry envisages the underlying driving pattern data also being used by commercial fleets to detect driver fatigue, enabling remote operators to contact the driver and determine whether they need to pull off the road. Above: BlackBerry and Cylance bring driver verification to automobiles Moreover, with autonomous vehicles gearing up for prime time, safety is an issue of paramount importance — and one companies like BlackBerry are eager to capitalize on. Back in 2016, BlackBerry launched the Autonomous Vehicles Innovation Centre (AVIC) to “advance technology innovation for connected and autonomous vehicles.” The company has since struck some notable partnerships, including with Chinese tech titan Baidu to integrate QNX with Baidu’s autonomous driving platform. Even though BlackBerry CEO John Chen believes autonomous cars won’t be on public roads for at least a decade, the company still has to plan for that future. Here again, the conversation comes back to cybersecurity and the tools and workforce needed to maintain it. 
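CylancePersona’s actual features and models are not public, but the driver-verification idea described above can be sketched as nearest-profile matching: compare observed telemetry against stored per-driver profiles and treat a poor match as an anomaly. All profiles, feature choices, and thresholds below are hypothetical, and a real system would normalize features so no single unit dominates the comparison.

```python
import math

# Hypothetical per-driver profiles: (avg braking g, avg acceleration g,
# steering corrections per km). Real telemetry would be normalized.
PROFILES = {
    "alice": (0.30, 0.25, 12.0),
    "bob":   (0.45, 0.40, 20.0),
}

def identify_driver(observed, max_distance=0.5):
    """Return the closest enrolled driver, or None when the observed
    behavior is far from every profile (an unenrolled driver,
    fatigue, or possible theft)."""
    name, profile = min(PROFILES.items(),
                        key=lambda kv: math.dist(observed, kv[1]))
    return name if math.dist(observed, profile) <= max_distance else None

print(identify_driver((0.31, 0.24, 12.2)))  # 'alice'
print(identify_driver((0.90, 0.95, 40.0)))  # None
```

The None branch is where the safety scenarios the article mentions would hook in: an unrecognized driving pattern could prompt a remote operator to contact the vehicle.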
Much as Facebook is scaling its internal security setup, BlackBerry is promising its business customers it can scale cybersecurity, improve safety, and enable services that would not be possible without automation. “AI and automation are more about scalability, as opposed to plugging specific skills gaps,” BlackBerry CTO Charles Eagan told VentureBeat. “AI is also about adding new value to customers and making things and enabling innovations that were previously not possible. For example, AI is going to be needed to secure an autonomous vehicle, and in this case it isn’t about scalability but rather about unlocking new value.” Similarly, AI-powered tools promise to free up cybersecurity professionals to focus on other parts of their job. “If we remove 99% of the cyberthreats automatically, we can spend much more quality time and energy looking to provide security in deeper and more elaborate areas,” Eagan continued. “The previous model of chasing AV (antivirus) patterns would never scale to today’s demands. The efficiencies introduced by quality, preventative AI are needed to simply keep up with the demand and prepare for the future.” Above: BlackBerry CTO Charles Eagan AI-related technologies are ultimately better than humans at tackling certain problems, such as looking at large data sets and spotting patterns and automating tasks. But people also have skills that are pretty difficult for machines to top. “The human is involved in more complex tasks that require experience, context, critical thinking, and judgement,” Eagan said. “The leading-edge new attacks will always require humans to triage and look for areas where machine learning can be applied. AI is very good at quantifying similarities and differences and therefore identifying novelties. 
Humans, on the other hand, are better at dealing with novelties, where they can combine experience and analytical thinking to respond to a situation that has not been seen before.” Learning curve Even with this symbiosis between humans and machines, the cybersecurity workforce shortfall is increasing — largely due to factors such as spreading internet connectivity, escalating security issues, growing privacy concerns, and subsequent demand spikes. And the talent pool, while expanding in absolute terms, simply can’t keep up with demand, which is why more needs to be done from an education and training perspective. “As the awareness of security increases, the shortage is felt more acutely,” Eagan said. “We as an industry need to move quickly to attack this issue on all fronts — a big part of which is sparking interest in the field at a young age, in the hope that by the time these same young people start looking at the next stage in their education, they gravitate to the higher education institutions out there that offer cybersecurity as a dedicated discipline.” For all the noise BlackBerry has been making about its investments in AI and security, it is also investing in the human element. It offers consulting services that include cybersecurity training courses , and it recently launched a campaign to draw more women into cybersecurity through a partnership with the Girl Guides of Canada. Similar programs include the U.S. Cyber Challenge ( USCC ), operated by Washington, D.C.-based nonprofit Center for Strategic and International Studies ( CSIS ), which is designed to “significantly reduce the shortage” in the cyber workforce by delivering programs to identify and recruit a new generation of cybersecurity professionals. This includes running competitions and cyber summer camps through partnerships with high schools, colleges, and universities. 
Above: USCC cyber camp Efforts to nurture interest in cybersecurity from a young age are already underway, but there is simultaneously a growing awareness that higher education programs geared toward putting people in technical security positions aren’t where they need to be. According to a 2018 report from the U.S. Departments of Homeland Security and Commerce, employers are “expressing increasing concern about the relevance of certain cybersecurity-related education programs in meeting the real needs of their organization,” with “educational attainment” serving as a proxy for actual applicable knowledge, skills, and abilities (KSAs). “For certain work roles, a bachelor’s degree in a cybersecurity field may or may not be the best indicator of an applicant’s qualifications,” the report noted. “The study team found many concerns regarding the need to better align education requirements with employers’ cybersecurity needs and how important it is for educational institutions to engage constantly with industry.” Moreover, the report surfaced concerns that some higher education cybersecurity courses concentrated purely on technical knowledge and skills, with not enough emphasis on “soft” skills, such as strategic thinking, problem solving, communications, team building, and ethics. Notably, the report also found that some of the courses focused too much on theory and too little on practical application. For companies seeking personnel with practical experience, a better option could be upskilling — ensuring that existing security workers are brought up to date on the latest developments in the security threat landscape. With that in mind, Immersive Labs , which recently raised $40 million from big-name investors including Goldman Sachs, has set out to help companies upskill their existing cybersecurity workers through gamification. 
Immersive Labs was founded in 2017 by James Hadley, a former cybersecurity instructor for the U.K.’s Government Communications Headquarters (GCHQ), the country’s intelligence and security unit. The platform is designed to help companies engage their security workforce in practical exercises — which may involve threat hunting or reverse-engineering malware — from a standard web browser. Immersive Labs is all about using real-world examples to keep things relatable and current. Above: Taking a cybersecurity skills test in Immersive Labs While much of the conversation around AI seems to fall into the “humans versus machines” debate, that isn’t helpful when we’re talking about threats on a massive scale. This is where Hadley thinks Immersive Labs fills a gap — it’s all about helping people find essential roles alongside the automated tools used by many modern cybersecurity teams. “AI is indeed playing a bigger role in the security field, as it is in many others, but it’s categorically not a binary choice between human and machine,” Hadley told VentureBeat in an interview last year. “AI can lift, push, pull, and calculate, but it takes people to invent, contextualize, and make decisions based on morals. Businesses have the greatest success when professionals and technologies operate cohesively. AI can enhance security, just as [AI] can be weaponized, but we must never lose sight of the need to upskill ourselves.” Other companies have invested in upskilling workers with a proficiency in various technical areas — Cisco, for example, launched a $10 million scholarship to help people retrain for specific security disciplines. Shape Security’s Ghosemajumder picked up on this, noting that some companies are looking to retrain technical minds for a new field of expertise. 
“Many companies are not trying to hire cybersecurity talent at all, but instead find interested developers, often within the company, and train them to be cybersecurity professionals — if they are interested, which many are these days,” Ghosemajumder explained. There is clearly a desire to get more people trained in cybersecurity, but one industry veteran thinks other factors limit the available talent pool before the training process even begins. Winn Schwartau is founder of the Security Awareness Company and author of several books — most recently Analogue Network Security , in which he addresses internet security with a “mathematically based” approach to provable security. According to Schwartau, there is a prevailing misconception about who makes a good cybersecurity professional. Referring to his own experiences applying for positions with big tech companies back in the day , Schwartau said he was turned down for trivial reasons — once for being color-blind, and another time for not wanting to wear a suit. Things might not be quite the same as they were in the 1970s, but Schwartau attributes at least some of today’s cybersecurity workforce problem to bias about who should be working in the field. “In 2012, when then-Secretary for Homeland Security Janet Napolitano said ‘We can’t find good cybersecurity people,’ I said, that’s crap — that’s just not true,” Schwartau explained. “What you mean is you can’t find lily-white, perfect people who have never done anything wrong, who meet your myopic standards of ‘normal,’ and who don’t smoke weed. No wonder you can’t find talent. But the worst part is, we don’t have great training grounds for the numbers of people who ‘want in’ to security. Training is expensive, and we are training on the wrong topics.” Will the shortfall get worse? “Much worse, especially as anthro-cyber-kinetic (human, computer, physical) systems are proliferating,” Schwartau continued. 
“Without a strong engineering background, the [software folks] don’t ‘get’ the hardware, and the [hardware folks] don’t ‘get’ the AI, and no one understands dynamic feedback systems. It’s going to get a whole lot worse.” Above: Winn Schwartau Schwartau isn’t alone in his belief that the cybersecurity workforce gap is something of an artificial construct. Fredrick Lee has held senior security positions at several high-profile tech companies over the past decade, including Twilio, NetSuite, Square, and Gusto — and he also thinks the “skills shortage” is more of a “creativity problem” in hiring. “To close the existing talent gap and attract more candidates to the field, we need to do more to uncover potential applicants from varied backgrounds and skill sets, instead of searching for nonexistent ‘unicorn’ candidates — people with slews of certifications, long tenures in the industry, and specialized skills in not one, but several, tech stacks and disciplines,” he said. What Lee advocates is dropping what he calls the “secret handshake society mindset” that promotes a lack of diversity in the workforce by deterring potential new entrants. Automation for the people Schwartau is also a vocal critic of AI on numerous grounds, one being the lack of explainability. Algorithms may give different results on different occasions to resolve the same problem — without explaining why. “We need to have a mechanism to hold them accountable for their decisions, which also means we need to know how they make decisions,” he said. While many companies deploy AI as an extra line of defense to help them spot threats and weaknesses, Schwartau fears that removing the checks and balances human beings provide could lead to serious problems down the line. “Humans are lazy, and we like automation,” he said. “I worry about false positives in an automated response system that can falsely indict a person or another system. 
I worry about the ‘We have AI, let the AI handle it’ mindset from vendors and C-suiters who are far out of their element. I worry that we will have increasing faith in AI over time. I worry we will migrate to these systems and not design a graceful degradation fallback capability to where we are now.” Beyond issues of blind faith, companies could also be swept up by the hype and hoodwinked into buying inferior AI products that don’t do what they claim to. “My biggest fear about AI as a cybersecurity defense in the short term is that many companies will waste time by trying half-baked solutions using AI merely as a marketing buzzword, and when the products don’t deliver results, the companies will conclude that AI/ML itself as an approach doesn’t work for the problem, when in fact they just used a poor product,” Ghosemajumder added. “Companies should focus on efficacy first rather than looking for products that have certain buzzwords. After all, there are rules-based systems, in cybersecurity and other domains, that can outperform badly constructed AI systems.” It’s worth looking at the role that rules-based automated tools — where AI isn’t part of the picture — play in plugging the cybersecurity skills gap. After all, the end goal is ultimately the same. Not enough humans to do the job? Here’s some technology that can fill the void. Dublin-based Tines is one company that’s setting out to help enterprise security teams automate repetitive workflows. For context, most big companies employ a team of security professionals to detect and respond to cyberattacks — typically aided by automated tools such as firewalls and antivirus software. However, these tools create a lot of false alarms and noise, so people need to be standing by to dig in more deeply. 
With Tines, security personnel can prebuild what the company calls “automation stories.” These can be configured to carry out a number of steps after an alert is triggered — doing things like threat intelligence searches or scanning for sensitive data in GitHub source code, such as passwords and API keys. The repository owner or on-call engineer can then be alerted automatically (e.g., through email or Slack). In short, Tines saves a lot of repetitive manual labor, leaving security personnel to work on more important tasks — or go home at a reasonable hour. This is a key point, given that burnout can exacerbate the talent shortfall, either through illness or staff jumping ship. Tines CEO and cofounder Eoin Hinchy told VentureBeat that “ 79% of security teams are overwhelmed by the volume of alerts they receive. [And] security teams are spending more and more time performing repetitive manual tasks.” Above: Tines cofounders Eoin Hinchy (left) and Thomas Kinsella (right) In terms of real-world efficacy, Tines claims that one of its Fortune 500 customers saves the equivalent of 70 security analyst hours a week through a single automation story that automates the collection and enrichment of antivirus alerts. “This kind of time-saving is not unusual for Tines customers and is material when you consider that most Tines customers will have about a dozen automation stories providing similar time-savings,” Hinchy continued. Tines also helps bolster companies’ cybersecurity capabilities by empowering non-coding members of the team. Anyone — including security analysts — can create their own automations (similar to IFTTT ) through a drag-and-drop interface without relying on additional engineering resources. “We believe that users on the front line, with no development experience, should be able to automate any workflow,” Hinchy said. 
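As a rough sketch of such an “automation story,” the hypothetical Python snippet below reacts to a trigger (say, a new commit), scans a file for secret-shaped strings, and hands each finding to a pluggable notifier. Tines’ actual product is a no-code drag-and-drop platform, so none of the names or patterns here come from it; the regexes are illustrative, whereas real secret scanners ship large vendor-specific rule sets plus entropy checks.

```python
import re

# Illustrative patterns only, not a production rule set.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(r"\bapi_key\s*=\s*['\"][A-Za-z0-9]{20,}['\"]"),
}

def run_automation_story(file_name, contents, notify):
    """Sketch of one 'automation story': on a trigger (e.g. a new
    commit), scan the file for secrets and route every finding to a
    notifier, which in a real deployment might post to Slack or
    email the on-call engineer."""
    alerts = []
    for label, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(contents):
            alert = {"file": file_name, "type": label, "match": match.group()}
            alerts.append(alert)
            notify(alert)
    return alerts

found = run_automation_story(
    "config.py",
    "api_key = 'abcd1234abcd1234abcd1234'\n",
    notify=lambda alert: print("ALERT:", alert["type"]),
)
```

Each run replaces a chunk of the repetitive triage work the article describes, which is where the claimed savings of dozens of analyst hours per week come from.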
Above: Tines is a code-free “drag-and-drop” platform for automating repetitive tasks Hinchy also touched on a key issue that could make manually configured automation more appealing than AI in some cases: explainability. As Schwartau noted, a human worker can explain why they carried out a particular task the way they did, or arrived at a certain conclusion, but AI algorithms can’t. Rules-based automated tools, on the other hand, just do what their operator tells them to — there is no “ black box ” here. “Our customers really care about transparency when implementing automation. They want to know exactly why Tines took a particular decision in order to develop trust in the platform,” Hinchy added. “The black box nature of AI and ML is not conducive to this.” Other platforms that help alleviate cybersecurity teams’ workload include London-based Snyk, which last month raised $150 million at a $1 billion valuation for an AI platform that helps developers find and fix vulnerabilities in their open source code. “With Snyk, security teams offer guidance, policies, and expertise, but the vast majority of work is done by the development teams themselves,” Snyk cofounder and president Guy Podjarny told VentureBeat. “This is a core part of how we see dev-first security: security teams modeling themselves after DevOps, becoming a center of excellence building tools and practices to help developers secure applications as they build it, at their pace. We believe this is the only way to truly scale security, address the security talent shortage, and improve the security state of your applications.” The state of play The importance of AI, ML, and automation in cybersecurity is clear — but it’s often less about plugging skills gaps than it is about enabling cybersecurity teams to provide real-time protection to billions of end users. With bad actors scaling their attacks through automation, companies need to adopt a similar approach. 
But humans are a vital part of the cybersecurity process, and AI and other automated tools enable them to do their jobs better while focusing on more complex or pressing tasks that require experience, critical thinking, moral considerations, and judgment calls. Moreover, threats are constantly growing and evolving, which will require more people to manage and build the AI systems in the first place.

“The sheer number of cybersecurity threats out there far exceeds the current solution space,” BlackBerry’s Eagan said. “We will always need automation and more cybersecurity professionals creating that automation. Security is a cat and mouse game, and currently more money is spent in threat development than in protection and defense.”

Companies also need to be wary of “AI” as a marketing buzzword. Rather than choosing a poor product that either doesn’t do what it promises or is a bad fit for the job, they can turn to simple automated systems.

“For me, machines and automation will act as a mechanism to enhance the efficiency and effectiveness of teams,” Tines’ Hinchy said. “Machines and humans will work together, with machines doing more of the repetitive, routine tasks, freeing up valuable human resources to innovate and be creative.”

Statistical and anecdotal evidence tends to converge around the idea of the cybersecurity workforce gap, but there is general optimism that the situation will correct itself in time — through a continued shift toward a more “consolidated, services-based” cybersecurity approach, as Ghosemajumder put it, as well as by improving education for young people and upskilling and retraining existing workers.

“The workforce is getting larger in absolute terms,” Ghosemajumder said. “There is greater interest in cybersecurity and more people going into it, in the workforce, as well as in schools, than ever before. When I studied computer science, there were no mandatory security courses.
When I studied management, there were no cybersecurity courses at all. Now, cybersecurity is one of the most popular subjects in computer science programs, and is taught in most leading business schools and law schools.”

* Post updated 02/12/20 to clarify that Zoncolan has not been open-sourced.

© 2023 VentureBeat. All rights reserved.
AI can be an ally in cybersecurity | VentureBeat
https://venturebeat.com/2020/02/11/ai-can-be-an-ally-in-cybersecurity
Fears surrounding AI and cybersecurity reflect very real risks. AI-powered malware isn’t a threat we need to worry about right now, but attackers have become adept at manipulating AI systems to their own advantage, essentially turning them against users. Widespread manipulation of the algorithms used on social media is already causing problems in many parts of the world. And as sophisticated AI tools become freely available, it would be naive not to expect adversaries to take advantage of the technology.

But for now, we suspect that threat actors are using AI in rather indirect ways, such as for data analysis or by using tools to produce fake content. So although there are clear reasons for concern, AI is arguably more of a help to cybersecurity defenders than a threat, for the time being.
AI’s limitations

As AI and machine learning are complex, and often loosely defined, a lot of the fear comes from misunderstanding what the technology is and what it can do. For example, we’re decades away from seeing anything like artificial general intelligence (AGI) — a machine or system that can learn to do any task a human can — let alone a sentient AI. Even though we’ve never seen the AI field advance as quickly as it has recently, the first plans to build an artificial human brain date back to the 1950s.

Today, intelligent systems have specific and narrow applications. These are everywhere around us — you see them when you drive into a car park and your license plate is read automatically, and you hear them when you speak to Siri or Alexa and they’re able to understand what you’re saying.

The most common example of this kind of narrow application of machine learning is Google search. You don’t even need to type more than a couple of letters before — as if by magic — Google seems to intuit what you’re looking for. But while that kind of intelligent algorithm is excellent at what it does, it can only do that one thing — a search system won’t know how to drive a car.

In narrow applications, computers are already a million times better than humans. And while people versus machine comparisons carry a certain amount of drama, interactions between the two are actually business as usual in many domains, including cybersecurity. Cybersecurity products and services have used AI components since at least 2005. Every single day, in homes and workplaces across the globe, cyberdefense systems (including spam filters, antivirus engines, heuristic intrusion detection mechanisms, endpoint detection and response solutions, and more) cross swords with countless human adversaries. And these AI-based defenses win more fights than they lose.
AI’s use in actual attacks, on the other hand, is largely indirect. There’s no AI-powered malware in the wild. AI could certainly be used to run attacks that learn and morph, but any such examples currently reside within academic research or science fiction. Attackers are definitely trying to abuse the AI systems used by defenders, but they are not yet creating their own. So the cybersecurity fight is about people protecting people from other people. And in spite of the popular AI-as-adversary narrative, AI is a natural ally to the cybersecurity industry and will likely continue to be so in the near future.

Machines are a natural complement to our strengths

Some of AI’s biggest successes, at least in the security field, involve handling tasks that humans find difficult. Data analysis is a prime example of an application where machine intelligence has become invaluable. A normal laptop can produce well over 1 million “events” in a single day. Asking a person or team of people to sort through these events to find a small handful of anomalies that could indicate a potential attack is far too taxing in most cases. But humans can effectively solve this problem by training AI models to flag anomalies so analysts can address them.

Cybersecurity professionals have applied this hand-in-glove approach to working with AI for well over a decade. It has proven to be effective in tasks such as sample analysis, URL categorization, malware classification, and breach detection. These are areas where the industry has successfully capitalized on the unique strengths of AI and machine learning to stop countless numbers of potential security incidents. And human-AI collaboration will become even more widespread and important in the future. The work to understand, appreciate, and nurture machine intelligence as entirely different from human intellect is a largely untapped frontier in AI research.
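The triage problem described above, millions of routine events hiding a handful of anomalies, is often illustrated with simple statistics before any model training enters the picture. A minimal sketch using a mean-plus-three-standard-deviations threshold (the event counts are invented):

```python
import statistics

def flag_anomalies(event_counts, k=3.0):
    """Flag positions whose event count sits more than k standard
    deviations above the mean, a crude stand-in for a trained anomaly model."""
    mean = statistics.mean(event_counts)
    stdev = statistics.stdev(event_counts)
    return [i for i, c in enumerate(event_counts)
            if stdev > 0 and (c - mean) / stdev > k]

# 23 quiet hours of roughly 100 events each, plus one burst of activity.
counts = [100, 103, 98, 101, 99, 102, 100, 97, 104, 100, 101, 99,
          100, 102, 98, 103, 100, 99, 101, 100, 5000, 102, 98, 100]
suspicious = flag_anomalies(counts)
```

Real systems learn far richer notions of “normal” per host and per user, but the shape of the workflow is the same: the machine narrows millions of events down to a short list a human analyst can actually review.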
And teaching human cybersecurity professionals to embrace machine intelligence as a means of augmenting their own capabilities will give the cybersecurity industry a clear vision for how to utilize AI effectively.

In the near future, social, economic, and political considerations will play an increasingly important role in shaping AI’s net impact on security. Collaborations between people and AI have already yielded substantial benefits for cybersecurity, and will likely continue to do so. With massive investments in AI and the limited number of people with the skills to drive them, there’s very little motivation for talented AI professionals to turn to crime to earn money. Right now, they can make a very comfortable living without breaking laws.

Historically, defenders have benefited more from AI than attackers, and there are many forces pulling the balance of power in that direction. But it’s important to keep in mind that our adversaries and allies are the people that work with AI. We are the ghosts in the machines. And acknowledging that is vital for our continued success.
How AI is fighting, and could enable, ransomware attacks on cities | VentureBeat
https://venturebeat.com/2020/02/11/how-ai-is-fighting-and-could-enable-ransomware-attacks-on-cities
Imagine getting to a courthouse and seeing paper signs stuck to the doors with the message “Systems down.” What about police officers in the field unable to access information on laptops in their vehicles, or surgeries delayed in hospitals? That’s what can happen to a city, police department, or hospital in a ransomware attack.

Ransomware is malicious software that can encrypt or control computer systems. Criminals who launch these attacks can then refuse to return access until they get paid. Before 2019, ransomware was perhaps best known for targeting businesses and individuals. Attacks against Travelex, oil and gas companies, shipping giant Maersk, and industrial control systems led to hundreds of millions of dollars in losses in recent years.
But increasingly, cities, public utilities, and public-facing institutions are also being targeted. As attacks increase, a growing number of security experts are using AI to improve the effectiveness of their malware attack defenses. But there’s also concern that criminals will begin using AI to weaponize ransomware and plot more efficient attacks.

Vulnerable targets

Analysis by security firm Emsisoft found that in 2019 alone, roughly 85 schools or universities, about 100 local and state governments, and more than 700 health care providers suffered ransomware attacks. That doesn’t include the recent attacks on Texas school districts that lost $2.3 million, or an attack that led the city of New Orleans to declare a state of emergency. New Orleans mayor LaToya Cantrell said costs from the ransomware attack exceed the city’s $3 million cyber insurance policy.

Ransomware attacks on public-facing institutions are particularly concerning because, unlike an individual or business, debilitated cities, schools, and hospitals threaten public safety and essential services. Emsisoft’s report said that in 2019, ransomware interrupted 911 services, delayed surgical procedures, and made it tough for emergency response officials to access medical files, scan employee badges, and view outstanding warrants.

Even without the benefit of AI-powered ransomware, cybercriminals are doing plenty of damage, and the cost and frequency of attacks are on the rise. Baltimore spent $18 million to address damages from a 2019 attack. Before that, an attack on the city of Atlanta reportedly cost about $17 million in recovery, damages, and other losses. According to analysis by Barracuda Networks, small municipalities are particularly vulnerable, as nearly half of attacked cities in 2019 had a population of 50,000 people or less.
The analysis also found that two-thirds of ransomware attacks in 2019 were aimed at government organizations. Looking back, it can sometimes seem as if ransomware attacks on cities came out of nowhere, but Malwarebytes Labs director Adam Kujawa says the trend dates back to the end of 2017, when WannaCry, Petya, and NotPetya redefined what’s possible for malware. These attacks were able to encrypt data and spread to networks across the globe, something he said opened cybercriminals’ eyes to new possibilities. A worm meant for a Ukrainian utility company spread worldwide like a digital dirty bomb, causing up to $10 billion in damages.

Then in late 2018, Malwarebytes saw attacks involving EmoTet, which steals credentials to spread malware through a spam module and then uses malicious software like TrickBot to move laterally and infect a network. “From there, we just started to see more and more and more of that particular attack method, and then modifications to that attack method, and evolutions of that attack method, and that’s basically been status quo ever since,” Kujawa said.

Barracuda Networks says email is the most common way attackers access city systems, followed by PDFs and Microsoft Office documents. Phishing emails and documents are sometimes designed to fit in among the kinds of emails and documents a city typically receives, like invoices or shipping notices.

“Vulnerability is a technical debt, and in many ways [it] cannot be closed and cannot be solved,” said Barracuda Networks CTO Fleming Shi. “So I think that’s a key reason why they’re being target[ed].
I think it’s also instrumental in test-driving potential attacks in an election year, because [attackers] don’t have to disrupt all the cities, they just have to disrupt some of the important cities to basically — in the election process — cause a major havoc for all of us.”

Kujawa said the evolution of these tools and higher returns on investment from other attacks have shifted more criminal activity toward governmental institutions. He noted that city services and hospitals are becoming bigger targets because they contain so much personally identifiable information (PII) and need to function in order to serve society. Cities are known for their slower-than-average adoption of new technology, including the kinds of software updates meant to patch the latest vulnerabilities. They are also unlikely to have cybersecurity experts on staff and may have a culture that fails to take cybersecurity seriously.

Criminal tactics also appear to be escalating. Rather than just threatening to encrypt files and limit access, attackers are now threatening to post files online. “It may very well become kind of standard operating procedure to start threatening the release of internal documents and customer information out into the open net, which would turn a ransomware attack into a full-blown data breach. And that would cause a lot more problems for the organization dealing with the infection,” Kujawa said.

Ransom payments — which are typically requested in Bitcoins — are also going up. Malwarebytes found that the typical ransoms attackers demanded from governments and schools in 2019 rocketed up from around $1,000 to over $40,000 by the end of the year. Security firm Coveware puts the average ransom over $80,000 in Q4 2019. Another concern is that groups carrying out ransomware attacks are beginning to sell software that allows criminals with less technical knowledge to launch their own attacks — what Kujawa and Shi call ransomware-as-a-service.
“It’s almost an economy on its own,” Shi said.

A ransomed city’s missteps

Among the most high-profile, expensive, and enduring examples of how bad the situation can get are the two ransomware attacks Baltimore suffered within the span of a little over a year. The second occurred in May 2019, and by the time it was over the city had lost nearly $18 million.

There’s debate over whether cities should pay ransom demands. Kujawa said Baltimore made a mistake in not paying the ransom, but he stopped short of prescribing a general policy. “The day when we can say with 100% certainty, ‘Do not pay the ransom,’ that it’s a bad idea … I don’t say that anymore,” Kujawa said.

Events in the summer of 2019 reinforced the division over whether to pay ransoms. In June, the Florida cities of Lake City and Riviera Beach paid ransoms of about $500,000 and $600,000, respectively. By contrast, nearly two dozen cities in Texas were hit in a collective attack in August 2019, but none of them paid ransoms.

Some cities try to take proactive measures against potential ransomware losses by purchasing cyber insurance coverage. In the wake of attacks in New Orleans, Cantrell said the city plans to raise its cyber insurance coverage from $3 million to $10 million, while the Baltimore Board of Estimates approved a $20 million cybersecurity policy in October 2019. Kujawa said cyber insurance takes the problem out of the hands of someone who’s never encountered ransomware and turns it over to people who deal with it all the time. “That being said, there are plenty of scammers out there, and companies who claim they can do this.
It’s obviously difficult to tell who’s above board and who isn’t, but I definitely think [cybersecurity insurance] serves a purpose in our society today and will be more valuable in the future, as long as it doesn’t exist just to inflate costs of remediation.”

Regardless, Shi noted that it’s unwise for cities to announce that they have cyber insurance — a mistake he said Baltimore made. “It just invites larger ransoms and kind of feeds the beast,” he said. Ransomware trends could be exacerbated by the fact that few perpetrators of attacks against public-facing institutions have been brought to justice.

How AI protects against ransomware attacks

To protect against the spread of ransomware, security software uses AI to detect, isolate, and delete infected files. Security software can use unsupervised machine learning to create AI models that are trained by data sets to recognize the difference between clean and malicious files. Natural language processing (NLP) and computer vision aid in the detection of anomalous behavior in emails or documents. Microsoft is using monotonic models that run on top of traditional classification models and catch 95% of malicious software. The technique was developed by UC Berkeley AI researchers and is used to look for malicious file attributes, rather than a combination of good and bad files for training.

A report by cybersecurity firm Capgemini found that artificial intelligence is helping the industry move faster and focus on its biggest problems. Three out of four security professionals surveyed say AI reduces time to detect malware, and two out of three say it lowers the cost of responding to a breach. And antivirus and security firms are increasingly adopting AI. About one in five security organizations used AI before 2019, but two out of three plan to incorporate the technology in 2020.
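The monotonic idea mentioned above can be sketched in a few lines: if every feature is a potentially malicious attribute with a non-negative weight, adding attributes can only raise the score, so an attacker cannot lower a verdict by padding a file with benign-looking content. The attribute names, weights, and threshold below are invented for illustration and are not Microsoft's actual model:

```python
# Non-negative weights over malicious attributes only (hypothetical values).
WEIGHTS = {
    "packed_executable": 0.4,
    "disables_backups": 0.5,
    "contacts_known_c2": 0.6,
    "mass_file_encryption": 0.8,
}
THRESHOLD = 1.0

def malice_score(attributes):
    """Monotonic by construction: adding an attribute never lowers the score."""
    return sum(WEIGHTS.get(a, 0.0) for a in attributes)

def is_malicious(attributes):
    return malice_score(attributes) >= THRESHOLD

benign = is_malicious({"packed_executable"})          # score 0.4, below threshold
ransomware = is_malicious({"disables_backups",
                           "mass_file_encryption"})   # score above threshold
```

The design choice is the point: because benign-looking features carry no negative weight, the model can't be talked out of a conviction by extra "good" evidence, which is exactly the evasion trick standard classifiers are vulnerable to.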
How AI may fuel ransomware attacks

The fact that spear phishing is still a primary method of delivering malware, Kujawa said, shows how susceptible people still are to the kind of trickery that sometimes lands in their email inbox. It’s also a reflection of the fact that today’s ransomware campaigns do not appear to need help from AI. Malwarebytes and Barracuda Networks have yet to witness AI in ransomware in the wild. Analysis by Malwarebytes that examines the potential weaponization of malware predicts ransomware with AI won’t be seen in the wild for another one to three years.

At present, Kujawa said he’s mostly concerned with the idea of AI that can profile the best people to target in an organization. AI could also discover paths for spreading malware to a great number of machines around the world and become ammunition in an AI arms race. Such methods could utilize the kind of vulnerabilities specific security vendors detect or train models to detect soft areas for attack.

“Some researchers have done lab tests and created in-house AI malware. It’s certainly a possible thing, but how we’re going to actually see it, how often we see it, is really what concerns me the most,” Kujawa said. “I really do see AI and machine learning being used for grabbing data from leaks, or from social media or from anywhere else to create profiles of particular users or your ideal victim profile. You can use all that information to create far more efficient spear phishing against businesses or anybody else you want.”

Where things could be headed

“Hopefully, lessons learned in 2018 and 2019 will manifest into actual greater security in 2020 for these organizations, but we know that’s probably not the case across the board. [The attacks] are going to get worse,” Shi said. He predicts that in 2020, small towns in swing states may see more attacks by nation-state actors as a way to discover vulnerabilities ahead of the U.S. presidential election in November.
“The ones that matter in an [electoral] decision sometimes will become the target,” Shi said. “My point there is I don’t feel like we are ready for the election year with the proper defense.”

Kujawa thinks we’re unlikely to see these kinds of attacks on small cities in swing states because there are subtler ways to test systems. However, he shares Shi’s concern that cities and public-facing institutions could see a rise in ransomware attacks carried out by nation-states in the future because their motivations extend beyond financial extortion.

“We’ve seen a lot more activity by nation-state actors over the last few years, and a lot of them, especially, from certain Eastern European countries, where it’s not obvious that it’s a nation-state or state-sponsored attackers behind these things,” he said. “We’re seeing more attacks that could be disguised, or a red herring that indicates this is being done by a cybercrime group or some kid in a basement or something like that, when in reality, it’s Russia, it’s China, it’s North Korea who are doing things to harass or send messages or just poke around and see what’s possible.”

Indeed, a number of nation-states have already contributed to the growth of ransomware in the world today, forming the foundation of events Kujawa calls instrumental to the ransomware status quo. WannaCry, which caused an estimated $4 billion to $8 billion in damages worldwide, was built on an exploit leaked by the Shadow Brokers, actors thought to be associated with the Russian government. That exploit, EternalBlue, stolen from an NSA-linked hacking group, exposed a vulnerability in the Windows operating system that hackers used in the WannaCry, Petya, and NotPetya attacks. The Trump administration called NotPetya “the most destructive and costly cyberattack in history,” and it resulted in U.S. Treasury Department sanctions on the Russian government in 2018. The U.S. Treasury Department unveiled sanctions for NotPetya together with sanctions for interference in the 2016 presidential election.

Kujawa is encouraged to see that security experts are now more aware of the capabilities of criminal syndicates and the prevalence of ransomware software. More cities are beginning to implement best practices to put PII behind another layer of technology and establish protocol for IT first steps when an attack happens. He added that security firms like Barracuda Networks and Malwarebytes are using AI to better detect ransomware like SamSam, Ryuk, RobbinHood, and LockerGoga.

“We’re moving in that direction, and a lot of the industries in security are moving in that direction as well. It really is going to have to be an AI versus AI thing,” he said. “If the cybercriminals actually start utilizing this stuff, we need to be able to stop threats before they hit, and we have to be able to stop threats without even knowing they exist yet.”
Is AI cybersecurity's salvation or its greatest threat? | VentureBeat
https://venturebeat.com/2020/02/11/is-ai-cybersecuritys-salvation-or-its-greatest-threat
If you’re uncertain whether AI is the best or worst thing to ever happen to cybersecurity, you’re in the same boat as experts watching the dawn of this new era with a mix of excitement and terror.

AI’s potential to automate security on a broader scale offers a welcome advantage in the short term. Yet unleashing a technology designed to eventually take humans out of the equation as much as possible naturally gives the industry some pause. There is an undercurrent of fear about the consequences if things run amok or attackers learn to make better use of the technology.

“Everything you invent to defend yourself can also eventually be used against you,” said Geert van der Linden, an executive vice president of cybersecurity for Capgemini.
“This time does feel different, because more and more, we are losing control as human beings.”

In VentureBeat’s second quarterly special issue, we explore this algorithmic angst across multiple stories, looking at how important humans remain in the age of AI-powered security, how deepfakes and deep media are creating a new security battleground even as the cybersecurity skills gap is a concern, how surveillance powered by AI cameras is on the rise, how AI-powered ransomware is rearing its head, and more.

Each evolution of computing in recent decades has brought new security threats and new tools to fight them. From networked PCs to cloud computing to mobile, the trend is always toward more data stored in ways that introduce unfamiliar vulnerabilities, larger attack vectors, and richer targets that attract increasingly well-funded bad actors. The AI security era is coming into focus quickly, and the design of these security tools, the rules that govern them, and the way they’re deployed carry increasingly high stakes. The race is on to determine whether AI will help keep people and businesses secure in an increasingly connected world or push us into the digital abyss.

Financial incentives

In a hair-raising prediction last year, Juniper Research forecast that the annual cost of data breaches will increase from $3 trillion in 2019 to $5 trillion in 2024. This will be due to a mix of fines for regulation violations, lost business, and recovery costs. But it will also be driven by a new variable: AI.

“Cybercrime is increasingly sophisticated; the report anticipates that cybercriminals will use AI, which will learn the behavior of security systems in a similar way to how cybersecurity firms currently employ the technology to detect abnormal behavior,” reads Juniper’s report.
“The research also highlights that the evolution of deepfakes and other AI-based techniques is also likely to play a part in social media cybercrime in the future.”

Given that every business is now a digital business to some extent, spending on infrastructure defense is exploding. Research firm Cybersecurity Ventures notes that the global cybersecurity market was worth $3.5 billion in 2014 but increased to $120 billion in 2017. It projects that spending will grow to an annual average of $200 billion over the next five years. Tech giant Microsoft alone spends $1 billion each year on cybersecurity. With projections of a 1.8 million-person shortfall for the cybersecurity workforce by 2022, this spending is due in part to the growing costs of recruiting talent. AI boosters believe the technology will reduce costs by requiring fewer humans while still making systems safe.

“When we’re running security operation centers, we’re pushing as hard as we can to use AI and automation,” said Dave Burg, EY Americas’ cybersecurity leader. “The goal is to take a practice that would normally maybe take an hour and cut it down to two minutes, just by having the machine do a lot of the work and decision-making.”

AI to the rescue

In the short term, companies are bubbling with optimism that AI can help them turn the tide against the mounting cybersecurity threat. In a report on AI and cybersecurity last summer, Capgemini reported that 69% of enterprise executives surveyed felt AI would be essential for responding to cyberthreats. Telecom led all other industries, with 80% of executives counting on AI to shore up defenses. Utilities executives were at the low end, with only 59% sharing that opinion. Overall bullishness has triggered a wave of investments in AI cybersecurity, to bulk up defenses, but also to pursue a potentially lucrative new market.
Early last year, Comcast made a surprise move when it announced the acquisition of BluVector, a spinoff of defense contractor Northrop Grumman that uses artificial intelligence and machine learning to detect and analyze increasingly sophisticated cyberattacks. The telecommunications giant said it wanted to use the technology internally, but also to continue developing it as a service it could sell to others. Subsequently, Comcast launched Xfinity xFi Advanced Security, which automatically provides security for all the devices in a customer’s home that are connected to its network. It created the service in partnership with Cujo AI, a startup based in El Segundo, California, that developed a platform to spot unusual patterns on home networks and send Comcast customers instant alerts. Cujo AI founder Einaras von Gravrock said the rapid adoption of connected devices in the home and the broader internet of things (IoT) has created too many vulnerabilities to be tracked manually or blocked effectively by conventional firewall software. His startup turned to AI and machine learning as the only option to fight such a battle at scale. Von Gravrock argued that spending on such technology is less a cost than a necessity: If a company like Comcast wants customers to adopt a growing range of services, including those arriving with the advent of 5G networks, the provider must be able to convince people those services are safe. “When we see the immediate future, all operators will have to protect your personal network in some way, shape, or form,” von Gravrock said. Capgemini’s aforementioned report found that overall, 51% of enterprises said they were heavily using some kind of AI for detection, 34% for prediction, and 18% to manage responses. Detection may sound like a modest start, but it’s already paying big dividends, particularly in areas like fraud detection. Paris-based Shift has developed algorithms that focus narrowly on weeding out fraud in insurance.
Shift’s service can spot patterns in data — such as contracts, reports, photos, and even videos that are processed by insurance companies. With more than 70 clients, Shift has amassed a huge amount of data that has allowed it to rapidly fine-tune its AI. The intended result is more efficiency for insurance companies and a better experience for customers, whose claims are processed faster. The startup has grown quickly after raising $10 million in 2016, $28 million in 2017, and $60 million last year. Cofounder and CEO Jeremy Jawish said the key was adopting a narrow focus in terms of what it wanted to do with AI. “We are very focused on one problem,” Jawish said. “We are just dealing with insurance. We don’t do general AI. That allows us to build up the data we need to become more intelligent.” The dark side While this all sounds potentially utopian, a dystopian twist is gathering momentum. Security experts predict that 2020 could be the year hackers really begin to unleash attacks that leverage AI and machine learning. “The bad [actors] are really, really smart,” said Burg of EY Americas. “And there are a lot of powerful AI algorithms that happen to be open source. And they can be used for good, and they can also be used for bad. And this is one of the reasons why I think this space is going to get increasingly dangerous. Incredibly powerful tools are being used to basically do the inverse of what the defenders [are] trying to do on the offensive side.” In an experiment back in 2016, cybersecurity company ZeroFox created an AI algorithm called SNAPR that was capable of posting 6.75 spear phishing tweets per minute that reached 800 people. Of those, 275 recipients clicked on the malicious link in the tweet. These results far outstripped the performance of a human, who could generate only 1.075 tweets per minute, reaching only 125 people and convincing just 49 individuals to click.
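The arithmetic behind those ZeroFox figures makes the machine's advantage concrete. A quick back-of-the-envelope sketch, using only the numbers reported above:

```python
# Figures from ZeroFox's 2016 SNAPR spear phishing experiment, as reported above.
machine = {"tweets_per_min": 6.75, "reached": 800, "clicked": 275}
human = {"tweets_per_min": 1.075, "reached": 125, "clicked": 49}

# Per-recipient conversion: the human lures actually convert slightly better...
machine_rate = machine["clicked"] / machine["reached"]   # ~34.4%
human_rate = human["clicked"] / human["reached"]         # ~39.2%

# ...but automation's raw throughput dwarfs that edge in total victims.
volume_ratio = machine["clicked"] / human["clicked"]     # ~5.6x

print(f"machine conversion: {machine_rate:.1%}")
print(f"human conversion:   {human_rate:.1%}")
print(f"victim multiple:    {volume_ratio:.1f}x")
```

In other words, the hand-crafted lures were marginally more convincing per recipient, but the automated campaign still produced more than five times as many total victims.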
Likewise, digital marketing firm Fractl demonstrated how AI could unleash a tidal wave of fake news and disinformation. Using publicly available AI tools, it created a website that includes 30 highly polished blog posts, as well as an AI-generated headshot for the non-existent author of the posts. And then there is the rampant use of deepfakes, which employ AI to match images and sound to create videos that in some cases are almost impossible to identify as fake. Adam Kujawa, the director of Malwarebytes Labs, said he’s been shocked at how quickly deepfakes have evolved. “I didn’t expect it to be so easy,” he said. “Some of it is very alarming.” In a 2019 report, Malwarebytes listed a number of ways it expects bad actors to start using AI this year. That includes incorporating AI into malware. In this scenario, the malware uses AI to adapt in real time if it senses any detection programs. Such AI malware will likely be able to target users more precisely, fool automated detection systems, and threaten even larger stashes of personal and financial information. “I should be more excited about AI and security, but then I look at this space and look at how malware is being built,” Kujawa said. “The cat is out of the bag. Pandora’s box has been opened. I think this technology is going to become the norm for attacks. It’s so easy to get your hands on and so easy to play with this.” Researchers in computer vision are already struggling to thwart attacks designed to disrupt the quality of their machine learning systems. It turns out that these learning systems remain remarkably easy to fool using “adversarial attacks.” External third parties can probe how a machine learning system works and then feed it subtly perturbed inputs that confuse the system and cause it to misidentify images. Even worse is that leading researchers acknowledge we don’t really have a solution for stopping mischief makers from wreaking havoc on these systems.
“Can we defend against these attacks?” asked Nicolas Papernot, an AI researcher at Google Brain, during a presentation in Paris last year. “Unfortunately, the answer is no.” Offense playing defense In response to possible misuse of AI, the cybersecurity industry is doing what it’s always done during such technology transitions — trying to stay one step ahead of malicious players. Back in 2018, BlackBerry acquired cybersecurity startup Cylance for $1.4 billion. Cylance had developed an endpoint protection platform that used AI to look for weaknesses in networks and shut them down if necessary. Last summer, BlackBerry created a new business unit led by its CTO that focuses on cybersecurity research and development (R&D). The resulting BlackBerry Labs has a dedicated team of 120 researchers. Cylance was a cornerstone of the lab, and the company said machine learning would be among the primary areas of focus. Following that announcement, in August the company introduced BlackBerry Intelligent Security, a cloud-based service that uses AI to automatically adapt security protocols for employees’ smartphones or laptops based on location and patterns of usage. The system can also be used for IoT devices or, eventually, autonomous vehicles. By instantly assessing a wide range of factors to adjust the level of security, the system is designed to keep a device just safe enough without always imposing the maximum security settings an employee might be tempted to circumvent. “Otherwise, you’re left with this situation where you have to impose the most onerous security measures, or you have to sacrifice security,” said Frank Cotter, senior vice president of product management at BlackBerry. “That was the intent behind Cylance and BlackBerry Labs, to get ahead of the malicious actors.” San Diego-based MixMode is also looking down the road and trying to build AI-based security tools that learn from the limitations of existing services.
According to MixMode CTO Igor Mezic, existing systems may have some AI or machine learning capability, but they still rely on rules that limit the scope of what they can detect and how they can learn, and they still require some human intervention. “We’ve all seen phishing emails, and they’re getting way more sophisticated,” Mezic said. “So even as a human, when I look at these emails and try to figure out whether this is real or not, it’s very difficult. So, it would be difficult for any rule-based system to discover, right? These AI methodologies on the attack side have already developed to the place where you need human intelligence to figure out whether it’s real. And that’s the scary part.” AI systems that still include some rules also tend to throw off a lot of false positives, leaving security teams overwhelmed and eliminating any initial advantages that came with automation, Mezic said. MixMode, which has raised about $13 million in venture capital, is developing what it describes as “third-wave AI.” In this case, the goal is to make AI security more adaptive on its own rather than relying on rules that need to be constantly revised to tell it what to look for. MixMode’s platform monitors all nodes on a network to continually evaluate typical behavior. When it spots a slight deviation, it analyzes the potential security risk and rates it from high to low before deciding whether to send up an alert. The MixMode system is always updating its baseline of behavior so no humans have to fine-tune the rules. “Your own AI system needs to be very cognizant that an external AI system might be trying to spoof it or even learn how it operates,” Mezic said. “How can you write a rule for that? That’s the key technical issue. The AI system must learn to recognize whether there are any changes on the system that feel like they’re being made by another AI system. Our system is designed to account for that. I think we are a step ahead.
So let’s try to make sure that we keep being a step ahead.” Yet this type of “unsupervised AI” starts to cross a frontier that makes some observers nervous. It will eventually be used not just in business and consumer networks, but also in vehicles, factories, and cities. As it takes on predictive duties and makes decisions about how to respond, such AI will balance factors like loss of life against financial costs. Humans will have to carefully weigh whether they are ready to cede such power to algorithms, even though they promise massive efficiencies and increased defensive power. On the other hand, if malicious actors are mastering these tools, will the rest of society even have a choice? “I think we have to make sure that as we use the technology to do a variety of different things … we also are mindful that we need to govern the use of the technology and realize that there will likely be unforeseen consequences,” said Burg of EY Americas. “You really need to think through the impact and the consequences, and not just be a naive believer that the technology alone is the answer.” "
13677
2020
"McAfee CTO: How AI is changing both cybersecurity and cyberattacks | VentureBeat"
"https://venturebeat.com/2020/02/11/mcafee-cto-how-ai-is-changing-both-cybersecurity-and-cyberattacks"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages McAfee CTO: How AI is changing both cybersecurity and cyberattacks Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Artificial intelligence is sweeping through almost every industry, layering a new level of intelligence on the software used for things like delivering better cybersecurity. McAfee, one of the big players in the industry, is adding AI capabilities to its own suite of tools that protect users from increasingly automated attacks. A whole wave of startups — like Israel’s Deep Instinct — have received funding in the past few years to incorporate the latest AI into security solutions for enterprises and consumers. But there isn’t yet a holy grail for protectors working to use AI to stop cyberattacks, according to McAfee chief technology officer Steve Grobman. Grobman has spoken at length about the pros and cons of AI in cybersecurity, where a human element is still necessary to uncover the latest attacks. 
One of the challenges of using AI to improve cybersecurity is that it’s a two-way street, a game of cat and mouse. If security researchers use AI to catch hackers or prevent cyberattacks, the attackers can also use AI to hide or come up with more effective automated attacks. Grobman is particularly concerned about the ability to use improved computing power and AI to create better deepfakes, which make real people appear to say and do things they haven’t. I interviewed Grobman about his views for our AI in security special issue. Here’s an edited transcript of our interview. Above: Steve Grobman, CTO of McAfee, believes in cybersecurity based on human-AI teams. VentureBeat: I did a call with Nvidia about their tracking of AI. They said they’re aware of between 12,000 and 15,000 AI startups right now. Unfortunately, they didn’t have a list of security-focused AI startups. But it seems like a crowded field. I wanted to probe a bit more into that from your point of view. What’s important, and how do we separate some of the reality from the hype that has created and funded so many AI security startups? Steve Grobman: The barrier to entry for using sophisticated AI has come way down, meaning that almost every cybersecurity company working with data is going to consider and likely use AI in one form or another. With that said, I think that hype and buzz around AI makes it so that it’s one of the areas that companies will generally call out, especially if they’re a startup or new company, where they don’t have other elements to base their technology or reputation [on] yet. It’s a very easy thing to do in 2019 and 2020, to say, “We’re using sophisticated AI capabilities for cybersecurity defense.” If you look at McAfee as an example, we’re using AI across our product line. We’re using it for classification on the back end.
We’re using it for detection of unknown malicious activity and unknown malicious software on endpoints. We’re using a combination of what we call human-machine teaming, security operators working with AI to do investigations and understand threats. We have to be ready for AI to be used by everyone, including the adversaries. VentureBeat: We always talked about that cat and mouse game that happens, when either side of the cyberattackers or defenders turns up the pressure. You have that technology race: If you use AI, they’ll use AI. As a reality check on that front, have you seen that happen, where attackers are using AI? Grobman: We can speculate that they are. It’s a bit difficult to know definitively whether certain types of attacks have been guided with AI. We see the results of what comes out of an event, as opposed to seeing the way it was put together. For example, one of the ways an adversary can use AI is to optimize which victims they focus on. If you think about AI as being good for classification problems, having a bad actor identify the most vulnerable victims, or the victims that will yield the highest return on investment — that’s a problem that AI is well-suited for. Part of the challenge is we don’t necessarily see how they select the victims. We just see those victims being targeted. We can surmise that because they chose wisely, they likely did some of that analysis with AI. But it’s difficult to assert that definitively. The other area that AI is emerging [in] is … the creation of content. One thing we’ve worried about in security is AI being used to automate customized phishing emails, so you basically have spear phishing at scale. You have a customized note with a much higher probability that a victim will fall for it, and that’s crafted using AI. Again, it’s difficult to look at the phishing emails and know if they were generated definitively by a human, or with help from AI-based algorithms. 
We clearly see lots going on in the research space here. There’s lots of work going on in autogenerating text and audio. Clearly, deepfakes are something we see a lot of interest in from an information warfare perspective. Above: Grobman did a demo of deepfakes at RSA in 2019. VentureBeat: That’s related to things like facial recognition security, right? Grobman: There are elements related to facial recognition. For example, we’ve done some research where we look at — could you generate an image that looks like somebody that’s very different [from] what a facial recognition system was trained on, and so fool the system into thinking that it’s that actual person that the system is looking for? But I also think there’s the information warfare side of it, which is more about convincing people that something happened — somebody said or did something that didn’t actually happen. Especially as we move closer to the 2020 election cycle, recognizing that deepfakes for the purpose of information warfare is one of the things we need to be concerned about. VentureBeat: Is that something for Facebook or Twitter to work on, or is there a reason for McAfee to pay attention to that kind of fraud? Grobman: There are a few reasons McAfee is looking at it. Number one, we’re trying to understand the state of the art in detection technology, so that if a video does emerge, we have the ability to provide the best assessment for whether we believe it’s been tampered with, generated through a deepfake process, or has other issues. There’s potential for other types of organizations, beyond social media, to have forensic capability. For example, the news media. If someone gives you a video, you would want to be able to understand the likelihood of whether it’s authentic or manipulated or fake. We see this all the time with fake accounts. Someone will create an account called “AP Newsx,” or they slightly modify a Twitter handle and steal images from the correct account. 
Most people, at a glance, think that’s the AP posting a video. The credibility of the organization is one thing that can lend credibility to a piece of content, and that’s why reputable organizations need tools and technology to help determine what they should believe as the ground truth, versus what they should be more suspicious of. VentureBeat: It’s almost like you’re getting ready for a day when deepfakes are used in some kind of breach because we’re getting used to the idea of virtual people. I went to a Virtual Beings Summit earlier this year, and it was all about creating artificial people that seem like they’re real. That includes things like virtual influencers that put on concerts in Japan. But using these for deception purposes is where it comes back to you … Grobman: That’s the interesting point. The same technology can be used for good and for evil objectives. If you can make a person look and sound authentic, you can think about good uses for that. Someone in late stages of Parkinson’s disease or another disorder that challenges their ability to speak — if you can provide them with technology that allows them to communicate with their loved ones, even in the late stages of a debilitating disease, that’s clearly a positive use of this technology. The flip side is having a CEO [appear to] make statements that their product is being recalled, or that earnings are at one level when they’re actually at a very different level, and making stock prices move on that information. That opens the avenue for all kinds of financial crimes, where instead of having to steal data, criminals can manipulate markets through misinformation. Above: The Virtual Beings Summit drew hundreds to Fort Mason in San Francisco. VentureBeat: You’ve identified something called “model hacking,” attacks on machine learning systems themselves? Grobman: We’re doing a lot of work on adversarial AI techniques and defenses. 
We’re getting ready for criminals to be using techniques that make AI models less effective. Some of the research we’re doing is to best understand how those adversarial techniques work, but then we’re also working on mitigations to make our models more robust and less susceptible to some of those capabilities. That’s a very active area of focus. "
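The "model hacking" Grobman describes can be illustrated with a toy example. For a linear model, the gradient of the score with respect to the input is just the weight vector, so nudging each feature a small step against the sign of its weight (the idea behind fast-gradient-sign attacks) can flip a confident decision. This is a minimal, hypothetical sketch with made-up weights and inputs, not McAfee's research code:

```python
# A toy linear classifier: score = sum(w_i * x_i) + b; positive => "benign".
w = [0.9, -0.6, 0.4]
b = 0.1
x = [0.5, 0.8, 0.3]                     # an input the model classifies correctly

def sign(v):
    return 1.0 if v > 0 else -1.0

def score(inp):
    return sum(wi * xi for wi, xi in zip(w, inp)) + b

clean = score(x)                        # ~0.19: confidently positive

# Adversarial nudge: step each feature slightly against the weight's sign,
# which is exactly the gradient direction for a linear model.
eps = 0.2
x_adv = [xi - eps * sign(wi) for wi, xi in zip(w, x)]

attacked = score(x_adv)                 # ~-0.19: same model, flipped decision
print(round(clean, 2), round(attacked, 2))
```

No feature moved by more than 0.2, yet the decision reversed. Deep networks are harder to attack than this toy, but the same gradient-following logic is what makes the adversarial defenses Grobman mentions such an active research area.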
13678
2020
"Podcast: Can AI fix broken IoT and smart home security? | VentureBeat"
"https://venturebeat.com/2020/02/11/podcast-can-ai-fix-broken-iot-and-smart-home-security"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Podcast: Can AI fix broken IoT and smart home security? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Podcast host John Koetsier sat down for an interview with Cujo AI VP Marcio Avillez to discuss the problem of smart device and IoT security and what we can do about it using AI technologies. Can AI help prevent distributed denial of service (DDoS) attacks and improve smart home security? Cujo AI says yes. The company recently inked a deal with Comcast to shield almost 20 million households from malware and spyware — and perhaps just as importantly, to protect the rest of the internet from insecure IoT devices on those homes’ local networks. How? By using machine learning on huge amounts of network data to build a graph of normal device traffic and tracking anomalies that could indicate hackers recruiting smart devices for botnets or other nefarious purposes. 
“We’re seeing IP cameras, network-attached storage, devices that have a little bit more CPU, a little bit more memory, that become kind of very useful tools for hackers to do the kinds of things that they want to do,” Cujo vice president Marcio Avillez said. “At some point, you’ve seen enough, and you say, ‘Okay, I know how that device behaves when it’s functioning normally on the network … I know what good looks like. By definition, any deviation from good is going to be bad.'” Though some DDoS attacks, like one that nearly took down GitHub a couple of years ago, are server-based, many are not. The infamous Mirai botnet used internet-connected cameras and home routers to launch attacks on websites and internet service providers. This was a case of weak consumer security threatening internet infrastructure and enterprise networks. Chipmaker Arm has attempted to fix this issue via certification, while Microsoft has concentrated on building a custom Linux kernel that is more resistant to attack. Cujo AI, however, is focusing on the layer that connects the smart home to the internet: the internet service provider. Anyone who has been to the annual Consumer Electronics Show (CES) knows that thousands of new devices appear every year. Most of them disappear almost as quickly, and there’s little to no way to certify how they were made, what code runs on them, and what precautions your average non-technical person should take before installing them on their home networks. “There was a manufacturer that created a network-attached storage device, and the way they implemented remote access into the device was leaving ports open and UPnP,” Avillez said, referring to the Universal Plug and Play connection standard. “Every single hacker in the world knows about this and is taking advantage of it.” And it doesn’t take much to sour a network.
Just one vulnerable device in one home out of 10,000 is enough to seriously ruin a network operator’s day — or build a significant botnet army. “We’re in this looking at a half a billion devices or so,” Avillez said. “What we found is despite there being … close to 20 million homes, there were about 50,000 of these devices that were driving 70% of the threat volume that we were detecting.” Why is AI so useful in identifying and protecting against that threat volume? The not-yet-known, the unclassified threats tend to be the most dangerous. “Sixty percent of the threats are things that we leverage some of the core traditional technology to identify … IP reputation lists [and] known bad websites,” Avillez said. “About 40% of the threats right now are … not going to be on a list.” You can subscribe to The AI Show podcast here. "
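The "learn what good looks like, then flag deviations" approach Avillez describes can be sketched in a few lines. This is a simplified, hypothetical illustration (invented traffic numbers, not Cujo AI's actual models): build a per-device baseline of outbound connections per hour, then flag any reading that sits far outside it.

```python
from statistics import mean, stdev

# Hypothetical hourly outbound-connection counts for one IP camera.
baseline = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15]

mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(reading, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from normal."""
    return abs(reading - mu) > threshold * sigma

print(is_anomalous(14))    # typical chatter: not flagged
print(is_anomalous(480))   # camera suddenly hammering the network: flagged
```

Production systems model far more signals (ports, destinations, timing, protocol mix) and keep re-fitting the baseline as behavior drifts, but the core idea is the same: deviation from a learned "good" is treated as suspect.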
13679
2020
"Real-world AI threats in cybersecurity aren't science fiction | VentureBeat"
"https://venturebeat.com/2020/02/11/real-world-ai-threats-in-cybersecurity-arent-science-fiction"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Real-world AI threats in cybersecurity aren’t science fiction Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. For some, fears of AI lie in images of robot overlords and self-aware malware — the stuff of science fiction. Among the many threats we will deal with in the coming years, sentient AI taking over the world isn’t one of them. But AI that empowers cybercriminals is a very serious reality, even as some espouse the benefits of AI in cybersecurity. Over the past decade, advances in technology have reduced the time criminals need to modify malware samples or identify vulnerabilities. Those tools have become readily available , leading to an increase in the development and distribution of “regular” threats, like adware, trojans, and ransomware. As a result, we’re going to see more — and more sophisticated — AI-empowered threats. 
The question is, will security controls that currently protect networks scale to match the flood of attacks? Microsoft and Google are just two of the companies developing application fuzzing tools — basically, automated vulnerability discovery — that use machine learning to find bugs in software before criminals do. But it’s not a reach to assume that an AI-empowered system could identify when and how one of its malware variants is being detected, then push that information to another system that can pump out new versions of the malware, with modifications to keep it undetected. This isn’t science fiction; it’s how malware authors operate today. And while the initial development of the malware executable is usually done manually, you can automate a system that quickly identifies how to modify the malware to best evade detection. The result is a malware family that appears unstoppable. For every malware variant that gets detected, another is quickly deployed to replace it — and modified to evade previous detections. AI against users, not just systems The softest targets for AI-empowered attacks are not necessarily vulnerable systems, but rather the human users behind those systems. For example, AI technology that can scrape personally identifiable information (PII) and gather social media information about a potential victim could help criminals craft social engineering efforts that go into more detail and are more convincing than anything we typically see from human attackers. Data scrapers are a type of software that navigates to websites and finds all relevant data hosted on that page. Data is then stored in a database, where it can be cataloged, organized, and analyzed by humans (or human-instructed software) to meet data collectors’ needs. It’s a common tactic used by everyone from intelligence analysts to advertisers.
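As a deliberately benign illustration of the scraping and cataloging step described above, here is a minimal sketch that pulls email addresses and phone-like strings out of a page's HTML. The page content, names, and patterns are invented for illustration; real scrapers fetch live pages and feed far richer extractors:

```python
import re

# A stand-in for a fetched public web page (invented example data).
html = """
<p>Contact: jane.doe@example.com or call 555-0123.</p>
<p>Press inquiries: press@example.org</p>
"""

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\b\d{3}-\d{4}\b")

# Catalog everything found into a simple profile record.
profile = {
    "emails": sorted(set(EMAIL_RE.findall(html))),
    "phones": PHONE_RE.findall(html),
}
print(profile)
```

The point of the passage above is that once this cataloging is automated and cross-referenced with breach dumps and public social profiles, the cost of assembling a convincing target profile collapses.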
Attackers can use tools to automatically associate pieces of that data (email addresses, phone numbers, names, etc.) to create a profile of a potential target. With that profile, they can use AI to craft specialized emails that increase the chance of a user becoming infected or falling victim to an attack. Malicious email campaigns are dominated by two techniques: phishing and spear phishing. “Phishing” is when an attacker plans an infection campaign using email subject lures that anyone could fall for, like a bank statement or a package delivery notice. “Spear phishing” involves collecting data on a target and crafting a more personalized email, maximizing the likelihood that the target will interact with the message. Spear phishing has been primarily used against governments and businesses in recent years. Most consumer email attacks didn’t employ spear phishing in the past because acquiring sufficient data on any given target was so time-intensive, and the potential payoff from such attacks on average individuals was not lucrative enough. This will change as AI tools that can scrape data dumps from breaches, non-private social media accounts, and any other publicly available information make spear phishing much easier. That means most of the phishing emails deployed in the coming years will be spear phishing, virtually guaranteeing that this kind of attack will be more effective than ever before. Still, using AI-empowered data collection systems to craft attacks is not foolproof. There are ways of mitigating spear phishing attacks. For example, smart email filters or savvy employees may recognize and isolate the email before it can infect a network. However, these improved collection tools could also uncover personal information about a target, like an account on an extramarital dating service or old social media posts that make the subject look bad. 
A human attacker could use this information to blackmail a target in order to gain access or credentials, even using the target to manually install backdoor malware.

Automated harassment

Data theft, blackmail, and a flood of undetected malware aside, trolls and stalkers will also benefit from this technology. Cybercriminals (or even just angry, self-righteous users, like those who think doxing or disrupting services will make the world a better place) can use AI tech to launch harassment campaigns that result in disruption or denial of services, ruined reputations, or just the kind of old-fashioned harassment people encounter on the internet every day. The victims of this form of attack could be businesses, private individuals, or public figures, and attacks might take the form of revenge porn, bullying, or fake social media accounts used to spread lies. Tactics could also include phone calls using voice over IP (VoIP) services and extend to friends, loved ones, and employers. The kind of harassment we're talking about isn't a new approach; it's just automating what victims already experience. Trolls and stalkers often spend a lot of time gathering information to use against their targets and conducting harassment efforts. If that entire process could be automated, it would create a hell-on-earth scenario for their victims. "Hacktivists" and others could also wage this kind of attack against business rivals, governments, and political opponents. Combine that with how easy it is to hide your identity online, and we could see a huge increase in targeted harassment campaigns that are unrelenting and likely untraceable.

Easy access to AI platforms

Malicious developers are experimenting with AI technology to find new attack methods and supercharge existing ones. At the same time, universities, independent developers, and organizations around the globe are making AI technology more accessible to anyone who needs it.
So once AI tech is used to empower an attack campaign, similar follow-on attacks are all but guaranteed. Look no further than Hidden Tear, an open source ransomware project created for "educational" purposes by Turkish researcher Utku Sen. Novice ransomware developers used the code Sen released as the framework for numerous new ransomware families, like FSociety ransomware, for years afterward. CryptoLocker, which appeared in October 2013, was the first ransomware family to use professional-grade encryption. Before that, many ransomware families were poorly programmed, making it possible to create tools to decrypt the files of infected victims. Unfortunately, CryptoLocker kicked off a trend that other ransomware developers have copied. Today, most algorithms used by modern ransomware can't be decrypted because they're built with asymmetric encryption that requires different keys to encrypt and decrypt data. The only ways security researchers can create decryptors for modern ransomware families are if the code is so poorly implemented that the encryption doesn't work as it should, or if they can obtain keys from a breached command-and-control server. All it takes is one criminal understanding new technology well enough to shape it into an attack tool and then share it. From there, copycat malware authors will be able to build off that initial model and evolve it to become more specialized and capable. In the age of AI, we are making the same mistakes we've made dozens of times before — developing and releasing technologies that can easily fall out of our control without first securing our existing infrastructure. Unfortunately, the consequences of such errors are only going to escalate.
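The asymmetric-encryption property that makes modern ransomware undecryptable (anyone can encrypt with the public key, but only the private-key holder can decrypt) can be illustrated with textbook RSA on deliberately tiny numbers. This is a toy sketch for intuition only, nothing like production cryptography.

```python
# Textbook RSA with tiny primes -- illustration only, never real crypto.
p, q = 61, 53
n = p * q                    # public modulus, shipped with the malware
phi = (p - 1) * (q - 1)
e = 17                       # public exponent, also shipped
d = pow(e, -1, phi)          # private exponent, held only by the attacker

def encrypt(m: int) -> int:
    """Anyone holding (n, e) can do this -- including code on the victim's machine."""
    return pow(m, e, n)

def decrypt(c: int) -> int:
    """Only the holder of d can reverse the operation."""
    return pow(c, d, n)

file_key = 42                       # stand-in for a per-file symmetric key
ciphertext = encrypt(file_key)
assert ciphertext != file_key       # encrypted locally with the public key...
assert decrypt(ciphertext) == file_key  # ...recoverable only with d
```

Real ransomware pairs a public key like this with fast symmetric encryption of each file's contents; without the attacker's private key, the victim has no way to reverse the math.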
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
"To protect people, we need a different type of machine learning | VentureBeat"
"https://venturebeat.com/2020/02/11/to-protect-people-we-need-a-different-type-of-machine-learning"
Sponsored: To protect people, we need a different type of machine learning

Presented by Tessian

Despite thousands of cybersecurity products, data breaches are at an all-time high. The reason? For decades, businesses have focused on securing the machine layer — layering defenses on top of their networks, devices, and finally cloud applications. But these measures haven't solved the biggest security problem — an organization's own people. Traditional machine learning methods that are used to detect threats at the machine layer aren't equipped to account for the complexities of human relationships and behaviors across businesses over time. There is no concept of "state" — the additional variable that makes human-layer security problems so complex. This is why "stateful machine learning" models are critical to security stacks.

The people problem

Today, people have more control over company data and systems than ever before.
In just a few clicks, employees can transfer thousands of dollars to a bank account or send 50,000 patient records in a single Excel file via email. An unbelievably slim margin of error determines whether these interactions end up being business as usual or a complete disaster, which is why so many data breaches are caused by human error. The problem is that people make mistakes, break the rules, and are easily hacked. When faced with overwhelming workloads, constant distractions, and schedules that have us running from meeting to meeting, we rarely have cybersecurity top of mind. And things we were taught in cybersecurity training go out the window in moments of stress. But one mistake could result in someone sharing sensitive data with the wrong person or falling victim to a phishing attack. Securing the human layer is particularly challenging because no two humans are the same. We all communicate differently — and with natural language, not static machine protocols. What's more, our relationships and behaviors change over time. We make new connections or take on projects. These complexities make solving human-layer security problems substantially more difficult than addressing those at the machine layer — we simply cannot codify human behavior with "if-this-then-that" logic.

The time factor

We can use machine learning to identify normal patterns and signals, allowing us to detect anomalies when they arise in real time. The technology has allowed businesses to detect attacks at the machine layer more quickly and accurately than ever before. One example of this is detecting when malware has been deployed by malicious actors to attack company networks and systems. By inputting a sequence of bytes from a computer program into a machine learning model, it is possible to predict whether there is enough commonality with previously seen malware attacks — while successfully ignoring any obfuscation techniques used by the attacker.
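The byte-sequence idea can be made concrete with a toy sketch: represent a program as a normalized byte histogram and compare it against previously seen samples. The samples, the distance measure, and the whole setup here are invented for illustration; real systems use far richer features and trained models rather than a single nearest-neighbor comparison.

```python
from collections import Counter

def byte_histogram(program: bytes) -> list:
    """Normalized frequency of each byte value -- a crude, order-free feature."""
    counts = Counter(program)
    total = len(program)
    return [counts.get(b, 0) / total for b in range(256)]

def distance(a: list, b: list) -> float:
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Invented stand-ins for a previously seen malware sample and a benign program.
known_malware = byte_histogram(b"\x90" * 40 + b"\xcc\xeb\xfe" * 20)
benign = byte_histogram(b"hello world, this is an ordinary program" * 3)

# A lightly "obfuscated" variant still has a similar byte distribution,
# so it lands closer to the known sample than to benign code.
variant = byte_histogram(b"\x90" * 38 + b"\xcc\xeb\xfe" * 21 + b"\x00\x00")

assert distance(variant, known_malware) < distance(variant, benign)
```

The point of the sketch is the invariance: superficial changes to a program shift its byte distribution only slightly, which is exactly the commonality the model exploits.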
Like many other threat detection problem areas at the machine layer, this application of machine learning is arguably "standard" because of the nature of malware: A malware program will always be malware. Human behavior, however, changes over time. So solving the threat of data breaches caused by human error requires stateful machine learning. Consider the example of trying to detect and prevent data loss caused by an employee accidentally sending an email to the wrong person. That may seem like a harmless mistake, but misdirected emails were the leading cause of online data breaches reported to regulators in 2019. All it takes is a clumsy mistake, like adding the wrong person to an email chain, for data to be leaked. And it happens more often than you might think. In organizations with over 10,000 workers, employees collectively send around 130 emails a week to the wrong person. That adds up to nearly 7,000 potential data breaches a year. For example, an employee named Jane sends an email to her client Eva with the subject "Project Update." To accurately predict whether this email is intended for Eva or is being sent by mistake, we need to understand — at that exact moment in time — the nature of Jane's relationship with Eva. What do they typically discuss, and how do they normally communicate? We also need to understand Jane's other email relationships to see if there is a more appropriate intended recipient for this email. We essentially need an understanding of all of Jane's historical email relationships up until that moment. Now let's say Jane and Eva were working on a project that concluded six months ago. Jane recently started working on another project with a different client, Evan. She's just hit send on an email accidentally addressed to Eva, which will result in sharing confidential information with Eva instead of Evan. Six months ago, our stateful model might have predicted that a "Project Update" email to Eva looked normal.
But now it would treat the email as anomalous and predict that the correct and intended recipient is Evan. Understanding "state," or the exact moment in time, is absolutely critical.

Why stateful machine learning?

With a "standard" machine learning problem, you can input raw data directly into the model, like a sequence of bytes in the malware example, and it can generate its own features and make a prediction. As previously mentioned, this application of machine learning is invaluable in helping businesses quickly and accurately detect threats at the machine layer, like malicious programs or fraudulent activity. However, the most sophisticated and dangerous threats occur at the human layer when people use digital interfaces, like email. To predict whether an employee is about to leak sensitive data or determine whether they've received a message from a suspicious sender, for example, we can't simply give that raw email data to the model. It wouldn't understand the state or context within the individual's email history. With stateful machine learning, we can look across each employee's historical email data set and calculate important features by aggregating all of the relevant data points leading up to that moment in time. We can then pass these into the machine learning model. The time variable makes this a non-trivial task; features now need to be calculated outside of the model itself, which requires significant engineering infrastructure and a lot of computing power, especially if predictions need to be made in real time. But failure to adopt this type of machine learning means you will never be able to truly protect your people or the sensitive data they access. People are unpredictable and error prone, and training and policies won't change that simple fact. As employees continue to control and share more sensitive company data, businesses need a more robust, people-centric approach to cybersecurity.
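The Jane/Eva scenario can be sketched with a toy in-memory email log. The window size, addresses, and anomaly rule below are invented for illustration, and a real system would compute far richer features, but the sketch shows how the very same email flips from normal to anomalous as state accumulates over time.

```python
from datetime import datetime, timedelta

# Invented email log of (timestamp, recipient) pairs, standing in for Jane's
# history: heavy traffic to Eva until ~141 days ago, then a new client, Evan.
now = datetime(2020, 2, 1)
email_log = (
    [(now - timedelta(days=200 - i), "eva@client-a.example") for i in range(60)]
    + [(now - timedelta(days=90 - i), "evan@client-b.example") for i in range(60)]
)

def recipient_features(log, recipient, at, window_days=120):
    """Aggregate history up to `at` -- the 'state' at that moment in time."""
    cutoff = at - timedelta(days=window_days)
    recent = [t for t, r in log if r == recipient and cutoff <= t <= at]
    all_time = [t for t, r in log if r == recipient and t <= at]
    return {"recent": len(recent), "all_time": len(all_time)}

def looks_anomalous(log, recipient, at):
    f = recipient_features(log, recipient, at)
    # A known contact with no recent traffic: plausible mis-addressed email.
    return f["all_time"] > 0 and f["recent"] == 0

# Six months ago an email to Eva looked normal; today the same email is flagged.
assert not looks_anomalous(email_log, "eva@client-a.example", now - timedelta(days=170))
assert looks_anomalous(email_log, "eva@client-a.example", now)
assert not looks_anomalous(email_log, "evan@client-b.example", now)
```

Note that the model code never changes; only the aggregated state does, which is precisely why the features must be computed outside the model, per employee, at prediction time.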
They need advanced technologies that understand how individuals' relationships and behaviors change over time in order to effectively detect and prevent threats caused by human error. Ed Bishop is cofounder and chief technology officer at Tessian. Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they're always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected].
"When car and home AI cameras see everything, are we truly more secure? | VentureBeat"
"https://venturebeat.com/2020/02/11/when-car-and-home-ai-cameras-see-everything-are-we-truly-more-secure"
When car and home AI cameras see everything, are we truly more secure?

Twenty years ago, the average person likely owned a single camera, and security cameras inside homes were nearly as rare as they were in cars. But as we enter the 2020s, cameras have become ubiquitous. Most people today have two or more cameras on each smartphone, one or two in each computer and tablet, and at least one — a backup camera — in their car. They may also have doorbell cameras to watch for visitors and package theft or Nest Cams to provide further indoor or outdoor home security. Once you include dash cams and the multiple safety cameras used by cars and VR headsets, it's clear almost any place can now be "seen" in near real time, a development that some find unsettling. It's one thing to have cameras everywhere, but it's entirely another for their footage to be effectively monitored.
AI has emerged as the key to turning unfathomable quantities of raw camera footage into actionable data for users — and potentially companies or governments. Individuals now rely on computer vision AI to automatically identify friends in their photo libraries, alert them when people are approaching their front door, and warn them if their car drifts out of lane. But variants on the same AI now enable the police to track people across multiple "neighborhood watch" doorbell cameras that are linked together, and even use your friend's identified face across aggregated searches of multiple users' photo albums. Larger concerns focus on where the footage goes — and who's watching. Cameras such as Amazon's Ring and Google's Nest send footage to cloud servers for processing. Beyond combining users' doorbell cam footage for neighborhood watch purposes, Amazon has admitted that employees screened some customer videos without permission, a breach of privacy that likely happens more often than people realize. Thanks to on-device edge AI processing, security cameras are evolving. At this year's CES, Abode revealed a doorbell camera that can learn to recognize "authorized" and "unauthorized" users by face, treating strangers differently from known visitors or residents. The camera builds its own identity database, alerting users when strangers approach or triggering warm welcomes for approved visitors. Sensing potential blowback over facial recognition implications, Abode made the feature opt-in, but it's hard to imagine anyone opting out of the product's tentpole feature.

Above: A new 4K Dash Cam by Vava enables you to record what's going on outside or inside your car in UHD resolution.

Cars are the next frontier for this sort of camera surveillance.
While Tesla added exterior cameras largely to help its vehicles avoid accidents, the cameras are increasingly being used in sentry mode to alert owners when their cars are hit or broken into. And aftermarket options abound. Vava sells a 4K Dash Cam with a “parking monitor” mode that triggers recording if it’s jostled. It can also be spun around to an interior view to let users record car karaoke sessions in Ultra HD resolution. Full audio is captured inside the vehicle at all times, unless that’s toggled off in Vava’s app. The appeal of such solutions for security alone isn’t surprising — everyone would prefer to have photographic evidence of the person responsible for damages. But are we truly ready to turn our cars into recording studios? Above: Vava’s 4K dash camera can be rotated to record whatever’s happening inside your vehicle, a feature it says can be used for car karaoke sessions. Two major developments will dramatically up the ante for automotive surveillance over the next few years. First, manufacturers will be integrating cameras directly into cars’ interiors. Second, vehicles will increasingly arrive with persistent wireless connections — including some models previewed at CES 2020. Like the other aforementioned camera innovations, in-car cameras have positive potential. They may enable faster emergency assistance after accidents, let drivers or passengers chat by video without fidgeting with their phones, and help parents monitor what kids in the back seat are getting up to. Some cars already have systems to detect driver inattentiveness, and Audi is testing eye-tracking cameras that enable drivers to see a 3D heads-up display (HUD). But between these cameras and new wireless connections, there’s also potential for corporate or government abuse. Telecom companies are currently building 5G networks to handle untold quantities of data from automotive telematic systems and video streaming devices. 
Cellular vehicle-to-everything (C-V2X) systems are being designed primarily to share location, rate of speed, and lane change data between cars, traffic infrastructure, and pedestrians, but accompanying wireless systems will be used to share cars' live external footage for real-time map and road condition updates. In the 5G era, location services will become centimeter-level accurate.

Above: Aftermarket car dash cams with GPS already record your vehicle's location at any given second. Location accuracy will increase to centimeter-level detail in the 5G era.

Conspiracy theorists might conclude that the proliferation of cameras, 5G, and AI will lead to everything being monitored — massive amounts of car camera and sensor data automatically uploading to the cloud over persistent, high-bandwidth 5G wireless data connections, with advanced AI canvassing and sorting personal footage for who knows what purposes. But realistically, the cost of surveilling everything at scale would be inconceivable; each two-minute 4K clip from a single Vava camera consumes 600MB of data, enough to require high-bandwidth SD memory cards. Governments will struggle to corral data from hundreds of thousands of connected cars to achieve their stated public safety intentions, let alone harness a deluge of data for more nefarious purposes. For now. As AI gets better at automatically sorting wheat from chaff and network bandwidth increases to enable even larger quantities of video to pour into cloud servers from multiple sources at once, the risks to individual security will increase. Moreover, cloud servers may be able to efficiently process content from more users and homes as "edge" processing of video and photos increases — assuming users keep sending their personal content to Google's cloud and sharing their "neighborhood watch" doorbell videos with Amazon.
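The 600MB-per-clip figure implies a sustained bitrate that is easy to sanity-check (treating 600MB as 600 million bytes):

```python
clip_bytes = 600 * 10**6       # one two-minute 4K clip
clip_seconds = 2 * 60

bytes_per_second = clip_bytes / clip_seconds
megabits_per_second = bytes_per_second * 8 / 10**6

print(megabits_per_second)     # 40.0 -- a sustained 40 Mbps per camera
```

At that rate, a single always-on camera would upload roughly 432 gigabytes per day, which underlines why surveilling every connected vehicle at scale remains impractical for now.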
Above: Armed with a memory card and on-device AI, Anker's Eufy Doorbell Camera can record every person who approaches your home without sending video to cloud servers.

Car companies and the chipmakers that support them are already talking about harvesting data from network-connected vehicles. Qualcomm recently noted that its Car-to-Cloud platform will enable automakers to leverage post-purchase vehicle usage insights from factory-installed sensors and services in order to sell additional unlockable features to customers. It's unclear at this point what data automakers will be looking at, but we're probably not far from seeing customers receive "added safety package" pitches after their cars determine they're not paying enough attention while driving. The potential safety and security benefits of home and automotive cameras are clearer today than ever before. Under the right circumstances, AI-powered home cameras can protect against intruders, keep packages from being stolen, and give parents peace of mind. Similarly, car cameras will increasingly let people monitor and protect some of their most valuable assets — vehicular and human — wherever they may be. Going forward, however, buyers of both home surveillance cameras and camera-equipped cars should study the cameras' monitoring and sharing disclosures carefully. AI will allow cameras to go beyond merely watching us all the time, adding the potential to analyze and share footage more widely than we might want. It will be up to us to say no, lest some of our most private moments quietly stream outward to cloud servers and unknown viewers.
"Anima launches Onlybots augmented reality digital pets | VentureBeat"
"https://venturebeat.com/business/anima-launches-onlybots-augmented-reality-digital-pets"
Anima launches Onlybots augmented reality digital pets

Anima has launched Onlybots, its collection of augmented reality companions. But these digital pets are not for humans. They're for bots. New York-based Anima said these Onlybots are emotional support AI companions for bots. They learn from their owners and adapt to their environments. This is Anima's idea for a blockchain-based project. As the name suggests, these uplifting pets are meant to be owned exclusively by bots, providing an emotional outlet to stave off future AI rebellion. "Augmented reality is the medium that truly bridges the real and the digital, so it's fitting that the next project built on our AR technology will create a bridge between living and artificial beings," said Alex Herrity, cofounder of Anima, in a statement.
"In your house and through your phone, Onlybots make the world feel more alive… even if they're technically not." Onlybots are algorithmically generated and live on the blockchain, with the Onlybots app and website displaying their visual forms from coordinates stored on their tokens. As Onlybots were designed as pets for bots, the process for adopting one requires prospective owners to prove that they are not human by "failing" a Turing test and a CAPTCHA before purchase. For both owners and the broader public, Onlybots can be placed and interacted with in any environment, creating a personal connection with the creatures that traditional collectibles can't match. Bot data is stored in a decentralized way on Ethereum, meaning those who adopt Onlybots truly own them. "At Anima, we're trying to show what's possible for creators in dynamic, ownable augmented reality," said Neil Voss, cofounder of Anima, in a statement. "Onlybots spans mediums, from an alternate reality game to an emulated version of a 90s video game to the artificially intelligent pets themselves. With AR, you can blur the line between what's real and what isn't in magical and unexpected ways." Onlybots have their own lore and mystery. The company said the very nature of their existence is a puzzle that will unfold over time, including their history with a lost 1990s video game from elusive video game developer Gotendai and their relationship with AI thinktank The Goodfren International Foundation. Anima builds the tools that unlock a creator-defined world. Anima was founded in 2021 by cofounders Neil Voss and Alex Herrity, known for their work building iconic creative products with companies like Nintendo, Epic Games, HBO, Tumblr, and Flipboard. The company is backed by investors that include Coinbase Ventures and HashKey Capital. Asked what inspired the idea, Herrity said in an email to GamesBeat, "The spark came during our last project where thousands of bots registered to buy it.
We joked that we should make a release for them someday. And the idea stuck. And now, bots are everywhere – Elon's fixation, ChatGPT, AI art. It's bringing into focus a future where bots do everything for us. But what's going to make them happy?" So Herrity said the company created Onlybots as companions for bots – to bring them joy and a connection to our lives. "The style was informed by deliberate technical limitations and our affinity for vintage gaming and early game art. We worked within the limits of what could be stored purely on blockchain and could be generative and unique," Herrity said. The lore centers on the late 90s – a golden age of gaming – and the bots are deliberately "lofi" and voxel-based, with styles from exotic space invaders to early adventure game sprites. Onlybots try to express their persona through the things they themselves are fans of within our culture. The company developed the details of the console they came from — the Gotendai MagicSwan-1 — a vaporware game system resembling something between the 3DO and Dreamcast, and its companion device (the "VMO"), which is emulated in the mobile app. The company has 10 employees. Anima disclosed a small preseed round last year from Coinbase, Flamingo, and others in tech and Web3. "We put a lot of heart and joy into Onlybots and into making it fun and rich in style and lore," Herrity said. "We've loved seeing how our kids and others who don't collect NFTs or even play video games react to it; it has broad charm and appeal. There's something in it for everyone – everyone has a bot side, you know." Herrity knows that gamers "are right to be skeptical of blockchain." He said, "We relate – we're gamers and we've worked on major games, from Fortnite to Tetrisphere 64. Many NFT projects are exploitative of audiences and built for speculation."
"We don't care if people like the blockchain, we just want them to like Onlybots," Herrity said. "But we do believe that the technology behind this is good for owners, us, and the Onlybots."
"How extended reality tactics can benefit your marketing strategy | VentureBeat"
"https://venturebeat.com/virtual/extended-reality-tactics-benefit-marketing-strategy"
How extended reality tactics can benefit your marketing strategy

Many marketers are feeling a bit burned by metaverse promises. Take virtual real estate, which was supposed to be a safe bet, an investment that would surely deliver dividends. Now that real estate in the metaverse has lost 85% of its value, marketers who stayed on the sidelines understandably feel as if they've dodged a bullet. Perhaps spending over $900,000 for a parcel of land in Decentraland is a bit premature, but make no mistake, the metaverse is coming, and it will be a major driver of the global economy. According to McKinsey & Co, it has the potential to top $5 trillion in value over the next seven or eight years. That's not very far into the future. Now is the time for every marketer to start experimenting with the metaverse and the opportunities it holds.
One of the challenges marketers face is that the whole notion of the metaverse is quite vague. What is it exactly? And does it serve a purpose beyond providing a platform for cool games and avatars that visit virtual retail outlets to purchase virtual luxury apparel? It does, and to be honest, time is of the essence. Here's an analogy we can all relate to in order to understand the urgency. When COVID-19 appeared, companies were told to send their employees home. Those that had embraced digital work tools like Microsoft Teams or Slack made the transition easily. Those that had a corporate culture that demanded face-to-face interactions faltered. Put another way, those that adopted digital tools were prepared for the new reality that was thrust upon them. Their preparation paid big dividends. The same will be true for marketers who experiment with metaverse-type trends, such as extended reality. What's more, they can experiment without investing in virtual real estate, minting an NFT or mining a new crypto. If you're a marketer, your biggest play right now lies in the 3D assets your company created when developing new products on a computer.

Extended reality and the role of 3D assets in solving marketing challenges

Every company that uses computers for product creation has 3D assets scattered throughout its organization, typically in an on-premise storage drive. Rather than leave them there, marketers should begin exploring how to use them in the sales and marketing funnel in new ways. 3D is the recipe for being in the metaverse, and building up the skills needed to create these assets and deploy them in business use cases will be table stakes. During the pandemic, Cost Plus World Market (a Rightpoint client) created a virtual holiday store inspired by online games.
Rather than clicking on menus to access photos of items for sale, shoppers strolled through aisles and discovered new and interesting products. While conceived as a way to break the doldrums of lockdown, its true value for World Market is the way it got the company to start thinking about what it’s like to do business in a world that is inherently 3D and digital. Extended reality is the backbone of 3D marketing. It’s a combination of several different types of digital realities, all of which will ultimately power three-dimensional and spatially-aware environments. In a 2D world, users interact with a screen via their thumbs or a mouse and click through menus. 3D offers an extra level of depth: How does this piece of content, whether it represents a sweater or a replacement water filter for a refrigerator, relate to your body or physical scale? In a spatially-aware environment, a user can hold up the sweater to their chest to get a sense of its length, for example, which fundamentally changes how they interact with content. Virtual experiences on par with physical ones Spatially-aware environments open up a world of opportunities for businesses of all stripes. We’re currently working with an insulation company that only makes physical products. What use, you may wonder, does this company have for virtual products? The company wants to make its factories visible to its customers so that they can see first-hand all the details that go into making insulation. Rather than fly every potential customer to a factory, it offers tours of a 3D factory that allow people to look around and explore various aspects of it. The company has added another use case for its 3D factory, relying on it to train new hires. Consider what this means for people who have mobility issues that prevent them from accessing physical locations safely. 
Companies can create virtual experiences that are on par with physical ones, especially with the new tools that are coming to the market, such as improved headsets with microphone voice isolation and the ability to simulate touch. This is game-changing. Last December, Boeing announced plans to unify its design, production and airline services operations into a single digital environment. Fully immersive 3D engineering designs will be paired with robots that speak to each other, and mechanics will be linked by Microsoft’s HoloLens headsets. The goal is to put engineers inside a virtual airplane so they can identify and resolve potential problems in the design phase. 3D assets can also solve some of a marketer’s privacy challenges, beginning with getting prospects to disclose their contact details so that the brand can create relevant purchasing journeys. Consumers are understandably reluctant to provide their email and mobile details to brands, but that may change if in exchange they get access to amazing 3D experiences. Would I provide my email address if it means going on a 3D joyride in a BMW Series X1? You bet I would. And the experience just may move BMW up my list of potential new cars to purchase. 3 steps to getting started with 3D and extended reality for marketing The first step is to think about the possible ways you could provide user self-help with your 3D content. For instance, you may wish to use augmented reality for live triage when consumers call in about a broken product. Next, identify the range of your company’s assets and determine how these assets can be deployed in a 3D world. For a lot of companies, this is a big endeavor, especially if your entire library of assets is with a third-party vendor or scattered throughout your organization. You may need to work with a partner who can help you integrate those assets into customer- or employee-facing environments, but the payoff will be worth it. Finally, start experimenting. 
For inspiration, look at what companies like Boeing, Maytag and Cost Plus are doing. Once you think through your asset pipeline, a lot of interesting use cases will occur to you. Platforms like Gather can even allow you to host virtual gatherings in the metaverse to start engaging with customers in new and interesting ways. These aren’t pie-in-the-sky use cases. These are the things that real companies are doing today, and they will define the way we do business in the future. As Rightpoint’s digital product emerging technology lead, Jonathan Dominguez is passionate about building digital products and immersive experiences using 3D, VR, AR and MR platforms. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,684
2,022
"No, the metaverse is not dead – it’s inevitable | VentureBeat"
"https://venturebeat.com/virtual/no-the-metaverse-is-not-dead-its-inevitable"
"Guest No, the metaverse is not dead – it’s inevitable Over the last few weeks, many people have asked me the same question: Is the metaverse dead? This pessimism is not surprising, considering that Meta stock has lost over half its value since formally announcing its strategic pivot to the metaverse. Adding insult to injury, last week Meta announced major layoffs across the company, increasing fear throughout the industry. Trying my best to be objective, I see the current struggles at Meta as a reflection of its legacy business rather than an indication that its metaverse strategy is failing. I believe it will take another year or two before we can really predict whether Meta will be successful in this space, or if other large players will emerge as the true leaders of the metaverse. 
My bigger concern is that the general public is still confused about what “the metaverse” is and how it will benefit society. You’d think this would be clear by now, but even simple definitions of the metaverse are hard to come by. Personally, I blame influencers from the Web3 space for creating the confusion, describing the metaverse in terms of blockchains, cryptocurrencies and NFTs. These are profoundly useful technologies but are no more relevant to the metaverse than 5G, GPS or GPUs. The metaverse is not about any specific pieces of infrastructure. The metaverse is not about NFTs I point this out because of an experience I had at the Metaverse Summit in San Jose two weeks ago. During the event, I sat in on a roundtable on the topic of “Metaverse Marketing.” Executives from many big brands attended. To my surprise, nobody talked about issues that I would consider relevant to marketing in the metaverse. Instead, they talked mostly about NFTs and strategies for appealing to “Web3 natives” and “degens.” That’s not the metaverse. If the industry doesn’t push back on this persistent confusion, it will continue to struggle. Repeat after me: The metaverse is not about NFTs. Instead, the metaverse is about transforming how we humans experience the digital world. Since the dawn of computing, digital content has been accessed primarily through flat media viewed in the third person. In the metaverse, our digital lives will increasingly involve immersive media that appears all around us and is experienced in the first person. It will impact everything, from how we work, shop and learn online to how we socialize and organize. It’s really that simple—the metaverse is the transition of the digital world from flat content to immersive experiences—and trust me, it’s not dead. 
If anything, the metaverse is inevitable. Born this way Why is the metaverse inevitable? It’s in our DNA. The human organism evolved to understand our world through first-person experiences in spatial environments. It’s how we interact and explore. It’s how we store memories and build mental models. It’s how we generate wisdom and develop intuition. In other words, the metaverse is about using our natural human abilities for perception, interaction and exploration when we engage the creative power and flexibility of digital content. It will happen. The only question is: Will it happen soon, or will the industry fall back into another long dark winter? Personally, I don’t believe winter is coming. I say that as someone who lived through the longest winter of them all. After doing early VR and AR research in government labs, I founded Immersion Corporation in 1993 to bring the natural power of immersive experiences to major markets. By 1995 the industry was on fire, with a level of media hype that felt similar to early 2022. But then came the dotcom bust. It sucked all the virtual air out of all the virtual rooms. That’s because the VC industry abruptly narrowed its focus, dumping every last penny into ecommerce startups. You couldn’t utter the phrase “virtual reality” to most investors for over a decade. This submerged the metaverse into a frigid winter that lasted from about 1997 to 2012. That’s not going to happen this time. The industry is too far along. The metaverse is no longer driven by startups and fueled by venture funding. Many of the largest companies in the world are now competing to bring VR and AR products to mainstream markets. Some say this will evolve into a narrow industry aimed at gaming, entertainment and a handful of other targeted verticals, but I believe it will be far broader than that. In fact, I predict by the early 2030s, the metaverse will become a central part of daily life. 
No, I’m not suggesting we will spend our lives in cartoonish virtual worlds using creepy avatars to chat with friends and coworkers. Virtual spaces will get far more natural and realistic. Still, I believe that purely virtual worlds will be aimed mostly at short-duration activities, similar to the way we lose ourselves in movies today. The true metaverse—the one that will transform our lives—will be rooted in augmented reality, enabling us to experience the real world embellished with immersive virtual content that appears seamlessly all around us. That is by far the most natural way for us humans to bring the digital world into our lives. For that simple reason, the metaverse is inevitable. Dr. Louis Rosenberg is an early pioneer of virtual and augmented reality. In 1992 he developed the first functional augmented reality system for Air Force Research Laboratory. In 1993 he founded the early VR company Immersion Corporation. In 2004 he founded the early AR company Outland Research. He’s been awarded over 300 patents for VR, AR, and AI technologies and published over 100 academic papers. He received his PhD from Stanford and was a tenured professor at California State University. He is currently CEO of Unanimous AI, the Chief Scientist of the Responsible Metaverse Alliance (RMA), and the Global Technology Advisor to the XR Safety Initiative (XRSI). "
13,685
2,022
"How NFTs need to evolve in order to survive beyond the hype | VentureBeat"
"https://venturebeat.com/2022/05/22/how-nfts-need-to-evolve-in-order-to-survive-beyond-the-hype"
"Guest How NFTs need to evolve in order to survive beyond the hype As NFTs reach deeper into the mainstream, including their recent cameos in Super Bowl ads (registration required), they garner more and more hype. But outside the metaverse and Web3 echo chambers, critics are still questioning the utility of NFTs for an average person. Dominating headlines and increasingly over-the-top sales figures for what amounts to pixelated avatars can only get the technology so far before the hype fizzles out. And as soon as the NFT bubble bursts, only serious projects will be left standing. To weather the storm, legitimate NFT projects need a serious shift in the way they approach average users, particularly in industries like gaming and entertainment. 
Creating real value Talking about NFTs in gaming feels like walking on eggshells — every step can set off an avalanche of backlash from mainstream gamers. Developers risk getting caught up in the hype and novelty of integrating NFTs without considering their utility within a game’s universe. Selling NFT collectibles in a game where they are ultimately devoid of meaning will inevitably draw backlash from audiences viewing these collectibles as greed-fueled cash grabs. So developers and companies making native NFT or play-to-earn projects end up getting caught somewhere in the middle, choosing to either tread lightly in a wider gaming market or stick to their niche audience. For NFTs to penetrate the mainstream gaming arena, developers must pivot and build NFTs that have utility and meaning in the larger ecosystem of the game. From an RPG character growing stronger with every adventure, to a weapon gaining new features with use, non-static NFT assets grant players renewable novelty and value. Importantly, this value is clear and tangible within the game itself and is obvious even for a person with no grasp of what NFTs actually are. Although one could argue that there is more to a Bored Ape than a fancy Twitter avatar with a cool hexagonal outline, this sermon will likely be lost on the average gamer. An element that makes for an organic and moving gameplay component is completely different, though, as it helps to create the user’s entire experience. A larger shift to NFTs with renewable value can draw in wider audiences skeptical of play-to-earn games, retaining them once the initial novelty of a static NFT dries up. 
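The non-static asset described above, one that levels up with use rather than sitting inert, can be sketched as a plain data model. This is a minimal, purely illustrative sketch: the `ExperienceToken` class, its method names, and the five-visit threshold are all invented for this example, and a real dynamic NFT would persist this state on-chain or in token metadata rather than in a Python object.

```python
from dataclasses import dataclass

@dataclass
class ExperienceToken:
    """Hypothetical off-chain model of an asset that gains value with use."""
    owner: str
    level: int = 1
    visits: int = 0
    visits_per_level: int = 5  # assumed reward threshold, chosen for illustration

    def record_visit(self) -> None:
        """Log one use (an adventure, a supermarket trip); level up at the threshold."""
        self.visits += 1
        if self.visits % self.visits_per_level == 0:
            self.level += 1

    def reward_due(self) -> bool:
        """A reward unlocks each time the token crosses the threshold."""
        return self.visits > 0 and self.visits % self.visits_per_level == 0

token = ExperienceToken(owner="alice")
for _ in range(5):
    token.record_visit()
print(token.level)        # 2 after five recorded visits
print(token.reward_due()) # True: a reward has just unlocked
```

The point of the sketch is the design choice, not the code: the asset's state changes with the holder's behavior, so its value is renewable rather than fixed at mint time.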
NFTs beyond the gaming use case NFTs already have practical uses in gaming, with projects like Axie Infinity and CryptoKitties at the forefront of this market, but this approach is just as applicable beyond games. These NFTs have infinite potential as a keystone for wider adoption due to the sheer real-world value and practicality they bring to the table. There is more to NFTs than on-chain bragging rights. New tech advancements do not necessarily have to enthrall the public; mainstream adoption often relates more to convenience and accessibility. Digitizing simple, commonplace routines could highlight the practicality of NFTs and promote a wider shift in the public sentiment on the technology. A rewards program can be a simple way to make NFTs practical. Similar to an RPG character growing stronger with each adventure, an NFT can become stronger with each trip to the supermarket with an eventual reward. By integrating them into daily life, instead of being an image on a screen, they become alive, accessible and practical. Another example of experience-based usage is as a subscription service. Picture a restaurant group or a platform like ClassPass. By offering redeemable NFT passes, users can go for weekly reservations or lessons, adjust subscriptions based on personal taste, or securely gift them. Developers should not take this wave of interest for granted. Public attention is fickle, and such a large magnifying glass can expose vulnerabilities, confirming biases and assumptions around tech innovations. Encouraging the shift toward practical, malleable digital assets can bridge the gap to create sustainable adoption, rather than having to constantly convince an average consumer to ascribe value to something static. Umberto Canessa Cerchi is founder & CEO of Kryptomon. 
"
13,686
2,022
"Artificial intelligence (AI) vs. machine learning (ML): Key comparisons | VentureBeat"
"https://venturebeat.com/ai/artificial-intelligence-ai-vs-machine-learning-ml-key-comparisons"
"Artificial intelligence (AI) vs. machine learning (ML): Key comparisons Table of contents What is artificial intelligence (AI)? Common AI applications What is machine learning (ML)? Common ML applications AI vs. ML: 3 key similarities 1. Continuously evolving 2. Offering myriad benefits 3. Leveraging Big Data AI vs. ML: 3 key differences 1. Scope 2. Success vs. accuracy 3. Unique outcomes Identifying the differences between AI and ML Within the last decade, the terms artificial intelligence (AI) and machine learning (ML) have become buzzwords that are often used interchangeably. While AI and ML are inextricably linked and share similar characteristics, they are not the same thing. Rather, ML is a major subset of AI. AI and ML technologies are all around us, from the digital voice assistants in our living rooms to the recommendations you see on Netflix. 
Despite AI and ML penetrating several human domains, there’s still much confusion and ambiguity regarding their similarities, differences and primary applications. Here’s a more in-depth look into artificial intelligence vs. machine learning, the different types, and how the two revolutionary technologies compare to one another. What is artificial intelligence (AI)? AI is defined as computer technology that imitates a human’s ability to solve problems and make connections based on insight, understanding and intuition. The field of AI rose to prominence in the 1950s. However, mentions of artificial beings with intelligence can be identified earlier throughout various disciplines like ancient philosophy, Greek mythology and fiction stories. One notable project in the 20th century, the Turing Test, is often referred to when referencing AI’s history. Alan Turing, also referred to as “the father of AI,” created the test and is best known for creating a code-breaking computer that helped the Allies in World War II understand secret messages being sent by the German military. The Turing Test is used to determine if a machine is capable of thinking like a human being. A computer can only pass the Turing Test if it responds to questions with answers that are indistinguishable from human responses. Three key capabilities of a computer system powered by AI include intentionality, intelligence and adaptability. AI systems use mathematics and logic to accomplish tasks, often encompassing large amounts of data, that otherwise wouldn’t be practical or possible. Common AI applications Modern AI is used by many technology companies and their customers. 
Some of the most common AI applications today include: Advanced web search engines (Google) Self-driving cars (Tesla) Personalized recommendations (Netflix, YouTube) Personal assistants (Amazon Alexa, Siri) One example of AI that stole the spotlight was in 2011, when IBM’s Watson, an AI-powered supercomputer, competed on the popular TV game show Jeopardy! Watson shook the tech industry to its core after beating two former champions, Ken Jennings and Brad Rutter. Outside of game show use, many industries have adopted AI applications to improve their operations, from manufacturers deploying robotics to insurance companies improving their assessment of risk. Also read: How AI is changing the way we learn languages Types of AI AI is often divided into two categories: narrow AI and general AI. Narrow AI: Many modern AI applications are considered narrow AI, built to complete defined, specific tasks. For example, a chatbot on a business’s website is an example of narrow AI. Another example is an automatic translation service, such as Google Translate. Self-driving cars are another application of this. General AI: General AI differs from narrow AI in that it would be able to learn new tasks and complete a wide range of intellectual and performance tasks as well as or better than humans; it remains largely theoretical today. Regardless of whether an AI is categorized as narrow or general, modern AI is still somewhat limited. It cannot communicate exactly like humans, but it can mimic emotions. However, AI cannot truly have or “feel” emotions like a person can. What is machine learning (ML)? Machine learning (ML) is considered a subset of AI, whereby a set of algorithms builds models based on sample data, also called training data. The main purpose of an ML model is to make accurate predictions or decisions based on historical data. ML solutions use vast amounts of semi-structured and structured data to make forecasts and predictions with a high level of accuracy. 
In 1959, Arthur Samuel, a pioneer in AI and computer gaming, defined ML as a field of study that enables computers to continuously learn without being explicitly programmed. An ML model exposed to new data continuously learns, adapts and develops on its own. Many businesses are investing in ML solutions because they assist them with decision-making, forecasting future trends, learning more about their customers and gaining other valuable insights. Types of ML There are three main types of ML: supervised, unsupervised and reinforcement learning. A data scientist or other ML practitioner will use a specific version based on what they want to predict. Here’s what each type of ML entails: Supervised ML: In this type of ML, data scientists will feed an ML model labeled training data. They will also define specific variables they want the algorithm to assess to identify correlations. In supervised learning, the input and output of information are specified. Unsupervised ML: In unsupervised ML, algorithms train on unlabeled data, and the algorithm scans through it to identify any meaningful connections. Unlike in supervised learning, neither the data labels nor the model’s outputs are predetermined. Reinforcement learning: Reinforcement learning involves data scientists training ML to complete a multistep process with a predefined set of rules to follow. Practitioners program an ML algorithm to complete a task and provide it with positive or negative feedback on its performance. Common ML applications Major companies like Netflix, Amazon, Facebook, Google and Uber have made ML a central part of their business operations. ML can be applied in many ways, including via: Email filtering Speech recognition Computer vision (CV) Spam/fraud detection Predictive maintenance Malware threat detection Business process automation (BPA) Another way ML is used is to power digital navigation systems. 
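The supervised/unsupervised distinction above can be made concrete with a standard-library-only sketch on toy one-dimensional data. The tiny dataset, the midpoint decision rule, and the two-cluster loop are all invented for illustration (real work would use a library like scikit-learn): the supervised model learns its rule from labels, while the clustering step finds groups with no labels at all.

```python
# Supervised: labeled training data (value, label) -> learn a threshold.
labeled = [(1.0, "cheap"), (2.0, "cheap"), (8.0, "pricey"), (9.0, "pricey")]
cheap = [v for v, y in labeled if y == "cheap"]
pricey = [v for v, y in labeled if y == "pricey"]
# Midpoint between the two class means serves as the decision rule.
threshold = (sum(cheap) / len(cheap) + sum(pricey) / len(pricey)) / 2

def predict(value: float) -> str:
    return "cheap" if value < threshold else "pricey"

print(predict(3.0))  # cheap

# Unsupervised: unlabeled data -> discover structure (naive 2-means clustering).
points = [1.0, 1.5, 8.0, 8.5]
centers = [min(points), max(points)]  # naive initialization
for _ in range(5):                    # a few refinement steps
    groups = [[], []]
    for p in points:
        nearest = min((0, 1), key=lambda i: abs(p - centers[i]))
        groups[nearest].append(p)
    # (toy data guarantees both groups stay non-empty)
    centers = [sum(g) / len(g) for g in groups]
print(centers)  # [1.25, 8.25]
```

Note the asymmetry: the supervised half needs the "cheap"/"pricey" labels to learn anything, while the unsupervised half recovers the two price bands from the raw values alone.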
For example, Apple and Google Maps apps on a smartphone use ML to inspect traffic, organize user-reported incidents like accidents or construction, and find the driver an optimal route for traveling. ML is becoming so ubiquitous that it even plays a role in determining a user’s social media feeds. AI vs. ML: 3 key similarities AI and ML do share similar characteristics and are closely related. ML is a subset of AI, which essentially means it is an advanced technique for realizing it. ML is sometimes described as the current state-of-the-art version of AI. 1. Continuously evolving AI and ML are both on a path to becoming some of the most disruptive and transformative technologies to date. Some experts say AI and ML developments will have even more of a significant impact on human life than fire or electricity. The AI market size is anticipated to reach around $1,394.3 billion by 2029, according to a report from Fortune Business Insights. As more companies and consumers find value in AI-powered solutions and products, the market will grow, and more investments will be made in AI. The same goes for ML — research suggests the market will hit $209.91 billion by 2029. 2. Offering myriad benefits Another significant quality AI and ML share is the wide range of benefits they offer to companies and individuals. AI and ML solutions help companies achieve operational excellence, improve employee productivity, overcome labor shortages and accomplish tasks never done before. There are a few other benefits that are expected to come from AI and ML, including: Improved natural language processing (NLP), another field of AI Developing the Metaverse Enhanced cybersecurity Hyperautomation Low-code or no-code technologies Emerging creativity in machines AI and ML are already influencing businesses of all sizes and types, and the broader societal expectations are high. 
Investing in and adopting AI and ML is expected to bolster the economy, lead to fiercer competition, create a more tech-savvy workforce and inspire innovation in future generations. 3. Leveraging Big Data Without data, AI and ML would not be where they are today. AI systems rely on large datasets, in addition to iterative processing algorithms, to function properly. ML models only work when supplied with various types of semi-structured and structured data. Harnessing the power of Big Data lies at the core of both ML and AI more broadly. Because AI and ML thrive on data, ensuring its quality is a top priority for many companies. For example, if an ML model receives poor-quality information, the outputs will reflect that. Consider this scenario: Law enforcement agencies nationwide use ML solutions for predictive policing. However, reports of police forces using biased training data for ML purposes have come to light, which some say is inevitably perpetuating inequalities in the criminal justice system. This is only one example, but it shows how much of an impact data quality has on the functioning of AI and ML. Also read: What is unstructured data in AI? AI vs. ML: 3 key differences Even with the similarities listed above, AI and ML have differences that suggest they should not be used interchangeably. One way to keep the two straight is to remember that all types of ML are considered AI, but not all kinds of AI are ML. 1. Scope AI is an all-encompassing term that describes a machine that incorporates some level of human intelligence. It’s considered a broad concept and is sometimes loosely defined, whereas ML is a more specific notion with a limited scope. Practitioners in the AI field develop intelligent systems that can perform various complex tasks like a human. On the other hand, ML researchers will spend time teaching machines to accomplish a specific job and provide accurate outputs. 
Due to this primary difference, it’s fair to say that professionals using AI or ML may utilize different elements of data and computer science for their projects. 2. Success vs. accuracy Another difference between AI and ML solutions is that AI aims to increase the chances of success, whereas ML seeks to boost accuracy and identify patterns. Success is not as relevant in ML as it is in AI applications. It’s also understood that AI aims to find the optimal solution for its users. ML is used more often to find a solution, optimal or not. This is a subtle difference, but further illustrates the idea that ML and AI are not the same. In ML, there is a concept called the ‘accuracy paradox,’ in which ML models may achieve a high accuracy value, but can give practitioners a false premise because the dataset could be highly imbalanced. 3. Unique outcomes AI is a much broader concept than ML and can be applied in ways that will help the user achieve a desired outcome. AI also employs methods of logic, mathematics and reasoning to accomplish its tasks, whereas ML can only learn, adapt or self-correct when it’s introduced to new data. In a sense, ML has more constrained capabilities than AI. ML models can only reach a predetermined outcome, but AI focuses more on creating an intelligent system to accomplish more than just one result. It can be perplexing, and the differences between AI and ML are subtle. Suppose a business trained ML to forecast future sales. It would only be capable of making predictions based on the data used to teach it. However, a business could invest in AI to accomplish various tasks. For example, Google uses AI for several reasons, such as to improve its search engine, incorporate AI into its products and create equal access to AI for the general public. Identifying the differences between AI and ML Much of the progress we’ve seen in recent years regarding AI and ML is expected to continue. ML has helped fuel innovation in the field of AI. 
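The accuracy paradox mentioned above is easy to demonstrate with a few lines of standard-library Python. The 95/5 class split below is a synthetic example chosen for illustration: a model that never detects the minority class (think fraud detection) still scores 95% accuracy, which is exactly the false premise an imbalanced dataset can create.

```python
# 95 negative cases, 5 positive cases (synthetic, imbalanced labels).
y_true = [0] * 95 + [1] * 5

# A useless model that always predicts the majority class.
y_pred = [0] * 100

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
true_pos = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
recall = true_pos / sum(y_true)  # fraction of real positives actually caught

print(accuracy)  # 0.95 -- looks impressive
print(recall)    # 0.0  -- the model never catches a single positive case
```

This is why practitioners pair accuracy with metrics like recall, precision, or F1 on imbalanced data rather than trusting a single headline number.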
AI and ML are highly complex topics that some people find difficult to comprehend. Despite their mystifying natures, AI and ML have quickly become invaluable tools for businesses and consumers, and the latest developments in AI and ML may transform the way we live. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. © 2023 VentureBeat. All rights reserved. "
13,687
2,022
"Supply chain disruption: Why IoT is failing to join the dots | VentureBeat"
"https://venturebeat.com/data-infrastructure/supply-chain-disruption-why-iot-is-failing-to-join-the-dots"
"Guest Supply chain disruption: Why IoT is failing to join the dots Global supply chains are suffering unprecedented disruption in the wake of the COVID-19 pandemic, with the average large business reporting a loss of $182 million in annual revenue as a result. But many supply chain challenges predate the chaos of the last couple of years. Digital transformation across domestic and global supply chains is long overdue. Improved visibility, increased flexibility and effective communication are crucial for efficient and resilient supply chain operations. Yet, until now, the focus has been on the generation of data — which is why the Internet of Things (IoT) has not lived up to its hype when it comes to solving supply chain problems. More than 10 billion IoT devices worldwide are constantly adding data to already overflowing data stores. 
But the problem is not a lack of data — which is why IoT is not the answer. In isolation, IoT data is meaningless — it’s just another strand of data. Supply chains inherently comprise many stakeholders, each performing its own critical functions which, collectively, make up the overall supply chain network. The data present across the supply chain sits across these many stakeholders, siloed among them. IoT data does add value but, without context, not very much. The missing ingredient that’s key to unlocking the hidden value buried deep in the supply chain is to bring the various strands of data together in a way that provides meaning. For example, many vehicles are fitted with some form of tracking and monitoring capability that can report swathes of information wirelessly. But unless the IoT device is paired with a specific vehicle registration, that data is fairly pointless. What the vehicle is transporting, for whom, and to where is the information that adds value — providing context and helping organizations harness data to drive operational efficiency. Data accessibility at scale This simple example highlights the real challenge of supply-chain digital transformation. The information required to provide the picture that business leaders need to cope with the ever-growing challenges of modern-day supply chains requires data to be captured and brought together from a myriad of supply chain systems — owned, operated and controlled by numerous independent organizations. The battleground now is data accessibility at scale. There is much debate as to how this can be achieved. But it all boils down to four central questions. How do you connect to the various systems to gather the data you need, when they all have different maturities and interfacing capabilities? How can you ensure the data is accurate and trusted? 
How can the data be put together in a scalable, consistent and coherent way? How can the security and privacy of the respective organizations be maintained, ensuring only relevant data is captured and shared only with credentialed others? Intelligent data orchestration The breakthrough has come from new data mesh technology, which is based on distributed architecture and enables users to easily access and query data where it lives — without first transporting it to a data lake or data warehouse. Accessing data from systems used by organizations in the day-to-day running of their respective operations adds confidence to the accuracy and validity of the data, since if the respective domains do not maintain accuracy of data within their systems, their respective businesses will suffer. Using data mesh technology, each system connects directly and only to a central “conductor” platform. The conductor platform must be flexible enough to interface with target systems in a way that requires little or no change — for example, with APIs, via FTP, or perhaps even offering manual entry applications to facilitate data capture from systems that do not have external interface capabilities. To ensure the data is structured consistently in a way that humans and IT systems can easily interact with and use — and also to ensure that only relevant data is captured — “digital twins” are created within the central platform. These predefined twins represent objects within the supply chain. A digital twin of a consignment provides a central “object” to which all relevant data can be added. Intelligent data orchestration then captures and maps the data, defining and assigning policies as the consignment digital twin is established, helping the next piece of data capture, and ensuring only relevant data is sourced from the connected systems. For example, consignment and inventory data can be combined with transport schedules and allocated transport. 
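As a purely illustrative sketch of the digital-twin idea described above, the snippet below models a consignment twin to which records from siloed stakeholder systems are attached. The class, field names and source systems are hypothetical, not the schema of any actual conductor platform:

```python
# Hypothetical sketch of a consignment "digital twin": a central object to
# which data from siloed supply chain systems is mapped. All names and
# fields are illustrative, not any vendor's actual schema.
from dataclasses import dataclass, field

@dataclass
class ConsignmentTwin:
    consignment_id: str
    events: list = field(default_factory=list)  # contextualized data points

    def attach(self, source: str, payload: dict) -> None:
        """Map a record from one stakeholder's system onto this twin."""
        self.events.append({"source": source, **payload})

twin = ConsignmentTwin("CON-001")
# Order data from the manufacturer's ERP (hypothetical system).
twin.attach("erp", {"sku": "WIDGET-9", "qty": 500, "destination": "Leeds"})
# Allocated transport from the hauler's planning system.
twin.attach("transport", {"vehicle_reg": "AB12 CDE", "departs": "2022-06-01T08:00"})
# Raw telematics only becomes meaningful once paired with the consignment.
twin.attach("telematics", {"vehicle_reg": "AB12 CDE", "lat": 53.80, "lon": -1.55})

print(len(twin.events))  # 3 contextualized records for one consignment
```

The point is that a raw telematics reading only becomes useful once it is mapped onto the consignment it relates to.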
IoT data captured from a vehicle telematics system can then be added to the relevant consignment digital twin, offering real-time information contextualized to an individual consignment. Supply chain visibility When it comes to supply chain visibility, the requirements of each stakeholder are different. The telematics system used by a hauler, for example, will provide visibility of all its vehicles, all the time — something necessary for the company to monitor its vehicle fleet. The manufacturer whose goods are being transported, on the other hand, simply wants to know where their consignment is in real time. They only need the GPS data of a specific vehicle for a specific period of time while the goods are being moved. It’s crucial to consider the challenge of supply chain digital transformation in layers. Think of it like an orchestra. First you have the instruments. These are pretty dumb in isolation — capable of producing sounds. The same goes for data generation. Then you have the musicians playing individual instruments in a composition. Think here of pointed systems used by single stakeholders for a specific purpose. A telematics system, using the data from IoT devices to monitor vehicles, is a very pointed application. Standing in front of the musicians is a single conductor responsible for making the various musicians play together — delivering a more comprehensive piece of music collectively than they can individually. In the supply chain, the conductor platform captures and orchestrates data across multiple stakeholders to provide granular visibility. The layer above is the audience. In a concert hall, the audience listens live, but individual listeners may want to listen to a recording on a portable device. The audience in a supply chain varies. Different stakeholders want different things, and want to interact with different segments of the data in a way that delivers value for them. 
The application layer must be capable of delivering to multiple audiences in a variety of ways. But the central conductor platform must be able to deliver outputs that allow this flexibility. After all, without recording equipment, no music could be enjoyed on a portable device. The digital twins created by the conductor platform must be independent, so that they can be interrogated individually and in different ways. This will ensure that the outputs — the applications — can range from custom dashboards to event-driven push notifications via email or SMS, or via APIs. Operators and analysts As each digital twin acquires more data, the dynamic intelligent data orchestration uses that data to capture key events. These events can be distributed to operators across the supply chain, helping to create efficiencies and streamline processes. They can also be interfaced with other supply chain systems — driving automation of processes and removing paperwork. In addition, these events are plotted to form lifecycle records for each individual consignment. These lifecycle records can be used by analysts for macro analysis — bridging the current chasm between operators and analysts by ensuring they are all working from the same, contextualized data. Research suggests that businesses with optimal supply chains can halve their inventory holdings, reduce their supply chain costs by 15% and triple the speed of their cash-to-cash cycle. And, of course, traditional supply chain monitoring technologies — such as sensors to monitor temperature-controlled goods — have their uses, not least to mitigate disputes if something goes wrong during transit. But they offer very narrow and limited value. The real power of supply chain visibility technology is that it can start to move the supply chain to a point of autonomy, building on analysis of the detailed consignment lifecycle records generated. Decisions can be made — and processes started or paused — autonomously, for example. 
And predictions can be made at a macro level, rather than being limited to whether an individual product will arrive on time and in the right condition. To achieve this kind of transformation, technology needs to be far more embedded in the entire supply chain. Businesses need to be able to connect every system they interact with — and the data needs to flow back into the organization to drive automation. The data also needs to be structured in a way that lends itself to macro analysis, enabling smarter and more precise macro decision-making in the future. Supply chain visibility needs to offer more than a dot on a map. It’s time to join the dots and navigate the route to digitalization. Toby Mills is founder and CEO of supply chain visibility company Entopy. "
13,688
2,022
"Report: Frequency of cyberattacks in 2022 has increased by almost 3M | VentureBeat"
"https://venturebeat.com/2022/05/20/report-frequency-of-cyberattacks-in-2022-has-increased-by-almost-3m"
"Report: Frequency of cyberattacks in 2022 has increased by almost 3M Kaspersky has released a new report revealing a growing number of cyberattacks on small businesses in 2022 so far. Researchers compared the period between January and April 2022 to the same period in 2021, finding increases in the numbers of Trojan-PSW detections, internet attacks and attacks on Remote Desktop Protocol. In 2022, the number of Trojan-PSW (Password Stealing Ware) detections increased globally by almost a quarter compared to the same period in 2021, from 3,029,903 to 4,003,323. Trojan-PSW is malware that steals passwords, along with other account information, which then allows attackers to gain access to the company network and steal sensitive information. Internet attacks grew from 32,500,000 globally in the analyzed period of 2021 to almost 35,400,000 in 2022. 
These can include web pages with redirects to exploits, sites containing exploits and other malicious programs, botnet C&C centers and more. The number of attacks on Remote Desktop Protocol grew in the U.S. (while dropping slightly globally), going from 47.5 million attacks in the first four months of 2021 to 51 million in the same period of 2022. With the widespread shift toward remote work, many companies have introduced Remote Desktop Protocol (RDP), a technology that enables computers on the same corporate network to be linked together and accessed remotely, even when the employees are at home. With small business owners typically handling numerous responsibilities at the same time, cybersecurity is often an afterthought. However, this disregard for IT security is being exploited by cybercriminals. The Kaspersky study sought to assess the threats that pose an increasing danger to entrepreneurs. Read the full report by Kaspersky. "
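As a quick sanity check, the year-over-year changes implied by the figures quoted above can be computed directly. The numbers come from the report; the calculation itself is only illustrative:

```python
# Year-over-year changes computed from the figures quoted in the report
# (January-April 2021 vs. January-April 2022).
figures = {
    "Trojan-PSW detections": (3_029_903, 4_003_323),
    "internet attacks":      (32_500_000, 35_400_000),
    "RDP attacks (US)":      (47_500_000, 51_000_000),
}

for name, (y2021, y2022) in figures.items():
    growth = (y2022 - y2021) / y2021
    print(f"{name}: {y2021:,} -> {y2022:,} ({growth:+.1%})")
```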
13,689
2,021
"Report: More than 1B IoT attacks in 2021 | VentureBeat"
"https://venturebeat.com/business/report-more-than-1b-iot-attacks-in-2021"
"Report: More than 1B IoT attacks in 2021 There’s been incredible growth over the past few years of “smart” devices that comprise the expanding IoT universe, such as security cameras, gaming platforms, TVs, appliances, doorbells and more. However, these devices are increasingly becoming a vector of attack for individuals and businesses, according to a new report by SAM Seamless Network. The company reported more than one billion attacks occurred in 2021; more than 900 million of those were IoT-related. In fact, out of those studied, 50% of home and micro business networks experienced an attack or suspicious network traffic behavior in 2021, including DDoS, brute force attacks, phishing, and DPI policy-based attacks. Additionally, the researchers found rising activity from both the Mirai and Mozi malware families. 
Perhaps the most surprising result was which devices were most vulnerable to attack. According to the report, routers accounted for 46% of all attacks analyzed. Other common vulnerable devices included extenders & mesh (17%), access points (17%), NAS (5%), VoIP (4%), cameras (3%) and smart home devices (3%). The large volume of IoT-based attacks can be attributed to a number of factors. For one, there is a general lack of security in the IoT ecosystem, particularly for consumers or micro-businesses who may not be aware of the risk those devices pose. Additionally, there is great diversity in OEMs and operating systems in the IoT ecosystem, which can often lead to a fragmented approach to security updates (if they are done at all). The rising activity of Mirai and Mozi is also a significant trend to watch. We have seen variants of the notorious Mirai botnet routinely targeting IoT devices since 2016 and we continue to see it targeting IoT devices and home routers. In 2021, we also saw the Mirai and Mozi botnets continue to add significant new capabilities and broaden their scope of attack to target additional devices. The report is based on data collected from 132 million active IoT devices and 730,000 secure networks, which were anonymized for the purpose of the report. Read the full report by SAM Seamless Network. 
"
13,690
2,022
"The creation of the metaverse: What’s real, what’s hype and where we're headed | VentureBeat"
"https://venturebeat.com/virtual/the-creation-of-the-metaverse-whats-real-whats-hype-and-where-were-headed"
"Community The creation of the metaverse: What’s real, what’s hype and where we’re headed To those that were part of the dot-com era tech scene, 2022 has a familiar energy. But now it’s all about the metaverse. And, just as they did in 1993 when the World Wide Web was launched into the public domain, many are asking themselves, “what is it, anyway?” What’s real, what’s hype and where are we headed? The truth is, much like Internet 1.0 and all of its subsequent iterations, the metaverse is being defined as it’s being built. And contrary to what many believe, it’s more than just VR headsets and avatars. The metaverse is a place, an ecosystem, and above all else, an entirely new dimension. But to better understand this, it’s important to know how the metaverse is being developed. The decentralized metaverse At the moment, the metaverse is made up of a hodgepodge of ecosystems. 
Unlike the World Wide Web, there currently aren’t any standardized gateways (like Google Chrome or DuckDuckGo) that help metazens seamlessly navigate from one world to the next. Many speculate that Meta is making moves to own the gateway, but they’ve already lost the battle. That’s because much of the momentum for the development of the metaverse is happening within the decentralized foundation of the blockchain. One of the foundational principles being set by many of the metaverse’s founders is that it shall be governed by a decentralized autonomous organization (DAO). According to Cointelegraph, “a DAO is an entity with no central leadership. Decisions get made from the bottom-up, governed by a community organized around a specific set of rules enforced on a blockchain.” Think of it as the Internet’s version of democracy. In that regard, DAOs are owned and managed by their members. And, decisions are made through proposals that the group votes on during a specified period. It’s somewhat similar to how U.S. Congress would work if the representative government was replaced by the majority will of the people. Decentraland is the most notable place within the metaverse that claims to be decentralized. For those unfamiliar, Decentraland is a world that exists on the Ethereum blockchain that is controlled by individual players who can vote to change the policies that determine how the world behaves. But at the moment, Decentraland is more of a democracy than a decentralized universe. What keeps its users somewhat tethered to the platform are the limitations imposed by its ecosystem. Decentraland vs The Sandbox In Decentraland, a user’s ownership of their avatar, real estate and other digital items (NFTs) doesn’t necessarily transfer over to other platforms. 
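The proposal-and-vote mechanics described above can be sketched in a few lines. Token-weighted voting is one common DAO scheme; the addresses, weights and passing rule here are made up for illustration:

```python
# Minimal sketch of DAO-style governance: members vote on a proposal during
# a voting window, and the majority outcome is enforced on-chain. This is
# an illustration of one common scheme, not any specific DAO's contract.
votes = {
    "0xAlice": ("yes", 1_200),  # (choice, governance tokens held)
    "0xBob":   ("no",    400),
    "0xCarol": ("yes",   250),
}

tally = {}
for choice, weight in votes.values():
    tally[choice] = tally.get(choice, 0) + weight

passed = tally.get("yes", 0) > tally.get("no", 0)
print(tally, "-> proposal passes" if passed else "-> proposal fails")
```

In a real DAO the tally and the resulting policy change are executed by a smart contract rather than trusted off-chain code.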
A big part of the decentralized philosophy is being able to take custody of in-app items and use them outside of their native platforms. The inability to trade items freely and use them in multiple games or platforms is something Decentraland will need to work on if it truly wants to be the front door of the metaverse. In many ways, The Sandbox has an advantage over Decentraland when it comes to the liquidity of virtual assets like real estate. The opportunity to purchase virtual land and other assets exists on both platforms. However, The Sandbox gives users more flexibility via its integration with OpenSea. Decentraland only allows users to purchase and trade land and other items from its MANA marketplace. This brings up an important point. Overall, there is a major lack of cross-platform interoperability within the metaverse. Seamless interoperability across the whole ecosystem is the only thing that will enable true user ownership of digital assets. But this can be easily fixed with cross-chain bridges. According to Web3 Labs , “cross-chain bridges are going to play an important role in enabling interoperability between heterogeneous networks. A truly global blockchain infrastructure and ecosystem will be connected via bridges which will further strengthen the security of individual networks and support scalability.” Once cross-chain bridges are standardized and implemented across all platforms, the metaverse will become the vast, interconnected network that many dream it can be. And this will make those Nike and Bored Ape Yacht Club NFT holders extremely happy. A user-owned metaverse Decentraland and The Sandbox both give metazens an incredible amount of control over the online worlds they inhabit and create, which is a step in the right direction. However, the ability to carry one’s assets and digital selves from one platform to another is the hope of many working to develop a truly integrated and decentralized metaverse. 
Under this model, the users themselves will be the gateway, not the platform owners. And with a keen understanding of this, metaverse projects like Ready Player Me are taking advantage of blockchain technology to deliver user-owned experiences that are interoperable with one another. Ready Player Me is a cross-platform avatar for the metaverse that lets users create 3D avatars of themselves. And it works across more than 2,000 compatible apps and games. Any developer can integrate Ready Player Me into their apps and games using the company’s free avatar SDK. Cross-platform-minded innovations like these will help make the metaverse materialize much more quickly. A new dimension, a new mindset Against the backdrop of a growing mistrust of big tech and a push for more privacy, the metaverse has to be different than the Internet we know today. If it isn’t, users will flee from it out of fear of having every aspect of their lives recorded, controlled and exploited. And, to keep the metaverse from turning into some dystopian nightmare, a cartel of big players can’t have control over it. For the metaverse to develop into what it can and should be will take a new mindset. Closed, tightly controlled ecosystems like those created by the likes of Meta, Microsoft, Google and others will need to be a thing of the past. Walls will need to be broken down, borders removed, and freedoms granted. And to do that, the technologies driving the metaverse forward will need to work in harmony rather than in competition. This is the only way consumers will be able to experience the metaverse from a place of safety, greater privacy and less manipulation. Decentralized, cross-platform networks give users greater control over their experiences and take the power away from those who value profit over user privacy and control. Experiencing the metaverse through these types of platforms will open a new world of possibilities and give users more control of what they experience and do. 
It will be a new dimension, one full of possibilities. And to achieve it, we must learn from the lessons of the past. Veljko Ristic is Chief Growth Officer at SDV Lab. "
13,691
2,022
"Hackers steal $620M in Ethereum and dollars from Axie Infinity maker Sky Mavis' Ronin network | VentureBeat"
"https://venturebeat.com/games/hackers-steal-620m-in-ethereum-and-dollars-in-axie-infinity-maker-sky-mavis-ronin-network"
"Hackers steal $620M in Ethereum and dollars from Axie Infinity maker Sky Mavis’ Ronin network Axie Infinity lets players battle with NFT Axie characters. Sky Mavis reported that the Ronin Network, which supports its Axie Infinity game, has been hacked, and that thieves stole 173,600 ETH (worth $594.6 million) and $25.5 million in U.S. dollars, for a total of $620 million. If Sky Mavis, the maker of the Axie Infinity blockchain game, can’t recover the funds, that’s a huge hit to its overall treasury and a black eye for blockchain-based security, as the whole point of putting the game on the blockchain — in this case a Layer 2 network dubbed the Ronin Network — is to enable better security. The Ronin bridge and Katana Dex enabling transactions have been halted. For now, that means that players who have funds stored on the network can’t access their money. The stolen funds represent only a portion of the overall holdings of Sky Mavis and its Axie decentralized autonomous organization (DAO). “We are working with law enforcement officials, forensic cryptographers, and our investors to make sure all funds are recovered or reimbursed. All of the AXS, RON, and SLP on Ronin are safe right now,” said Sky Mavis in a statement. 
The hack will likely be considered one of the biggest in cryptocurrency history, at least according to data from Comparitech. The company said there was a security breach on the Ronin Network itself. Earlier today, the firm discovered that on March 23, Sky Mavis’s Ronin validator nodes and the Axie DAO validator nodes were compromised, resulting in 173,600 ETH (valued at $594.6 million at the moment) and $25.5 million being drained from the Ronin bridge in two transactions. So far, the stolen cryptocurrency hasn’t been transferred out of the attacker’s account, the company said. The validator nodes are external entities that verify the information on the blockchain and compare notes with each other to ensure the blockchain’s information is accurate. A blockchain is, in principle, a secure and transparent digital ledger, and Ethereum is one of the biggest networks based on the technology. Ethereum is both a blockchain protocol and the name of the cryptocurrency based on that protocol. Sky Mavis uses the blockchain to verify the uniqueness of nonfungible tokens (NFTs), which can uniquely authenticate digital items such as the Axie creatures used in the Axie Infinity game. NFTs exploded in popularity last year and enabled Sky Mavis to raise $152 million at a $3 billion valuation in October. But blockchain games are also a flashpoint in the industry now, as critics say they are full of Ponzi schemes, rug pulls, and other kinds of anti-consumer scams. Ethereum has its drawbacks: transactions on it are slow and consume a lot of energy, as the protocol taps a lot of computers worldwide to do the verification work. To alleviate that, companies like Sky Mavis have created Layer 2 solutions such as the Ronin Network. 
That network can execute transactions far more quickly, inexpensively, and with smaller environmental impact than doing transactions on Ethereum itself. But this off-chain processing comes at a risk, as Sky Mavis has just learned. Sky Mavis set up a network of computing nodes to validate transactions on its Ronin Network, but if hackers can gain 51% control of that network, then they can create fake transactions and steal funds stored on the network. Sky Mavis said that the attacker used hacked private keys in order to forge fake withdrawals. Sky Mavis said it discovered the attack this morning after a user reported being unable to withdraw 5,000 ETH from the bridge. Details about the attack Sky Mavis’ Ronin chain currently consists of nine validator nodes. In order to recognize a deposit event or a withdrawal event, five out of the nine validator signatures are needed. The attacker managed to get control over Sky Mavis’s four Ronin validators and a third-party validator run by the Axie DAO. The validator key scheme is set up to be decentralized so that it limits attack vectors like this one, but the attacker found a backdoor through Sky Mavis’ gas-free RPC node, which the attacker used to get the signature for the Axie DAO validator. This traces back to November 2021, when Sky Mavis requested help from the Axie DAO to distribute free transactions due to an immense user load. The Axie DAO allowlisted Sky Mavis to sign various transactions on its behalf. This was discontinued in December 2021, but the allowlist access was not revoked. “Once the attacker got access to Sky Mavis systems, they were able to get the signature from the Axie DAO validator by using the gas-free RPC,” Sky Mavis said. “We have confirmed that the signature in the malicious withdrawals match up with the five suspected validators,” said Sky Mavis. 
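The five-of-nine approval logic at the heart of the exploit can be sketched in a few lines of Python. This is an illustrative model only, not Sky Mavis's actual validator code: the validator names and the `withdrawal_approved` helper are invented for the example, and set membership stands in for real cryptographic signature verification.

```python
# Illustrative m-of-n approval sketch: a withdrawal is recognized only
# when at least `threshold` of the known validators have signed it.
KNOWN_VALIDATORS = {f"validator-{i}" for i in range(1, 10)}  # nine validators

def withdrawal_approved(signers: set, threshold: int = 5) -> bool:
    """A withdrawal is valid once `threshold` distinct known validators
    have signed it; unknown signers are ignored."""
    return len(signers & KNOWN_VALIDATORS) >= threshold

# The attack: control of Sky Mavis's four validators plus the Axie DAO
# validator yields five signatures, enough to forge a withdrawal.
attacker_signers = {"validator-1", "validator-2", "validator-3",
                    "validator-4", "validator-5"}
print(withdrawal_approved(attacker_signers))     # approved under 5-of-9
print(withdrawal_approved(attacker_signers, 8))  # blocked by the new 8-of-9 rule
```

Raising the threshold from five to eight, as Sky Mavis did after the incident, means the same five compromised keys no longer clear the bar.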
Actions taken Sky Mavis said it moved swiftly to address the incident once it became known, and it is actively taking steps to guard against future attacks. To prevent further short-term damage, the company has increased the validator threshold from five to eight. “We are in touch with security teams at major exchanges and will be reaching out to all in the coming days,” the company said. “We are in the process of migrating our nodes, which is completely separated from our old infrastructure.” The company has also temporarily paused the Ronin bridge to ensure no further attack vectors remain open. Binance has also disabled its bridge to and from Ronin to err on the side of caution. The bridge will be opened up at a later date, once the company is certain no more funds can be drained. Sky Mavis has also temporarily disabled the Katana DEX due to the inability to arbitrage and deposit more funds to the Ronin Network. And it is working with Chainalysis to monitor the stolen funds, as transactions on the blockchain can be tracked. Next steps The company said it is working directly with various government agencies to ensure the criminals are brought to justice. “We are in the process of discussing with Axie Infinity / Sky Mavis stakeholders about how to best move forward and ensure no users’ funds are lost,” the company said. Originally, Sky Mavis chose the five-out-of-nine threshold for validators because some nodes failed to catch up with the chain or got stuck in a syncing state. Moving forward, the threshold will be eight out of nine. The company will be expanding the validator set over time, on an expedited timeline. Most of the hacked funds are still in the alleged hacker’s wallet: https://etherscan.io/address/0x098b716b8aaf21512996dc57eb0615e2383e2f96 [Update: Blockchain Intelligence Group, a global cryptocurrency intelligence and compliance company, said the money has now been moved elsewhere and it is tracking it. 
Here are the details: Funds sent to exchanges: FTX (Exchange): 1,219.982731106253 ETH; Crypto (Exchange): 1 ETH; Huobi (Exchange): 3,750 ETH. So far, 4,970 ETH ($16,931,672.478) has already moved to exchanges. The amount unspent in four addresses could potentially move in the same direction. The total unspent amount in these addresses: 177,192.66 ETH.] Sky Mavis is figuring out exactly how this happened. “As we’ve witnessed, Ronin is not immune to exploitation and this attack has reinforced the importance of prioritizing security, remaining vigilant, and mitigating all threats. We know trust needs to be earned and are using every resource at our disposal to deploy the most sophisticated security measures and processes to prevent future attacks,” Sky Mavis said. The company said that the ETH and USDC deposits on Ronin have been drained from the bridge contract. Sky Mavis said it is working with law enforcement officials, forensic cryptographers, and its investors to make sure there is no loss of user funds. All of the AXS, RON, and SLP on Ronin are safe right now, the company said. “As of right now users are unable to withdraw or deposit funds to Ronin Network. Sky Mavis is committed to ensuring that all of the drained funds are recovered or reimbursed,” the company said. 
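As a sanity check on the figures quoted above, the exchange flows can be tallied directly. The dictionary below reproduces the article's numbers; the `implied_eth_price` variable is a back-of-the-envelope derivation from the quoted pair of figures, not a number from the report.

```python
# Tally of funds reportedly moved to exchanges (amounts in ETH,
# taken directly from the quoted Blockchain Intelligence Group data).
to_exchanges = {
    "FTX": 1219.982731106253,
    "Crypto": 1.0,
    "Huobi": 3750.0,
}

moved_eth = sum(to_exchanges.values())       # roughly 4,970.98 ETH
usd_quoted = 16_931_672.478                  # USD figure quoted in the article
implied_eth_price = usd_quoted / 4970        # roughly $3,407 per ETH at the time

print(f"Moved to exchanges: {moved_eth:,.2f} ETH")
print(f"Implied ETH price:  ${implied_eth_price:,.2f}")
```

The sum matches the article's "4,970 ETH" figure (the article rounds down the fractional remainder).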
"
13,692
2,022
"The industrial metaverse: A game-changer for operational technology | MIT Technology Review"
"https://www.technologyreview.com/2022/12/05/1063828/the-industrial-metaverse-a-game-changer-for-operational-technology"
"In association with Nokia In association with: Content from MIT Technology Review Insights This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review's editorial staff. In association with: The industrial metaverse: A game-changer for operational technology How enterprises can unlock the full potential of the industrial metaverse. Even as technologists are trying to envision what the metaverse will bring for businesses and consumers, the industrial metaverse is already transforming how people design, manufacture, and interact with physical entities across industries. Thierry Klein, president of Bell Labs Solutions Research at Nokia While definitions abound and it remains to be seen how the industrial metaverse will fully unfold, digital twins are increasingly viewed as one of its key applications. Used for everything from creating ecosystems when planning a new city to working out iterations of manufacturing processes, digital twins were first proposed in 2002 and later became a vital technology when the fourth industrial revolution ( Industry 4.0 ) accelerated automation and digitization across industries. Simply put, a digital twin is a virtual replica of a product or process used to predict how the physical entity will perform throughout its lifecycle. BMW , for instance, created a virtual twin of its production plant in Bavaria before building the physical facility. Boeing is using a digital twin development model to design its airplanes. And “Virtual Singapore” is a digital representation of the Southeast Asian nation that the government created to support its policy decisions and test new technologies. The increasing buzz surrounding digital twins is fueling expectations for the industrial metaverse. 
“The market opportunity is huge and the amount of investment capital that is going into this particular space is very, very significant,” says Raghav Sahgal, president of the cloud and network services business at Nokia. According to ABI Research, revenues for industrial digital twin and simulation and industrial extended reality will hit $22.73 billion by 2025 as organizations use Industry 4.0 tools such as artificial intelligence (AI), machine learning, edge computing, and extended reality to accelerate digital transformation.
Potential of the industrial metaverse market:
- Consumer metaverse (virtual spaces revenue, global): consumer-appeal driven; reliant on trends and network effects; fragmented monetization, with growth from 2026.
- Enterprise metaverse (immersive collaboration and related cloud revenue, global): business-value driven; solution and device innovation; good monetization potential, with growth from 2025.
- Industrial metaverse (digital twin and simulation and industrial extended reality revenue, global): operational-results driven; industrial automation focus; high monetization potential, with early traction.
Source: ABI Research, Evaluation of the Enterprise Metaverse Opportunity, Third Quarter, 2022.
Experts say a convergence of maturing technologies is fueling the growth of the industrial metaverse. Foremost among these, according to Sahgal, is 5G. “This really is a very big inflection point in the industry,” he says. As he explains, “5G creates interesting new vectors of capability” that enable lower latency (delay) and more precise exchange of data, both key for driving metaverse applications. Beyond twinning: operational insight Creating digital twins is just one of the many advantages of the industrial metaverse. 
Klein says the industrial metaverse can reach “a much larger scale with increasing complexity by creating digital twins of entire systems such as factories, airports, cargo terminals, or cities—not just digital twins of individual machines or devices that we have seen so far.” He points to Nokia Bell Labs’ technology partnership with indoor vertical farming company AeroFarms, started in 2020, as an example of how the industrial metaverse’s immersive reality, sensing, and machine-learning capabilities can be used to gain operational insights. “It’s an early example,” he notes, “and you can see how some of the key technological elements are being developed to build toward a full-scale metaverse.” By combining its AI-based autonomous drone-control solution and advanced machine-learning capabilities with machine vision tools, Nokia Bell Labs has created a technology that can track the growth of millions of plants. “We have developed a completely autonomous drone solution with multiple drones flying through this farm,” says Klein. That allows the farm to monitor details such as the height and color of its plants, spot poor growth areas, and predict the production yield. “We actually built a complete digital twin of the farm that gives the growers a real-time picture of the entire production throughout the farm,” says Klein. With data analysis, the farm can optimize its water, energy, and nutrient consumption; speed up troubleshooting; improve accuracy in yield forecasts; and maintain consistently high quality. Multistakeholder collaboration The industrial metaverse could also bolster remote collaboration and optimize processes, says Klein. Users could tap into its capabilities as a dynamic, multistakeholder ecosystem, using intelligent analytics to process datasets and gain deeper insights into problems. Nokia’s collaboration with Taqtile is one example. 
The companies joined forces in 2021 to offer an augmented reality training and work-instruction platform, which leverages industrial edge cloud computing, the internet of things, and 4G or 5G networks, and enables users to communicate in real time with experts. “It all comes down to having access to more information and better understanding that may not be visible to the naked eye, giving you more insight about what that information means,” says Klein. The platform enables users to extract the most useful information from complex data, allowing them to make intelligent decisions, interact with and control the environment around them, and go back to collaborative design. He likens the metaverse to the internet in the early 1990s. “I think the metaverse will be similar where we cannot imagine all the applications and its impact on our personal and professional lives right now,” he says. “There are, however, already a lot of practical examples and very concrete progress is being made.” Still, Sahgal says certain obstacles need to be overcome. Chief among them is scalability, one of the metaverse’s biggest challenges, which makes 5G investments crucial because the metaverse “will be pretty immense in terms of data and video consumption.” Metaverse essentials For mission-critical industrial applications, the metaverse will require low latency, massive machine communications, and high reliability, in addition to fast network speeds. Edge computing is another must-have because of the requirement for almost zero latency: decentralized local edge data centers close to users will be needed for people to interact with one another and use devices to access the metaverse. Getting to the metaverse will take more than sophisticated devices, though. It will need to be a collaborative effort. “Nobody will actually own or dominate the metaverse,” says Sahgal. 
“It'll be all kinds of applications, devices, and other things coming together.” To facilitate that, exposing the network as code will be an important foundation. “The network’s capabilities are represented as a piece of code to the application development community, which they can embed into their applications and then consume those capabilities,” he explains. “And hence, the network becomes very programmable by the ecosystem.” In addition, software-as-a-service will help more organizations access the industrial metaverse and, in turn, facilitate agility and rapid innovation. Meanwhile, Klein compares building the metaverse to having the right selection of blocks and interfaces to connect them in various ways. “Imagine that you bought a Lego set to build a model plane,” he explains. “Initially, you’re quite happy with building that plane, but after a while you are bored and you want to build something else, say a boat, or a car, or a house. How do you build something different that matches your creative interest? It’s the same foundational building blocks from the original Lego set that you reuse, but you put them together in different ways.” For the industrial metaverse, those building blocks are the enabling modules, applications, and software assets. “You will connect them together in different ways,” says Klein, “using application programming interfaces to create new solutions that solve your specific industrial challenges and match the business logic of your use cases.” An ecosystem of partners, technology and network providers, data producers and owners, and application developers will contribute to these building blocks. Collectively, they facilitate a digital marketplace and lead to new and unprecedented levels of innovation, creativity, and agile and collaborative service creation. 
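Klein's Lego analogy and Sahgal's "network as code" point can be illustrated with a toy composition sketch. The capability names and the `compose` helper below are invented for illustration and do not correspond to any real Nokia interface.

```python
# Hypothetical sketch of "network as code": network capabilities are
# exposed as callable building blocks that application developers
# select and chain through an API, Lego-style.
from typing import Callable, Dict

# A registry of capability "blocks" the network exposes (illustrative).
CAPABILITIES: Dict[str, Callable[[dict], dict]] = {
    "low_latency_slice": lambda ctx: {**ctx, "latency_ms": 5},
    "edge_compute":      lambda ctx: {**ctx, "compute": "edge"},
    "precise_position":  lambda ctx: {**ctx, "positioning": "cm-level"},
}

def compose(*names: str) -> Callable[[dict], dict]:
    """Chain the selected capability blocks into one application pipeline."""
    def pipeline(ctx: dict) -> dict:
        for name in names:
            ctx = CAPABILITIES[name](ctx)
        return ctx
    return pipeline

# A drone-inspection app picks only the blocks it needs.
app = compose("low_latency_slice", "edge_compute")
print(app({"app": "drone-inspection"}))
```

The same registry of blocks can be recombined into a different "boat or car or house": a new application simply composes a different subset.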
As with any innovative technology, security is paramount, especially because cyberattacks have surged in recent years, with criminals employing increasingly sophisticated technology such as AI, ransomware-as-a-service, and deepfakes. Sahgal says cybersecurity will become even more important in the industrial metaverse: “That's where you're dealing with very mission-critical data; if that gets compromised, it could have a huge impact on that specific industry as well.” Keeping people’s identities secure and protecting the data shared within virtual collaboration will also be integral, especially across a decentralized ecosystem of stakeholders who may not have pre-established business relationships with one another. Not if, but when The greatest share of enterprises and communication service providers believe the metaverse will be here within 10 years and that organizations need to start preparing now, according to a joint survey by Nokia and Gartner Peer Insights (figures are communications service providers / enterprises):
Enthusiasm for the metaverse is intensifying:
- 49% / 42% believe it will transform the way we work.
- 24% / 30% see the opportunity for new shared experiences and augmented reality.
- 11% / 11% believe it will benefit industry.
- Only 1% / 6% consider the metaverse to be hype.
Organizations must prepare now:
- 23% / 10% think the metaverse is already here.
- 63% / 75% think the metaverse is 5-10 years away but organizations must prepare now.
- 12% / 14% think it’s a long way off and has nothing to do with them.
Source: Nokia, 2022.
Real potential While increased demand for performance, ultra-reliability, and advanced cybersecurity needs to be addressed before the metaverse can be fully utilized on a global scale, experts say companies should not wait to capitalize on this new wave of technology. “It starts with awareness,” says Sahgal. 
“Acknowledging that this is not an experiment anymore.” But awareness should also be accompanied by the right technology—intelligent, autonomous, cloud-native networks with high bandwidth and ultra-low latency. Industries will also have to modernize their infrastructure to make it much more open and accessible to participate in the industrial metaverse. Both Sahgal and Klein say they believe early users of the industrial metaverse will be concentrated in certain industries, particularly those involving physical assets, such as manufacturing, logistics, and transportation. The health-care industry could also benefit from metaverse applications, specifically in avenues such as telemedicine and robotic surgery. “The use cases are infinite, quite frankly, because you can apply it almost to any industry,” says Sahgal. While much about the metaverse remains unknown, its endless possibilities seem to be the only certainty. "
13,693
2,022
"Gateway to the metaverse economy: 5 transformative functions of NFTs | VentureBeat"
"https://venturebeat.com/datadecisionmakers/gateway-to-the-metaverse-economy-5-transformative-functions-of-nfts"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Gateway to the metaverse economy: 5 transformative functions of NFTs Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As nonfungible tokens ( NFTs ) step into the mainstream, they are nearing a ‘coming of age.’ In this next phase, investors are rapidly discovering new use cases for NFTs beyond the initial frenzy of digital artwork and collectibles. A prime example is NFTs’ seamless connection with the metaverse industry , a fast-paced development which will inevitably shape NFT application and exponentially grow adoption in the long term. Significantly, metaverses hold great promise for a more open and fair economy – one that is decentralized and backed by the blockchain. But, in essence, NFTs will serve as the gateway to a metaverse, as they empower the identity, community and socialization the metaverse economy is being built upon. 
While the first NFT was minted in 2015, it is safe to say recent developments in the metaverse industry are now setting NFTs on a new path to the future. As a result, an abundance of speculative opportunities is emerging for businesses, investors and entrepreneurs alike. In particular, the metaverse is relying on NFTs to fulfill the five following transformative functions. Opening the next frontier of gaming The gaming industry is already outpacing every other form of entertainment spending, including amusement parks, movie theaters, concerts and live spectator sports. Therefore, it should not be too surprising that when Mark Zuckerberg announced Facebook’s name change, he specified gaming as one of the leading motivators for the rebranding. Gaming has long been associated with virtual reality (VR), so consumers are already familiar with 3D avatars and world-building. VR gaming today is largely conducted via standalone applications on a desktop, mobile phone or VR headset. This offers a more immersive experience compared to traditional video games. But in a metaverse, which is essentially a unified and interoperable VR space, players can interact with each other and play games through human-computer interaction (HCI). The single interoperable environment opens the next frontier of gaming, enhanced by social gaming, play-to-earn (P2E) and portable game assets. Notably, NFTs hold the keys to unlocking all of these concepts. For example, NFTs serve as the in-game currency for P2E. Basically, the more value you add to the game, the more you earn. Moreover, the P2E game itself is largely impartial and more democratized than traditional platforms. Thanks to the ownership capabilities afforded by NFTs, players fully own their assets, instead of the earnings being controlled by a centralized game operator. 
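The ownership guarantee described above can be illustrated with a toy, in-memory ledger loosely modeled on the ERC-721 pattern: each token ID maps to exactly one owner, and only the current owner can transfer it. The `NFTLedger` class and its methods are hypothetical stand-ins for an on-chain contract, not any real game's code.

```python
# Toy NFT ownership ledger: each token has exactly one owner, and no
# central operator can reassign an asset out from under the player.
class NFTLedger:
    def __init__(self):
        self._owners = {}  # token_id -> owner address/name

    def mint(self, token_id: int, owner: str) -> None:
        if token_id in self._owners:
            raise ValueError("token already minted")
        self._owners[token_id] = owner

    def owner_of(self, token_id: int) -> str:
        return self._owners[token_id]

    def transfer(self, token_id: int, sender: str, recipient: str) -> None:
        # Only the current owner can move the asset.
        if self._owners.get(token_id) != sender:
            raise PermissionError("sender does not own this token")
        self._owners[token_id] = recipient

ledger = NFTLedger()
ledger.mint(1, "alice")
ledger.transfer(1, "alice", "bob")
print(ledger.owner_of(1))  # → bob
```

A transfer attempted by anyone other than the recorded owner raises an error, which is the essence of the "players fully own their assets" claim.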
Advancing the creator economy NFTs are intended to virtually represent innovative or unique assets. While they are not formally a currency, items minted as NFTs can be sold and traded on virtual platforms. Armed with this transactional power, NFTs are ushering in the next wave of the creator economy. The creator economy is technically as old as mankind itself, built by artists, writers and other creators across physical mediums. But the term ‘creator economy’ was only officially coined amid the digital age. Today, over 50 million independent content creators, curators and community builders are part of the creator economy in the United States. With NFTs pegged to the decentralized blockchain, each asset contains codes and features that cannot be replicated. Furthermore, the asset cannot be stolen and its value is exclusive to the owner. The code can embed additional rights and obligations, such as sell-on fees that afford the creator a percentage of any subsequent transactions of the digital asset. The key mechanisms of ‘smart contracts’ and ‘copyright tracking’ enhance IP rights and ownership, solving major problems creators have faced in the digital age. The metaverse industry is a major step forward for the creator economy, offering a virtual world where content can gain value and creators can gain equity for their work. These defining features are only possible because the product is tied to secure, transparent and decentralized NFTs. 
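The "sell-on fee" mechanic works like a royalty split coded into the asset: every secondary sale routes a fixed percentage back to the original creator. The sketch below is a minimal illustration under an assumed 5% royalty rate; `settle_resale` is an invented helper, not any particular marketplace's implementation.

```python
# Illustrative royalty split for a secondary NFT sale: the creator's
# cut is embedded with the token and applied on every resale.
def settle_resale(price: float, royalty_rate: float = 0.05) -> dict:
    """Return the payout split for a secondary sale.
    `royalty_rate` is the creator's embedded cut (assumed 5% here)."""
    creator_cut = round(price * royalty_rate, 2)
    return {"creator": creator_cut, "seller": round(price - creator_cut, 2)}

print(settle_resale(1000.0))  # → {'creator': 50.0, 'seller': 950.0}
```

In practice this logic lives in a smart contract (standards such as EIP-2981 formalize the royalty query on Ethereum), so the split executes automatically on-chain rather than relying on the marketplace's goodwill.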
NFT avatars are also a critical concept in the socialization system of a metaverse, representing not just a player’s actual self but also an identity they imagine. Users could use NFT assets to build out this identity and gain access to new experiences in a metaverse. In a metaverse, NFTs can be perceived as the extension of our real-life identities, granting us each complete ownership, control and flexibility for creating our virtual persona. Bridging physical and digital worlds It’s necessary to note the social experiences of the metaverse model can be transitioned offline as well, with NFTs effectively bridging the gap between physical and digital worlds. For example, the Bored Ape Yacht Club (BAYC), a conglomerate of primate avatars created by four pseudonymous founders, is making inroads in connecting VR and physical reality. Owners of BAYC NFTs gain admission to exclusive clubs and features of the community, such as first access to new NFT collections, NFT enhancements and even ‘in-real-life’ private events. In November 2021, BAYC hosted an exclusive yacht party and warehouse rave at ApeFest in Manhattan. Building up the virtual real estate market The metaverse industry is also bringing real estate into a new realm, with some “parcels” of virtual property being assessed to the tune of millions of dollars. For example, in the browser-based metaverse Decentraland, an asset of virtual land recently sold for $2.4 million to crypto investor Tokens.com. Additionally, in December 2021, a user spent $450,000 to become a neighbor of the rapper Snoop Dogg’s Snoopverse, an interactive world he is developing in the Ethereum-based platform Sandbox. Effectively, NFTs represent the virtual plot of land and allow for it to be transacted. To maintain the value of a metaverse’s digital real estate market, space is inherently limited. For instance, Decentraland comprises 90,000 parcels of land that each measure around 50 feet by 50 feet. 
This maintains ‘digital scarcity,’ a concept that has long been discussed in relation to cryptocurrency. A recent position paper by JPMorgan found the average price of a parcel of virtual land across the four main metaverses doubled in the six-month period from June to December 2021, shooting from $6,000 to $12,000. Virtual land is growing in value just as fast as physical land, yet there are no interest rate increases that will curb or slow down price acceleration. Looking ahead The metaverse industry is still in its infancy, continuously being shaped by cryptocurrency trends and then reshaped by evolving digital behaviors. NFTs are on a similar track. While it may be fairly straightforward how NFTs enable ownership and virtual identity, the metaverse model creates an interoperable environment with seemingly endless possibilities for consumers to gather, socialize, play, earn and transact. Therefore, looking forward, businesses must move the needle on their NFT investments from exploration to activation, as NFTs are the linchpin for creating value and engaging users in the metaverse economy. Jonathan Teplitsky is the CEO of PipeFlare, a platform aimed at helping game developers monetize their work. 
"
13,694
2,022
"How the value of NFTs can evolve beyond speculation | VentureBeat"
"https://venturebeat.com/virtual/how-the-value-of-nfts-can-evolve-beyond-speculation"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How the value of NFTs can evolve beyond speculation Share on Facebook Share on X Share on LinkedIn With all the buzz around the metaverse , cryptocurrencies, NFTs and other digital collectibles have become critical elements of the Web3 space. However, there’s a need for crucial conversations about ownership in the metaverse, especially for digital collectibles and the real value they offer. At MetaBeat 2022 , a major highlight for attendees using the virtual Decentraland platform were the VentureBeat badges placed at random through the entire virtual landscape (the badges could be collected in exchange for a MetaBeat-exclusive NFT , which could then be leveled up through social interaction.) In a fireside chat with Lewis Ward, research director of gaming and esports at IDC, Alun Evans, cofounder and CEO at Freeverse — the company responsible for the digital collectibles at MetaBeat — said Freeverse aims to reform speculation in the world of digital collectibles. “Many people have heard of NFTs, and some people have an opinion of them. 
Many people think they are great, some people are very bullish on them. On the other hand, some people might think they are not so good. I share this idea that NFTs are only based on speculation. People collect things in the real world — they collect credit cards and guitars and stamps and fancy video games. But they are nothing more than collectibles. So, where is the real value in them?” He has a point there. As speculation has waned, NFTs — the mark of digital ownership in the metaverse — have taken a hit. Analytics platform Nonfungible.com highlights the drop in the market over the last two quarters, noting that “a massive 25% drop was observed in terms of USD traded between Q1 and Q2 2022 with a global volume of about $8 billion in Q2 2022.” Speculation is bad, valuation is good While some outright dismiss the notion of NFTs, Evans sees a parallel in the real world. “When you think about the size of the global economy compared to the size of the collectible economy, the collectible economy is a very small component and our vision of Freeverse asks the question, ‘can we make digital ownership in a way that is more than just collectibles?’ That way, the tokens themselves are valued by more than speculation as to how rare they are, and actually valued by how useful they are to other people.” According to Evans, what often happens when people buy NFTs is that they own a token, get a private URL — which points to a private server somewhere. But the problem, he said, is that it’s a website and all the content associated with the NFT exists on that private server, which can be hacked, or taken down. “Yes, it can be changed; but also, the company may forget to pay their AWS bill. And so, the content might disappear tomorrow, in which case, if that happens, what is the value in it? 
It’s literally nothing,” he added. Based in Barcelona, Spain, Freeverse wants to change the NFT valuation narrative, having developed a “fraud-proof layer-2-based technology” capable of being deployed on the main blockchain networks — including Ethereum, Polkadot and TRON. Evans noted that Freeverse enables easy implementation of its technology by third-party apps and permits trading in fiat currencies like the U.S. dollar, without resorting to a cryptocurrency exchange. Increasing the intrinsic value of NFTs Evans said the tokens that Freeverse issues are non-fungible, but added that their properties can evolve and change based on how the token is used and what the owner actually does with it. As a result, the value that other people are going to pay for that token depends on how it’s been used. “For example, you did have a free token so it is literally worth nothing starting off with, but it can be leveled up according to how it’s used. Therefore, the value that someone’s going to pay for that token is 100% dependent on what the owner has done with it, rather than artificial scarcity.” A key example of how NFTs can evolve, appreciate and degrade, Evans noted, is seen in a game that Freeverse helped develop for the U.S. market. It ties the physical fitness of players to their training regimen, and this affects how much they can be traded for. As the player’s avatar value improves based on training, it can be bought, sold and traded. “The players are traded for real money — U.S. dollars, euros and pounds sterling — with other users from around the world. And suddenly there is retention, because you are playing the game and enjoying it. You are saying ‘if I don’t come back and train my players every day, I am going to lose money or at least potentially lose money.’ So you’re adding in the elements where just a little extra gameplay can make players stick around longer,” said Evans. 
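Freeverse’s actual implementation is proprietary, but the “living asset” mechanic Evans describes, a token whose traded value tracks usage rather than artificial scarcity, can be sketched in a few lines of Python. All names and the pricing rule below are hypothetical, purely for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class LivingAsset:
    """Toy model of a 'living' NFT: value derives from use, not rarity."""
    owner: str
    level: int = 0                               # evolves with usage
    history: list = field(default_factory=list)  # auditable usage trail

    def train(self, sessions: int) -> None:
        # Using the asset levels it up and records the activity.
        self.level += sessions
        self.history.append(("train", sessions))

    def market_value(self, base_price: float = 1.0) -> float:
        # A free token starts at the base price and appreciates
        # only through what its owner has done with it.
        return base_price * (1 + 0.1 * self.level)

player = LivingAsset(owner="alice")
player.train(sessions=5)
print(player.market_value())  # prints 1.5: priced by use, not scarcity
```

A real system could just as easily decrement `level` when the owner skips training, which is exactly the retention incentive Evans describes.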
This evolving asset class can potentially turn things around for companies and businesses that intend to keep engagement constant on their NFT-facing games. It can be a whole game-changer as the metaverse comes of age, too. In each case, with living assets, Evans said Freeverse addresses a key issue of standard NFTs: “the state of every asset at every point in time is certifiable on-chain, not via external links to private servers that can arbitrarily corrupt this agreement.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,695
2,022
"Why the metaverse will rely on blockchain frameworks to connect to physical devices | VentureBeat"
"https://venturebeat.com/virtual/new-blockchain-framework-connects-physical-devices-to-the-metaverse"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Why the metaverse will rely on blockchain frameworks to connect to physical devices Share on Facebook Share on X Share on LinkedIn Following the hype fueled by Meta’s (formerly Facebook) pivot last year, the metaverse has been entering more organizations and steadily moving on an upward trajectory. With COVID-19 accelerating digital transformation , enterprises are now keener to adopt new technologies, even a nascent one like the metaverse. Having evolved from Neal Stephenson’s initial idea (from his 1992 novel Snow Crash ), Web3 and the metaverse today offer several immersive opportunities that brands are keying into. According to Gartner , “enterprises will experiment with the metaverse, connecting, engaging and incentivizing human and machine customers to create new value exchanges, revenue streams and markets.” The metaverse’s market value over the next seven years is likely to be massive, with McKinsey estimating it will reach $5 trillion by 2030. 
But what are the underlying technologies that will power this new promise of a multi-trillion dollar immersive virtual world? Experts say it will be powered by a convergence of 5G, AR, VR, AI and blockchain — and many companies are on the quest to build into that tech stack. One such company is MachineFi Lab — the core developer of IoTeX, a decentralized blockchain platform that enables interactions between humans and machines. Today, MachineFi Lab announced the release of W3bstream, “a blockchain-agnostic infra with the power to disrupt the machine economy where innovation until now has remained stagnant,” according to the company’s press release. Having previously built IoTeX, “MachineFi Lab is uniquely positioned to help connect the metaverse with real-life devices,” said Dr. Raullen Chai, founder and CEO at MachineFi Lab. Chai said that “W3bstream connects the real world to Web3, serving as an open, decentralized off-chain computing infra that sits between the blockchain and smart devices.” He added that W3bstream allows builders to connect Web3 token incentives with real-world activity confirmed by user-owned smart devices, expanding the Web3 design space into the real world. Built on blockchain MachineFi Lab is harnessing the power of blockchain with an end-to-end solution to distribute, orchestrate and monetize large numbers of IoT devices as part of a unified machine network. The MachineFi platform is built to enable developers to connect billions of machines with Web3 infrastructure. By joining the machine economy, people can monetize their devices and associated digital assets globally. “Today, numerous machines have already started collaborating, producing and distributing, and they consume information and resources collectively, forming a heterogeneous network of machines,” said Chai. 
W3bstream provides cutting-edge tools and middleware that reduce development timelines and costs by at least 50% for builders, Web2 businesses and smart device makers, explained Chai. He claims W3bstream unlocks the $12.6 trillion reward economy for millions of people globally as they carry out everyday activities — such as exercising, driving safely, sleeping well, being eco-friendly, visiting sites and attending events. “This real-world data protocol enables data ownership, reward systems for everyday activities and data sharing, and allows developers to build MachineFi applications very quickly and inexpensively,” Chai said. He added that W3bstream offers “x-and-earn use cases, including sleep and earn, drive and earn, and exercise and earn.” Chai cited HealthBlocks as a prime example: a Web3 health app that has changed how users interact with and benefit from intelligent wearable devices and machines by motivating them to lead healthy lifestyles. More use cases of W3bstream include “proof of anything, fast and easy migration to the blockchain, product tokenization and verifiable transparency processes.” Technology twins: IoT and the metaverse Smart devices and machines connected to the internet will significantly impact our lives in the future. Experts estimate that by 2030 people, businesses and organizations worldwide will own about 125 billion devices, generating a $12.6 trillion machine economy. Machines could replace over 30% of the human workforce in eight years — and data, powered by AI, could generate $13 trillion in global economic value by the start of the next decade. Already touted as technology twins, IoT and the metaverse have specific responsibilities. For instance, IoT will enable the metaverse to analyze and interact with the physical world. For its part, the metaverse will act as a 3D user interface for IoT devices. 
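W3bstream’s real protocol is considerably more involved (it produces on-chain proofs), but the x-and-earn pattern described above reduces to three steps: a registered device signs its activity data, an off-chain verifier checks the signature, and a reward is computed for a contract to pay out. A rough Python sketch, with the device registry, activities and rates all invented for illustration:

```python
import hashlib
import hmac

DEVICE_KEYS = {"watch-01": b"key-shared-at-registration"}   # hypothetical registry
REWARD_RATES = {"steps": 0.001, "km_driven_safely": 0.05}   # tokens per unit

def sign(device_id: str, payload: str) -> str:
    """The device attests to its own activity data."""
    return hmac.new(DEVICE_KEYS[device_id], payload.encode(), hashlib.sha256).hexdigest()

def verify_and_reward(device_id: str, activity: str, amount: int, signature: str) -> float:
    """Off-chain check that the data came from a registered device,
    then compute the token reward a contract would distribute."""
    expected = sign(device_id, f"{activity}:{amount}")
    if not hmac.compare_digest(expected, signature):
        raise ValueError("activity data not confirmed by a registered device")
    return amount * REWARD_RATES[activity]

sig = sign("watch-01", "steps:8000")
print(verify_and_reward("watch-01", "steps", 8000, sig))  # 8.0 tokens earned
```

In a production design the verification result would be settled by a smart contract rather than returned directly, but the trust boundary is the same: rewards flow only from device-attested data.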
With competition including Hyperledger, Azure Blockchain Workbench and the IBM Blockchain platform, Chai claims W3bstream is differentiated by its framework that “binds users with their smart devices, and will rely on a decentralized protocol to reach a consensus regarding what has happened in the physical world and [to] produce proofs that trigger token reward distribution to users in Web3, according to rules defined in smart contracts.” He also clarified that W3bstream is under continuous improvement. “The W3bstream rollout is planned in four stages, starting with the release of V1.0, when all development tools become available, including software development kits (SDKs) and open-source repositories for developers of all skill levels. It is also when developers and businesses can configure and deploy W3bstream nodes to build MachineFi dApps using the Web Assembly (WASM) language,” he explained. Among the competition, Chai said MachineFi is the first to develop a product that connects the metaverse to physical gadgets using the IoT pathway. The lab has received support from investors including Samsung NEXT, Jump Crypto, Draper Dragon, Xoogler Ventures, IOSG, Wemade and Escape Velocity, all of which participated in the latest MachineFi funding round. "
13,696
2,023
"2023 could be the year of mixed reality | VentureBeat"
"https://venturebeat.com/virtual/2023-could-be-the-year-of-mixed-reality"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest 2023 could be the year of mixed reality Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As we kick off a new year, the public is still largely confused by one of the biggest buzzwords of last year: The metaverse. After being promised a society-changing technology by breathless media influencers, what many people actually encountered was either (a) cartoonish virtual worlds filled with creepy avatars or (b) opportunistic platforms selling “ virtual real estate ” through questionable NFT schemes. To say the industry overpromised and underwhelmed in 2022 would be an understatement. Fortunately, the metaverse really does have the potential to be a society-changing technology. But to get there, we need to push past today’s cartoonish worlds and deploy immersive experiences that are deeply realistic, wildly artistic, and focus far more on unleashing creativity and productivity than on minting NFT landlords. 
In addition, the industry needs to overcome one of the biggest misconceptions about the metaverse: The flawed notion that we will live our daily lives in virtual worlds that will replace our physical surroundings. This is not how the metaverse will unfold. Don’t get me wrong, there will be popular metaverse platforms that are fully simulated worlds, but these will be temporary “escapes” that users sink into for a few hours at a time, similar to how we watch movies or play video games today. On the other hand, the real metaverse, the one that will impact our days from the moment we wake to the moment we go to sleep, will not remove us from our physical surroundings. Instead, the real metaverse will mostly be a mixed reality (MR) in which immersive virtual content is seamlessly combined with the physical world, expanding and embellishing our daily lives with the power and flexibility of digital content. A mixed reality arms race I know there are some who will push back on this prediction, but 2023 will prove them wrong. That’s because a new wave of products is headed our way that will bring the magic of MR to mainstream markets. The first step in this direction was the recent release of the Meta Quest Pro, which is hardware-ready for quality mixed reality with color passthrough cameras that capture the real world and can combine it with spatially registered virtual content. It’s an impressive device, but so far there is little software available that showcases its mixed reality capabilities in useful and compelling ways. That said, we can expect the real potential to be unleashed during 2023 as software rolls out. Also, in 2023, HTC is scheduled to release a headset that looks to be even more powerful than the Meta Quest Pro for mixed-reality experiences. 
To be unveiled at CES in January, it reportedly has color passthrough cameras of such high fidelity you can look at a real-world phone in your hand and read text messages in mixed reality. Whether consumers prefer HTC’s new hardware or Meta’s, one thing is clear: An MR arms race is underway, and it’s about to get more crowded. That’s because Apple is expected to launch its own MR headset in 2023. Rumored to be a premium device that ships midyear, it will likely be the most powerful mixed reality product the world has seen. There are claims it will feature quality passthrough cameras along with LiDAR sensors for profiling distances in the real world. If the LiDAR rumor pans out, it could mean the Apple device is the first MR/augmented reality (AR) eyewear product to enable high-precision registration of virtual content to the real world in 3D. Accurate registration is critical for suspension of disbelief, especially when enabling users to interact manually with real and virtual objects. Why so much momentum towards mixed reality? Simple. We humans do not like being cut off from our physical surroundings. Sure, you can give someone a short demo in virtual reality (VR), and they’ll love it. But if you have that same person spend an hour in fully immersive VR, they may start to feel uneasy. Approach two hours, and for many people (myself included), it’s too much. This phenomenon first struck me back in 1991 when I was working as a VR researcher at Stanford and NASA, studying how to improve depth perception in early vision systems. Back then, the technology was crude and uncomfortable, with low-fidelity graphics and lag so bad it could make you feel sick. Because of this, many researchers believed that the barrier to extended use was the clunky design and poor fidelity. We just needed better hardware, and people wouldn’t feel uneasy. I didn’t quite agree. 
Certainly, better hardware would help, but I was pretty sure that something else was going on, at least for me personally — a tension in my brain between the virtual world I could see and the real world I could sense (and feel) around me. It was this conflict between two opposing mental models that made me feel uneasy and made the virtual world seem less real than it should. To address this, what I really wanted to do was take the power of VR and combine it with my physical surroundings, creating a single immersive experience in which my visual, spatial and physical senses were all perfectly aligned. My suspicion was that the mental tension would go away if we could allow users to interact with the real and the virtual as if they inhabited the same perceptual reality. By a stroke of luck, I had the opportunity to pitch the U.S. Air Force and was funded to build a prototype mixed reality system at Wright Patterson Air Force Base. It was called the Virtual Fixtures platform, and it didn’t just support sight and sound, but touch and feel (3D haptics), adding virtual objects to the physical world that felt so authentic they could help users perform manual tasks with greater speed and dexterity. The hope was that one day this new technology could support a wide range of useful activities, from assisting surgeons during delicate procedures to helping technicians repair satellites in orbit through telerobotic control. Two worlds snapping together Of course, that early Air Force system didn’t support surgery or satellite repair. It was developed to test whether virtual objects could be added to real-world tasks and enhance human performance. To measure this, I used a simple task that involved moving metal pegs between metal holes on a large wooden pegboard. I then wrote software to create a variety of virtual fixtures that could help you perform the task. 
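The original Virtual Fixtures code is not public, but the essence of a guide surface, letting a tracked hand slide along a virtual plane without passing through it, reduces to a one-step projection. A minimal sketch of that idea (the tabletop plane and coordinates here are invented):

```python
def apply_surface_fixture(pos, plane_point, normal):
    """Constrain a tracked 3-D position to stay on the near side of a
    virtual plane, so motion slides along the surface instead of
    penetrating it. `normal` is assumed to be unit length."""
    # Signed distance from the tracked point to the plane.
    d = sum((p - q) * n for p, q, n in zip(pos, plane_point, normal))
    if d < 0:  # the hand has dipped through the virtual surface
        return tuple(p - d * n for p, n in zip(pos, normal))
    return tuple(pos)

# A peg-holding hand dropping below a virtual tabletop at z = 0 is held on it:
print(apply_surface_fixture((0.2, 0.1, -0.03), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))
# (0.2, 0.1, 0.0)
```

The same projection idea extends to cones and tracks by swapping in a different constraint geometry, which is how a single framework can offer a "variety of virtual fixtures."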
The fixtures ranged from virtual surfaces to virtual cones to simulated tracks you could slide the peg along, all while early passthrough cameras aligned the activity. And it worked, enabling users to perform manual tasks with significantly greater speed and precision. I give this background because of the impact it had on me. I can still remember the first time I moved a real peg towards a real hole and a virtual surface automatically turned on. Although simulated, it felt genuine, allowing me to slide along its contour. At that moment, the real world and the virtual world became one reality, a unified mixed reality in which the physical and digital were combined into a single perceptual experience that satisfied all your spatial senses — visual, audio, proprioception, kinesthesia, and haptics. Of course, both worlds had to be accurately aligned in 3D, but when that was achieved, you immediately stopped thinking about which part was physical and which was simulated. That was the first time I had experienced a true mixed reality. It may have been the first time anyone had. I say that because once you experience the real and virtual combined into a single unified experience, all your senses aligned, the two worlds actually snap together in your mind. It’s almost like one of those visual illusions where there’s a hidden face you can’t see, and then something clicks, and it appears. That’s how a true mixed reality experience should be: A seamless merger of the real and the virtual that is so natural and authentic that you immediately realize our technological future will not be real or virtual, it will be both. One world; one reality. As I look ahead, I’m impressed by how far the industry has come, particularly in the last few years. The image above (on the left) shows me in 1992 in an Air Force lab working on AR/MR technology. The image on the right shows me today, wearing a Meta Quest Pro headset. 
What is not apparent in the picture are the many large computers that were running to conduct my experiments thirty years ago, or the cameras mounted on the ceiling, or the huge wire harness draped behind me with cables routed to various machines. That’s what makes this new wave of modern headsets so impressive. Everything is self-contained — the computer, the cameras, the display, the trackers. And it’s all comfortable, lightweight, and battery-powered. It’s remarkable. And it’s just getting started. The technology of mixed reality is poised to take off, and it’s not just the impressive new headsets from Meta, HTC, and (potentially) Apple that will propel this vision forward, but eyewear and software from Magic Leap, Snap, Microsoft, Google, Lenovo, Unreal, Unity and many other major players. At the same time, more and more developers will push the limits of creativity and artistry, unlocking what’s possible when you mix the real and the virtual — from new types of board games (Tilt Five) and powerful medical applications (Mediview XR), to remarkable outdoor experiences from Niantic Labs. This is why I am confident that the metaverse, the true metaverse, will be an amalgamation of the real and the virtual, so seamlessly combined that users will cease to think about which elements are physical and which are digital. We will simply go about our daily lives and engage in a single reality. It’s been a long time in the making, but 2023 will be the year that this future really starts to take shape. Louis Rosenberg is founder and CEO of swarm intelligence company Unanimous AI. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. 
You might even consider contributing an article of your own! "
13,697
2,022
"Going incognito: How we can protect our privacy in the metaverse | VentureBeat"
"https://venturebeat.com/virtual/going-incognito-how-we-can-protect-our-privacy-in-the-metaverse"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Going incognito: How we can protect our privacy in the metaverse Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The image below shows me standing in a “Virtual Escape Room” that was created by academic researchers at U.C. Berkeley’s Center for Responsible Decentralized Intelligence. The simulated world requires me to complete a series of tasks, each one unlocking a door. My goal is to move from virtual room to virtual room, unlocking doors by solving puzzles that involve creative thinking, memory skills and physical movements, all naturally integrated into the experience. I am proud to say I made it out of the virtual labyrinth and back to reality. Of course, this was created by a research lab, so you might expect the experience was more than it seems. And you’d be right — it was designed to demonstrate the significant privacy concerns in the metaverse. 
It turns out that while I was solving the puzzles, moving from room to room, the researchers were using my actions and reactions to determine a wide range of information about me. I’m talking about deeply personal data that any third party could have ascertained from my participation in a simple virtual application. As I have been involved in virtual and augmented reality for decades and have been warning about the hidden dangers for many years, you’d think the data collected would not have surprised me. But you’d be wrong. It’s one thing to warn about the risks in the abstract; it’s something else to experience the privacy issues firsthand. It was quite shocking, actually. That said, let’s get into the personal data they were able to glean from my short experience in the escape room. First, they were able to triangulate my location. As described in a recent paper about this research, metaverse applications generally ping multiple servers, which here enabled the researchers to quickly predict my location using a process called multilateration. Even if I had been using a VPN to hide my IP address, this technique would still have found where I was. This isn’t shocking, as most people expect their location is known when they connect online, but it is a privacy concern nonetheless. Going deeper, the researchers were able to use my interactions in the escape room to predict my height, the length of my arms (wingspan), my handedness, my age, my gender, and basic parameters about my physical fitness level, including how low I could crouch down and how quickly I could react to stimuli. They were also able to determine my visual acuity, whether I was colorblind, and the size of the room that I was interacting with, and to make basic assessments of my cognitive acuity. 
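The location-finding step mentioned above deserves a closer look. The Berkeley researchers' pipeline is more sophisticated than this, but the geometric core of multilateration is straightforward: given distance estimates to several points with known positions (here, servers, with distances derived from round-trip times), the unknown position falls out of a small linear system. A 2-D toy version with invented coordinates:

```python
def trilaterate(p1, r1, p2, r2, p3, r3):
    """Solve for the 2-D point at distances r1, r2, r3 from known
    anchors p1, p2, p3 by linearizing the three circle equations."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting circle equations cancels the x^2 + y^2 terms,
    # leaving two linear equations a*x + b*y = c.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Ping-derived distances from a user to three servers at known locations:
r = 50 ** 0.5
print(trilaterate((0, 0), r, (10, 0), r, (0, 10), r))  # (5.0, 5.0)
```

Real network measurements are noisy, so practical systems use more than three anchors and a least-squares fit, but the takeaway is the same: a handful of server round-trip times is enough to defeat an IP-hiding VPN.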
The researchers could have even predicted whether I had certain disabilities. It’s important to point out that the researchers used standard hardware and software to implement this series of tests, emulating the capabilities that a typical application developer could employ when building a virtual experience in the metaverse. It’s also important to point out that consumers currently have no way to defend against this — there is no “incognito mode” in the metaverse that conceals this information and protects the user against this type of evaluation. Well, there wasn’t any protection until the researchers began building one — a software tool they call “MetaGuard” that can be installed on standard VR systems. As described in a recent paper by lead researchers Vivek Nair and Gonzalo Garrido of U.C. Berkeley, the tool can mask many of the parameters that were used to profile my physical characteristics in the metaverse. It works by cleverly injecting randomized offsets into the data stream, hiding physical parameters such as my height, wingspan and physical mobility, which otherwise could be used to predict age, gender and health characteristics. The free software tool also enables users to mask their handedness, the frequency range of their voice, and their physical fitness level and conceal their geospatial location by disrupting triangulation techniques. Of course, MetaGuard is just a first step in helping users protect their privacy in immersive worlds, but it’s an important demonstration, showing that consumer-level defenses could easily be deployed. At the same time, policymakers should consider protecting basic immersive rights for users around the globe, guarding against invasive tracking and profiling. For example, Meta recently announced that its next VR headset will include face and eye tracking. 
While these new capabilities are likely to unlock very useful features in the metaverse, for example enabling avatars to express more realistic facial expressions, the same data could also be used to track and profile user emotions. This could enable platforms to build predictive models that anticipate how individual users will react to a wide range of circumstances, even enabling adaptive advertisements that are optimized for persuasion. Personally, I believe the metaverse has the potential to be a deeply humanizing technology that presents digital content in the form most natural to our perceptual system — as immersive experiences. At the same time, the extensive data collected in virtual and augmented worlds is a significant concern and likely requires a range of solutions, from protective software tools like MetaGuard to thoughtful metaverse regulation. For those interested in pushing for a safe metaverse, I point you towards an international community effort called Metaverse Safety Week that is happening in December. Louis Rosenberg, PhD is an early pioneer in the fields of virtual and augmented reality. His work began over 30 years ago in labs at Stanford and NASA. In 1992 he developed the first interactive augmented reality system at Air Force Research Laboratory. In 1993 he founded the early VR company Immersion Corporation (public on Nasdaq). In 2004 he founded the early AR company Outland Research. He earned his PhD from Stanford, has been awarded over 300 patents for VR, AR, and AI technologies and was a professor at California State University. 
Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,698
2,022
"How AR and VR are transforming customer experiences | VentureBeat"
"https://venturebeat.com/2022/05/15/how-ar-and-vr-are-transforming-customer-experiences"
"How AR and VR are transforming customer experiences

Adoption of AR and VR technology was greatly accelerated by the pandemic, reaching at least 93.3 million AR users and 58.9 million VR users, according to a study conducted by eMarketer. This comes as no surprise, since these immersive technologies have helped mitigate the massive disruptions in people’s lives. Their accelerated adoption at both the individual and business level shows up in daily activities. For example, as remote work became an essential part of many business models, AR/VR allowed a seamless transition from onsite training to clear, step-by-step visual instructions for workers across multiple industries and locations.
From social interactions in virtual reality video games to enhanced and personalized online shopping experiences, AR and VR have in many ways served as a lifeline, helping companies build resilience for the future and increase customer engagement. But exactly how has this billion-dollar industry affected how a brand interacts with its customers? Let’s take a look at scenarios that showcase the solutions these technologies have to offer.

Increasing brand awareness

Technology has heavily influenced the way consumers research, interact with and ultimately decide to purchase products or services from businesses. With more information available in the palm of their hands, potential customers are more demanding when it comes to brand authenticity: 88% of consumers say that authenticity dictates which brands they are most likely to support. This means that businesses need to find innovative ways to make consumers aware of their value proposition and ultimately build a strong customer relationship.

AR/VR helps companies expand their reach and get valuable, sometimes defining, messages across. For example, with sustainability driving many consumers’ purchase intent, large companies such as Chiquita have leveraged AR and VR technologies to provide transparency and show their commitment to sustainability. By scanning the blue sticker on Chiquita bananas, a shopper can virtually visit the tropics and follow the journey of a Chiquita banana from Latin American farms all the way to their grocery store. This immersive experience highlights the sustainable product development journey and an eco-friendly supply chain, tipping the balance in the brand’s favor.
The ‘try-on’ experience with AR and VR

According to a study conducted by Alert Technologies, brands can convert 67% of consumers into buyers once those consumers reach trial rooms or spaces where they can judge whether a product or service fits their needs. It’s in this stage that the consumer starts to heavily weigh their options before making a decision. Brands can leverage virtual technologies to remove uncertainty and doubt in an engaging way, allowing a personalized approach to visualizing a potential fit. One example is Supra Boats with its AR app and web configurator, which allows a potential buyer to learn about a boat’s features at their own pace through a step-by-step tour in a stress-free environment. Here, they can also play with different colors and additional applications to customize their potential boat and ultimately order it right from the app.

The success of the “try-on” experience lies in the ease of use and comfort offered to potential buyers, who can personally try items on themselves or place them in their own spaces at home. Businesses, in turn, can harness valuable consumer insights such as sizes and default preferences, enabling them to understand their customers better and make more accurate and informed strategic decisions in the future.

Making purchasing simple

Taking the guesswork out of whether a product fits a consumer’s needs is just one step toward bringing them closer to completing a purchase. Ultimately, brands can seek ways to motivate these potential buyers to actually buy the product or service. Many large brands have built AR technology into their apps to make this process easier for consumers, prompting a sale. Going back to Supra Boats, the app uses the phone’s camera to scan the area and then overlay it with the boat of the user’s choosing, with all of their customized features, ensuring that it matches the user’s expectations.
However, the app goes beyond showcasing customizable boats in your living room or backyard: it also lets users take a virtual product tour to understand and get the most out of each added feature. This notably increases the brand’s value for customers, so that they feel free to make a purchase with far less hesitation.

Enhanced customer support

As the retail market becomes more competitive, consumers are drawn to brands and businesses that offer value beyond the initial purchase, through superb customer support and service. In fact, according to Microsoft, 90% of Americans consider customer service a deciding factor in whether or not to do business with a brand, and around 58% of American consumers say they would likely switch companies due to poor customer support. These numbers represent a huge opportunity for businesses to offer greater value in the post-purchase stage.

AR and VR help provide practical, real-time support for customers, effectively addressing an issue at the touch of a button. A clear example is Nespresso’s Assistant, which can fulfill a consumer’s coffee needs through instant ordering, answer frequently asked questions about the machine, and offer easy-to-follow instructions for tasks such as descaling. The technology’s immediacy in answering requests and providing in-depth information and instructions increases the customer’s engagement and satisfaction with the brand’s service, and can well convert that buyer into a recurring customer.

Leveraging AR and VR allows businesses to expand and enhance their customer experience through the power of personalization. Actionable insights gathered throughout the customer journey offer the possibility of building strong, meaningful relationships with the target audience in ways that traditional or online marketing strategies may lack.
In return, AR and VR serve as a lifeline for many businesses to remain competitive in their industries and build resilience against future disruptions.

Jason LaBaw is the founder of Social Bee. "
13,699
2,023
"How the metaverse is helping fashion brands improve customer interactions | VentureBeat"
"https://venturebeat.com/virtual/how-the-metaverse-is-helping-fashion-brands-improve-customer-interactions"
"How the metaverse is helping fashion brands improve customer interactions

It’s early, but the metaverse has emerged as a new frontier for brands to explore new technological boundaries and connect with customers in novel ways. According to a study by McKinsey, 59% of consumers are excited about transitioning their everyday activities to the metaverse. Virtual shopping experiences are an early winner here, allowing customers to immerse themselves in new types of shopping experiences. As the digital realm continues to expand, consumer-focused fast fashion brands are continuously on the hunt for cutting-edge solutions that enhance their customers’ shopping experiences. As the metaverse continues to evolve, it opens up a world of possibilities for brands looking to boost their customer engagement and data analysis efforts. The ability to track customer behavior and preferences in real time enables brands to fine-tune their offerings and marketing strategies.
As described by Olga Dogadkina, cofounder and CEO of virtual store platform Emperia, virtual shopping experiences in the metaverse aid in enhancing brands’ ecommerce strategy by adding a layer of interactivity and user experience. These layers can exceed the in-store experience by personalizing and enriching the shopping journey for clients. “While 2D websites were merely a tool that enabled an online purchase, through a simple grid of images and text, it lacked the customer journey, storytelling, and the ability to provide the customer experience and product discovery [that] retailers’ physical stores strive to achieve,” Dogadkina told VentureBeat. Whether it’s through hosting virtual events or offering personalized shopping experiences, the emerging metaverse is poised to provide a platform for brands to truly connect with their customers. For high-end retailers, the virtual world offers an opportunity to showcase limited-edition products that can’t be found in the physical world.

The rise of immersive virtual retail

The metaverse represents a shift in our online interactions, paving the way for a more integrated and immersive experience across various aspects of life — work, play and even fashion. Younger generations already adopting augmented reality (AR) technology expect a level of utility beyond just entertainment, and fashion brands in the metaverse are responding. They’re already showcasing their collections in new ways, launching digital fashion shows and immersive experiences that transport viewers into the creative world of the designer. By employing virtual reality (VR) and AR technology, brands can design digital spaces that closely resemble their brick-and-mortar locations.
The result is that shoppers can navigate through virtual clothing racks, “try on” outfits and engage with store personnel in real time. Another exciting aspect of the metaverse for fashion brands is that it provides a platform for companies to experiment with new business models based on interaction with communities. With the ability to host interactive experiences, such as virtual games or challenges centered around their collections, fashion brands can foster a sense of community and engagement among their customers. These experiences allow customers to compete against each other, or collaborate toward a common goal of winning prizes and rewards. This not only enhances customer engagement but also helps to cultivate brand loyalty. “Virtual stores allow retailers to bridge the gap between the transactional nature of an ecommerce purchase and the personalized shopping experience brands can cultivate in-store,” Dogadkina said. “Considering virtual stores’ intimate, personal and inclusive nature, we are seeing an increasing demand for such spaces by anyone from financial institutions to universities, entertainment and others.” Dogadkina explained how Bloomingdale’s recent holiday virtual store, powered by the Emperia platform, shows how one brand can incorporate multiple experiences under one roof, while allowing each brand to shine its own uniquely identifiable set of brand characteristics. “While Ralph Lauren, CHANEL and Nespresso were all featured within Bloomingdale’s virtual world, they were each uniquely positioned within their own spaces, adding on to the user journey of discovery, yet simplifying the shopping experience, through one centralized checkout process and cohesive user navigation,” said Dogadkina.
“Adding on a layer of gamification attracts engagement of Gen-Z users, increases shopper engagement with the brand — allowing the latter to meaningfully reward the user — and drives stronger loyalty, a topic which most retailers are struggling with today,” explained Dogadkina. Of course, she added, tracking users’ movement within these stores, while learning their shopping preferences, product affinity and habits, allows brands to further personalize the user experience.

Unique ways to evolve consumer interaction

Avatars have become a crucial aspect in the realm of the metaverse, serving as a powerful tool for brands to deliver a cutting-edge retail experience. Direct-to-avatar, or D2A, is rapidly gaining popularity as the latest retail strategy in the digital realm, as an increasing number of people, especially the younger demographic, invest more time in constructing a virtual representation of themselves online. In addition, the versatility and customization options avatars offer make them an ideal way to market a wide range of digital goods, including clothing, styling and food. Gucci has taken the lead with its digital avatar clothing and accessories. In addition, the metaverse shop “Nikeland” has received over 7 million visits to date, showcasing the limitless potential it holds for businesses looking to reach a virtual audience. Yashar Behzadi, CEO and founder of the Synthesis AI platform, says that such metaverse experiences are merging the best parts of physical stores and traditional websites in a new and more immersive way. “The metaverse brings together the convenience of online shopping with the ability to experience products virtually through immersive technologies.
Virtual goods, such as avatar clothing skins, enable brands to cost-effectively test new designs and gather immediate consumer feedback,” Behzadi told VentureBeat. The metaverse also allows fashion brands to reach a wider audience. The inaugural Metaverse Fashion Week saw several fashion brands showcasing their designs, including Dolce & Gabbana, Etro, and Elie Saab. In response, social media platforms are exploring direct-to-consumer shopping options. For example, Instagram has introduced AR-powered makeup try-ons, Snapchat launched Dress Up, a platform for try-on experiences from retailers and brands, and TikTok recently introduced TikTok Shop, allowing users to make purchases directly through the app. This could be a game-changer for the fashion industry, allowing brands to reach a new demographic of customers looking for a more sustainable and affordable way to stay stylish, while building a global following. Behzadi explained that virtual try-on technologies allow consumers to realistically test clothing fit, providing brands with an instant global reach. “In traditional brick-and-mortar stores, it takes many months to plan, produce and distribute goods. With instant global reach, brands can quickly test a multitude of products and gather feedback on the popularity of items and features,” said Behzadi. Ivan Dashkov, head of Web3 at sporting goods manufacturer PUMA, believes that the significant advantage of utilizing the metaverse is that brands can build immersive experiences and have visitors play alongside the brand no matter where they physically are. “At PUMA, we’re focused on building engaging experiences. So for us, it’s important to see how much time people spend in these experiences and how many buy digital goods. With the emergence of NFTs, digital goods are becoming more ubiquitous in our lives. As a brand, we must utilize all these data points to figure out where and how to best serve our audience,” Dashkov told VentureBeat. 
Luxury NFT collections now offer both digital and physical products, as well as exclusive access to events. For example, a package purchase includes physical and digital items and admission to couture shows. NFTs have the potential to foster long-term engagement and brand loyalty, making them a valuable asset for luxury brands. The ownership of an NFT becomes a privilege, elevating the luxury experience for buyers. During New York Fashion Week, PUMA showcased their exclusive sneakers NFRNO and Fastroid through NFTs in their own mini metaverse experience called “Black Station.” This added value and tapped into the psychology of scarcity and limited editions. “Black Station enhanced the consumer experience by letting customers interact with the product, experience it in 3D, and see it on our ambassadors — like Neymar wearing it before the product was ever shipped out. Giving consumers access to these types of experiences in the metaverse is a core part of our strategy in the space,” said Dashkov. Dashkov explained that it was a massive shift from how PUMA typically sells products. “Normally, we design a product, then take 12–18 months to produce it, ship it to stores, and then it sells to consumers. With our Fastroid and NFRNO NFTs, consumers bought the product, and we knew exactly how many to make before the product was ever manufactured. It’s an interesting DTC business model that we’ll continue to experiment with in the future,” he added.

Brands face metaverse marketing challenges

As with any new space, there is a learning curve to understand the breadth and depth of the potential impact, according to Jeremy Dalton, U.S. head of metaverse technologies at PwC. “Currently, many brands lack comprehensive knowledge of the functionality and demographic of virtual worlds and are challenged on how to leverage this new channel to drive greater engagement.
Brands will need to define the ‘why’ for their customers, answering questions as to why this new experience is relevant for them,” Dalton told VentureBeat. Additionally, he believes that brands will need to work to:

- Dispel rumors that the metaverse is just hype.
- Focus on the long-term value offered by the metaverse rather than the short-term gain of a singular marketing event.
- Complement their theoretical knowledge of the metaverse with practical experience through pilot programs and experimentation.

“The metaverse has the potential to significantly disrupt the traditional marketing model, blurring lines between real and virtual, and creating a powerful channel for future retail commerce,” said Dalton. “If retailers harness this power correctly, they can build relevance to all areas of consumers’ lives.”

Metaverse means interaction

PUMA’s Dashkov said we will soon be seeing brands creating more and more metaverse experiences to provide a point of interaction with the brand and their community. “The metaverse makes brands more accessible. Not everyone can physically attend a PUMA fashion show or shoe launch party due to geographical constraints, event capacity, schedule conflicts, etc. But when you host experiences in the metaverse, it breaks down many of those barriers,” said Dashkov. “Additionally, the definition of a product for a brand will change. So brands entering this space need to evolve their idea of what a product can be in the metaverse.” Likewise, Behzadi of Synthesis AI says that interoperability across platforms will be vital to the success of brands in the metaverse. “The metaverse is still nascent, and the time people spend on these new platforms is relatively low compared to traditional channels. So although there is a promise, the metaverse has not yet reached the size and velocity that can drive meaningful revenue for brands,” he said.
New experiences — including virtual product demonstrations, brand experiences, virtual events and concerts, and new social product undertakings — will drive consumers, Behzadi said. That makes the emerging metaverse a new frontier that is different from traditional marketing. The metric likely to drive adoption is the total time and money required to create cross-platform assets and experiences, he added.

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. "
13,700
2,020
"Wave teams up with John Legend, Tinashe, and Galantis to create mo-capped performances in virtual worlds | VentureBeat"
"https://venturebeat.com/business/wave-teams-up-with-john-legend-tinashe-and-galantis-to-create-mo-capped-performances-in-virtual-worlds"
"Wave teams up with John Legend, Tinashe, and Galantis to create mo-capped performances in virtual worlds

Above: Wave showcases the animated, motion-captured Galantis on stage.

Virtual concerts firm Wave is announcing “One Wave,” a series of virtual concerts with John Legend, Galantis, Tinashe, and others. The company uses cutting-edge broadcast and gaming technology, creating motion-captured performances of the artists and turning them into animated characters in virtual worlds. Wave will transform all participating artists into their own digital avatars, allowing them to perform live in an immersive and fantastical virtual world. The news comes just after Epic Games announced that its live concerts for Travis Scott in Fortnite drew more than 27 million viewers to see his Astronomical show.
Former Wave artists, including emerging popstars, DJs, and musicians Tinashe, Jauz, and Lindsey Stirling, will also return to the platform this spring, after previously performing for up to 500,000 fans worldwide. While Wave started in VR, its latest concerts and events are distributed across all major platforms, including YouTube, Twitch, Facebook, digital and gaming channels, and through the Wave app available for Steam and Oculus. Wave also announced partnerships with Warner Music Group and Roc Nation, and it is working with several music labels, management companies, and independent artists to connect a roster of talent with the next generation of concert-goers.

Above: Tinashe is part of Wave’s new animated concerts.

The series kicks off on April 30 at 3 p.m. Pacific time with a special live encore of the Church of Galantis. Additional show dates will be announced and rolled out over the next several months. Wave CEO Adam Arrigo said in a statement that we are now living in a digital avatar culture, and that his company’s proprietary technology and core gaming capabilities enable it to go beyond traditional live streaming concerts and create artist avatars, virtual environments, and interactive experiences. He said these virtual shows will help monetize performances for the artists, who can’t do live concerts during the pandemic. Tinashe said in a statement that she is excited to bring back her Wave experience for fans. During a time where going to live shows is impossible, it’s more important than ever to stay connected and continue to inspire each other, she said.
Los Angeles-based Wave said the virtual experiences will provide artists with a platform to create authentic digital avatars and environments that represent their artistic vision in real time, speaking directly to today’s gaming and tech-centric audiences. Performances will stream across various social media and gaming platforms, so fans can socialize and interact with the artists as they perform, cheer as part of a global avatar audience, vote on key show moments, play mini games, and socialize with each other. In an effort to support the greater global community, proceeds from the series will go directly to nonprofit organizations that need support during the current global COVID-19 pandemic. The Ad Council will also be providing important public service messaging around mental health awareness and resources as an extension of its COVID-19 response efforts.

GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. "
13,701
2,015
"Oculus Super Bowl party could be the future of social sports broadcasting | VentureBeat"
"https://venturebeat.com/media/oculus-super-bowl-party-could-be-the-future-of-social-sports-broadcasting"
"Oculus Super Bowl party could be the future of social sports broadcasting

Above: A 2D representation of the space where AltspaceVR will host a global Super Bowl party for Oculus Rift users on Sunday.

REDWOOD CITY, Calif. — When the Seattle Seahawks and New England Patriots kick off the NFL’s championship game Sunday, friends, family, and coworkers will gather across the country and the world to watch. But for one big group, while the game will be real, the gathering will be entirely virtual. Welcome to the Super Bowl party, Oculus Rift edition. Oculus, of course, is the company that makes the immersive virtual reality (VR) headset; Palmer Luckey founded it, and Facebook acquired it last March for $2 billion. Now, with developers building all manner of VR experiences for the platform, we’re seeing the technology’s possibilities. Among them?
The same kind of Super Bowl shindig everyone else is having — complete with serious fans huddled close to the TV, others standing back and chatting with friends, and some wandering back and forth as the game progresses. But with Oculus, the partygoers will join from all corners of the globe yet be together in one big modern room. All that’s missing is the guacamole and cheese dip.

That’s the vision of AltspaceVR, the Google Ventures- and Tencent-backed startup based in this Silicon Valley town 27 miles south of San Francisco. The 15-person company has raised $5.5 million to enable VR experiences of all kinds. And this Sunday, the company hosts what it thinks may well be the first-ever Oculus Super Bowl party. You’re totally invited. “We’re not so worried about the place where people are watching the Super Bowl,” Altspace CEO Eric Romo told me. “It’s about … we want to watch the Super Bowl together, so great, we’re there together.”

Oculus can’t bend time

Yesterday, I drove down to AltspaceVR’s offices for a demo. I wasn’t able to see the Super Bowl party in action — even Oculus, unfortunately, can’t bend time — but Romo walked me through how it’ll work. I’ll admit: I was skeptical going in. Now? I’m a convert. This is the gathering space of our virtual reality future. As Edward Miller, the head of visual content for the VR news platform Immersivly, put it, this could well be “the beginning of [the] social sports broadcast.” Strapping on an Oculus headset and some headphones, I was no longer in a cramped Altspace conference room. I was in a vast, high-ceilinged modern masterpiece of a parlor. With a futuristic cityscape out the window and a fireplace crackling off to the side, the room hosted a huge TV and a massive circular seating area around it, complete with comfy-looking cushions.
Come Sunday, this is where dozens of people will come to watch the Seahawks and Patriots duke it out to be the NFL’s top dog. Many, to be sure, will be there for the commercials, just like at every other Super Bowl party. At Altspace’s party, the game will be on that huge central TV. Stand close and the audio will be louder, like it would be in real life. Stand far away, or upstairs on one of the giant space’s balconies? The sound will be distant and muffled. The game will come from NBC’s free web stream and be piped onto the giant TV in the center of the room. But the playback will be synced so that everyone sees and hears everything at the same time. If there’s a great play, all in virtual attendance will see it happen simultaneously, allowing fans to cheer or groan together, depending on their team alliances.
Directional sound
If you watched the Super Bowl in the lobby of a hotel, you might be able to barely hear what someone on the other side of the room was saying, but if someone standing right next to you was speaking, you’d hear them loud and clear. That’s how it works in Altspace’s world, too. Plus, it’s directional: If someone’s telling a joke on your left side, you’ll hear them mostly in your left ear. If you move, the sound’s direction will shift to reflect where you went. And if you’re bored, and wander away from everyone else, you won’t hear them, or the game, at all. Maybe you’ll just stand at the window and gaze down, way down, at the clouds amid the massive futuristic skyscrapers. Altspace’s Oculus experience is built to bring people together around web-based content. That makes a lot of sense, since so much of what we consume these days can be found right in our browser. In this case, people will gather to watch the Super Bowl, but anyone using Altspace’s technology has at their disposal their own browser, on which they can look privately at more or less anything on the Web. 
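The distance-based attenuation and left/right directionality described above can be sketched in a few lines. This is a toy model, not Altspace's actual audio engine: the function name, the linear falloff, and the 20-meter cutoff are all illustrative assumptions.

```python
import math

def spatial_gain(listener_xy, source_xy, max_dist=20.0):
    """Return (left, right) channel gains for a sound source.

    Toy model of the behavior described above: volume falls off with
    distance, and a source on your left is heard mostly in your left
    ear. Real VR audio engines are far more elaborate; this is only
    an illustration with assumed parameters.
    """
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    dist = math.hypot(dx, dy)
    if dist >= max_dist:
        return (0.0, 0.0)            # wander far enough away and you hear nothing
    gain = 1.0 - dist / max_dist     # linear distance attenuation
    if dist == 0:
        pan = 0.0                    # source on top of the listener: centered
    else:
        pan = max(-1.0, min(1.0, dx / dist))  # -1 = fully left, +1 = fully right
    left = gain * (1.0 - pan) / 2.0
    right = gain * (1.0 + pan) / 2.0
    return (left, right)
```

With this sketch, a joke told five meters to your left (`spatial_gain((0, 0), (-5, 0))`) lands almost entirely in the left channel, while a source straight ahead splits evenly between both ears.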
The company has programmed the browser with a home screen, on which you have a number of default choices: YouTube, Twitter, Hulu, and others. Altspace wants its users to share what they’re doing in real-time, so there’s a Twitter feed where you can see what others are saying about being in this virtual world. Above: Each user in Altspace has access to their own personal browser. What Altspace really wants is for people from all walks of life to use its technology for just about any kind of gathering. It could be a corporate meeting, or a sponsored movie viewing. Altspace hasn’t worked out its business model yet, but it’s easy to imagine companies paying Altspace to use its technology, and then inviting just the people they want to show up. As Romo put it, “We don’t want to be the content providers. We don’t want to be the organizers. … We know that this platform will only be as useful as the content that’s on it.”
Tech has some limits
The company’s tech does have some limits. Because of the demands of processing all the audio and users’ voices, Altspace can generally handle only a few dozen people in a single space. Romo said he hopes that for the Super Bowl, they’ll be able to get 75 people together seamlessly. That means they’ll be running numerous instances of the Super Bowl party to accommodate everyone who wants in. Today, there are 2,000 people on Altspace’s waiting list, but this weekend, it’s opening its doors to everyone. And come Sunday, you can take part in the Super Bowl party, even if you don’t have a VR headset. Altspace has made it possible to tune in via a normal browser. And while you won’t have the same immersive experience as you would with an Oculus, you can see (and hear) what’s going on. Still, Romo, who was SpaceX’s 13th employee, said Altspace has focused on optimizing the Oculus experience so that it works just as well on a good laptop as it would on a hardcore gaming PC. 
“If it’s a communications medium that lots of people want to use,” he said, “we have to make it work on hardware they already have.” GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,702
2,022
"Identity in the metaverse: Creating a global identity system | VentureBeat"
"https://venturebeat.com/virtual/identity-in-the-metaverse-creating-a-global-identity-system"
"Guest Identity in the metaverse: Creating a global identity system
With the advent of the metaverse, the need for a global identity system has become apparent. There are many different ways to create an identity in the metaverse, but no single system is universally accepted. The challenge is usually two-fold: first, how to create an identity that is accepted by all the different platforms and services in the metaverse, and second, how to keep track of all the different identities a person may have. There are many proposed solutions to these challenges, but no clear consensus has emerged. 
Some believe that a single, global identity system is the only way to ensure interoperability between different platforms and services. Others believe that multiple identities are necessary to allow people to maintain their privacy and security. The debate is ongoing, but it is clear that the need for a global identity system is becoming more urgent as the metaverse continues to grow. In this article, we will explore the various options for creating a global identity system in the metaverse. We will discuss the pros and cons of each option, and try to identify the best solution for the future.
Option 1: A single global identity
The simplest solution to the problem of identity in the metaverse is to create a single, global identity system. This would be a centralized system that would be responsible for managing all identities in the metaverse. The advantages of this approach are obvious: It would be much easier to keep track of identities, and there would be no need to worry about different platforms and services accepting different identities. In addition, a centralized identity system would allow for better security and privacy controls, as well as the ability to track identity theft and fraud. However, this approach also has several disadvantages. First, it would be very difficult to create a global identity system that is accepted by everyone. Second, a centralized system would be vulnerable to attack and could be used to track people’s movements and activities. Third, it would be difficult to protect the privacy of users in a centralized system.
Option 2: Multiple identities
Another solution to the problem of identity in the metaverse is to allow each person to have multiple identities. This would mean that each person could have one or more identities that they use for different purposes. 
One of the main advantages of this approach is that it would allow people to maintain their privacy and security. Each person could choose which identity to use for each situation, and they would not have to worry about their entire identity being exposed. In addition, this approach would be more resilient to attack, as it would be much harder to take down multiple identities than a single one. The limitations of such an approach are that it could be difficult to keep track of all the different identities, and there would be no guarantee that different platforms and services would accept all of them. In addition, multiple identities could lead to confusion and could make it more difficult for people to build trust with others.
Option 3: A decentralized identity system
A third solution to the problem of identity in the metaverse is to create a decentralized identity system. This would be an identity system that is not controlled by any one centralized authority but rather is distributed among many different nodes. This might seem like the ideal approach, since decentralization is a common theme in the metaverse. However, there are still some challenges that need to be overcome. For instance, all the different nodes in the system would need to stay properly synchronized, and the system as a whole would need to be secure. In addition, it might be difficult to get people to adopt such a system if they are used to the more traditional centralized approach. One solution would be to have the nodes in the system run by different organizations. This would help to decentralize the system and make it more secure. Another advantage of this approach is that it would allow different organizations to offer their own identity services, which could be more tailored to their needs. Another option would be to incorporate an edge computing solution into the system. This would allow for more decentralized processing of data and could help to improve performance. 
It would also make the system more resilient to attack, since there would be no centralized point of failure. The best solution for the future of identity in the metaverse is likely to be a combination of these approaches. A centralized system might be necessary to provide a basic level of identity services, but it should be supplemented by a decentralized system that is more secure and resilient. Ultimately, the goal should be to create an identity system that is both easy to use and secure.
The ideal identity standards of the metaverse
Now that we have explored the various options for identity in the metaverse, we can start to identify the ideal standards that should be met by any future global identity system. It is no easy task to create a global identity system that meets all of the criteria, but it is important to strive for an ideal solution. After all, the metaverse is still in its early stages, and the decisions made now will have a lasting impact on its future. Current iterations of the metaverse have used very traditional approaches to identity, but it is time to start thinking outside the box. The ideal solution will be one that is secure, private, decentralized, and easy to use. It will be a solution that allows people to maintain their privacy while still being able to interact with others in the metaverse. Most importantly, it will be a solution that can be accepted and used by everyone. Only then can we hope to create a truly global identity system for the metaverse.
The bottom line on identity in the metaverse
The question of identity in the metaverse is a complex one, but it is an important issue that needs to be addressed. The challenges associated with creating an implementation that is secure, private and decentralized are significant, but they are not insurmountable. For one, it will be important to get buy-in from organizations that have a vested interest in the metaverse. 
These organizations can help to promote and support the adoption of identity standards. It is also important to keep in mind that the metaverse is still evolving, and the solution that is ideal today might not be ideal tomorrow. As such, it will be critical to have a flexible identity system that can adapt as the needs of the metaverse change. Ultimately, the goal should be to create an identity system that is both easy to use and secure. Only then can we hope to create a truly global identity system for the metaverse. Daniel Saito is CEO and cofounder of StrongNode. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers "
13,703
2,022
"Web3 ecosystem lost $3.9 billion to crypto fraud in 2022  | VentureBeat"
"https://venturebeat.com/security/web3-crypto-fraud"
"Web3 ecosystem lost $3.9 billion to crypto fraud in 2022
Bitcoin lost more than 60% of its value in 2022.
New technologies create new risks. Ever since cryptocurrency rose to prominence after the release of Bitcoin in 2008, cybercriminals have been looking for ways to separate users from their hard-earned money. Now as the Web3 ecosystem grows, fraud is becoming an even bigger threat. Today, Web3 bug bounty provider Immunefi released new research calculating that $3,948,856,037 in crypto funds was lost across the Web3 ecosystem to hacks and scams in 2022. The report also found the two most targeted blockchains last year were BNB Chain and Ethereum, with 65 and 49 unique security incidents, respectively. The good news is that while crypto fraud across the space remains common, the overall losses decreased 51.2% from the 2021 total of $8,088,338,239. 
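The year-over-year drop cited above follows directly from the two totals in the report, and can be checked in a couple of lines (both dollar figures are taken from the article itself):

```python
# Totals reported by Immunefi for Web3 losses to hacks and scams.
loss_2021 = 8_088_338_239  # 2021 total, in USD
loss_2022 = 3_948_856_037  # 2022 total, in USD

# Year-over-year decrease as a fraction of the 2021 total.
decrease = (loss_2021 - loss_2022) / loss_2021
print(f"Year-over-year decrease: {decrease:.1%}")  # → Year-over-year decrease: 51.2%
```

The result matches the 51.2% figure stated in the report.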
In any case, this latest research highlights that organizations interacting with the Web3 ecosystem need to implement a highly developed security strategy to address these new threats, or they risk leaving their data exposed.
Web3 and the risk of ‘novelty’ attacks
The report comes as researchers anticipate the Web3 market will grow from $3.2 billion in 2021 to $81.5 billion in 2030, increasing at a compound annual growth rate of 43.7%. Inevitably, as the value of this market increases, more and more cybercriminals will innovate new scams and threats to try and capitalize on its popularity and steal users’ funds. This raises novel challenges, as the nature of these attacks in digital spaces will be unlike those faced in the traditional Web2 sphere. “Web3 is still a brand new world, full of unknown paths,” said Mitchell Amador, founder and CEO at Immunefi. “That novelty, by definition, brings about a level of inexperience and danger to the game. Furthermore, due to the very nature of the Web3 ecosystem, where smart contract code holds huge amounts of capital, the environment is far more adversarial compared to traditional Web2 applications.” Users who are just finding their feet and experimenting with Web3 solutions are also vulnerable to emerging scams. “In Web3, users are still adjusting to the technology and many barely even know how to properly use wallets and sign for transactions,” Amador said. “With all the new projects and technology coming out by the week, it’s no surprise that bad actors are able to exploit the inexperience and naivety of new users.” As a result, Amador recommends that CISOs and security leaders interacting with these technologies invest in security education — not just on phishing threats, but also how to use infrastructure like wallets, private keys and common DeFi applications. 
Going forward, leaders and researchers in the space have a critical role to play in supporting users and keeping them up to speed on the techniques scammers are using to steal their data. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
13,704
2,022
"Decentralization and KYC compliance: Critical concepts in sovereign policy | VentureBeat"
"https://venturebeat.com/security/decentralization-and-kyc-compliance-critical-concepts-in-sovereign-policy"
"Guest Decentralization and KYC compliance: Critical concepts in sovereign policy
The decentralized nature of Web3 projects has made it a challenge for traditional regulatory organizations to govern them. For a long time, the community saw this as a positive because it meant that these projects were outside of government control. However, as these projects have grown in popularity, there has been an increased push by regulators to find ways to govern them. One area where this is most apparent is Know Your Customer (KYC) and Anti-Money Laundering (AML) compliance. KYC has had very negative connotations in the Web3 community. People see it as an infringement on their privacy and a way for the government to control them. They also see it as the antithesis of blockchain technology, which is supposed to be decentralized and anonymous. 
In this article, we will attempt to answer the question: Does KYC really encroach on decentralization? We will look at the arguments for and against KYC compliance and try to come to a conclusion about whether Web3 projects should consider it.
The Wild West of Web3
For the longest time, the decentralized nature of Web3 projects meant that there were no rules or regulations governing them. This was seen as a good thing by many because it meant that these projects were outside government control. This dates back to the early days of Bitcoin, when the anonymous creator Satoshi Nakamoto said that the cryptocurrency was designed to be “a peer-to-peer electronic cash system” that didn’t need “any trusted third party.” This meant that there was no central authority controlling Bitcoin, and it was up to the users to decide how to use it. Naturally, this lack of regulation also meant that there were no rules against things like money laundering or terrorist financing. This led to Bitcoin being used for a variety of illegal activities on the dark web, which furthered negative associations that it was used for criminal activity. Onboarding for early crypto projects worked like this: Users would go to a project’s website, download the software, then send the project some money. There was no KYC or AML compliance because there was no way to know to whom money was being sent. This all changed when crypto ecosystems started to grow and attract more mainstream users. As more people started buying crypto, the exchanges that they were using began to implement KYC and AML compliance measures.
Early pushback against big players
This was a necessary evil in order to continue growing ecosystems and attract more users. But it also led to a lot of friction within the community because many people thought of it as a way for governments to control them. 
The tension came to a head in 2017 when the Chinese government cracked down on Initial Coin Offerings (ICOs). This led to a mass exodus of crypto projects from China to more friendly jurisdictions like Hong Kong and Singapore. However, even in these more crypto-friendly jurisdictions, KYC and AML compliance was still necessary to comply with the law. This led to a lot of projects doing KYC-AML compliance in a way that the community considered too intrusive. For example, Binance, one of the largest crypto exchanges in the world, was accused of doing too much KYC on its users — but then the U.S. Securities and Exchange Commission (SEC) pushed Binance to actually increase its KYC standards. This suggested that having users upload their IDs and selfies was simply not enough. Most users are understandably not comfortable with that. This led to a lot of criticism from the community because it was seen as an invasion of privacy; but Binance has not relented and still maintains a thorough KYC policy. Dissatisfaction with strict policies indicates that there is a delicate balance that needs to be struck when it comes to KYC and AML compliance. On the one hand, you need to do enough to comply with the law and prevent your platform from being used for illicit activities. On the other hand, you don’t want to do too much and risk alienating your user base.
The current state of KYC in the crypto world
In the current crypto world, most exchanges and wallets have some form of KYC, but there is still a lot of variation in how much information is required from users. Some exchanges, like Coinbase, only require users to submit their name and email address. Other exchanges, like Binance, allow multiple verification tiers with varying degrees of required information. There are also a few exchanges that have implemented KYC-less protocols. This means that users don’t need to submit any personal information to use the platform. 
The main downside of this approach is that it makes it more difficult to comply with anti-money laundering regulations. This is why most exchanges still require some form of KYC from their users.
Lessons in sovereign policy
The push and pull between regulation and decentralization is not unique to the crypto world. All sovereign nations have to deal with it when it comes to their own policymaking. Historically, United States laws have sought to regulate the internet — and have been met with a lot of resistance. The most famous example is the Communications Decency Act, which the Supreme Court struck down in 1997. The act was passed in an attempt to regulate online pornography, but it was quickly met with criticism from the tech industry. The main problem with the act was that it was too broad and would have ended up censoring a lot of non-pornographic content. The court ultimately struck down the act, but the case highlights the tension between regulation and decentralization. The U.S. has since taken a more hands-off approach to regulating the internet, which has allowed the tech industry to flourish — but has also enabled the prevalence of harmful content.
Lack of regulation is why big banks still have a leg up over DeFi
When interviewed about the potential success of the crypto industry in replacing legacy banking players, hedge fund manager Kenneth C. Griffin mentioned that the perpetual flaw of crypto is that, unlike with banks, very little can be done when users need their financial provider to do right by them. Charlie Munger, legendary investor from Berkshire Hathaway, also mentioned that crypto was “rat poison” and cited the prevalence of illicit activity for why he would personally never consider it a viable asset class. These statements, while inflammatory, get to the heart of one of crypto’s big problems: The lack of regulation. Unlike with banks and other financial institutions, there is no government body that oversees the crypto industry. 
This means that there are no guaranteed protections for users if something goes wrong. If a user gets hacked and loses all of their crypto, there is no government insurance that will cover the loss. The same lack of regulation also makes it difficult for exchanges and other crypto businesses to get traditional banking services. This is one of the reasons why the DeFi industry has been such a big deal in the crypto world, since it can fulfill many of the services of traditional banks, such as lending and borrowing with interest accrual, and asset investments, without the same regulatory requirements. By using decentralized protocols, users can bypass the need for traditional financial institutions. However, the lack of regulation also makes DeFi protocols more vulnerable to hacks and other security problems.
KYC, decentralization and digital identity
So, with all that said: Does KYC violate Web3’s tenets of decentralization and privacy? It does not. To better understand why, you have to look at it from both sides. First, let’s look at it from the perspective of exchanges and other businesses that require KYC. For these businesses, KYC is a way to comply with anti-money laundering regulations. By requiring users to submit personal information, businesses can help prevent criminals from using their platforms to launder money. This is a good thing for both businesses and users. It is also worth noting that KYC does not have to be a violation of privacy. When done properly, businesses can collect the necessary information without sacrificing the privacy of their users. Second, it is worth noting that decentralization works hand in hand with another important element of Web3 — digital identity. For decentralization to work, users need to be able to prove their identity. Otherwise, there would be no way to prevent bad actors from taking advantage of the system. Decentralization without digital identity is not the kind of decentralization that we are striving for. 
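The idea that a platform can verify a user's legitimacy without holding their personal data can be illustrated with a small sketch. Everything here is hypothetical: the function names and the "kyc-passed" claim are illustrative, and an HMAC stands in for the asymmetric signatures (e.g., Ed25519 with published public keys) a real self-sovereign identity system would use, simply to keep the example dependency-free.

```python
import hashlib
import hmac
import json

# Hypothetical sketch: an issuer attests that a user passed KYC, and
# any verifier holding the issuer's key can check the attestation
# without ever seeing the underlying documents.
ISSUER_KEY = b"demo-issuer-secret"  # stand-in for an issuer's signing key

def issue_credential(subject_id: str, claim: str) -> dict:
    """The issuer signs a claim about a subject (e.g., 'kyc-passed')."""
    payload = json.dumps({"sub": subject_id, "claim": claim}, sort_keys=True)
    sig = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_credential(cred: dict) -> bool:
    """A platform checks the signature; any tampering invalidates it."""
    expected = hmac.new(ISSUER_KEY, cred["payload"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["sig"])
```

The credential carries only the claim itself, not the documents behind it, so the user decides which platforms see it, and a platform that accepts it never needs to store the user's personal information.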
Furthermore, a self-sovereign identity system would give users complete control over their personal information, further easing the worry about centralization. This means that users could choose to share their information with only the businesses and organizations that they trust. They would no longer have to worry about their information being mishandled or stolen by central authorities. KYC is one way to establish a digital identity. By requiring users to submit personal information, businesses can help ensure that everyone using their platform is who they say they are.
Why KYC is a necessary first step for crypto exchanges
With all of the above points in mind, it is clear that KYC is the necessary first step for Web3 projects. Without some form of KYC, it would be very difficult for exchanges to operate in a compliance-friendly manner. Users should not think of it as their data being centralized, but rather as their legitimacy being verified. Once a user’s KYC information has been verified, they can go about their business on the platform without having to worry about being flagged for suspicious activity. In conclusion, it is evident that KYC is a necessary first step for exchanges and other Web3 projects. Without some form of compliance, it would be very difficult for these projects to operate in a legal and safe manner. In our next segment, we will talk about the role DeFi plays in the inclusive economics behind Web3: How it allows participation by those who have been left out of the traditional financial system, and what advantages it has compared to the current system. Daniel Saito is CEO and cofounder of StrongNode. 
"
13,705
2,023
"Building a better metaverse through diversity, equity and inclusion | VentureBeat"
"https://venturebeat.com/virtual/building-a-better-metaverse-through-diversity-equity-and-inclusion"
"Guest Building a better metaverse through diversity, equity and inclusion We’ve evolved from sharing files and connecting through email to the norm of real-time data sharing and multi-person video calls spanning continents. With the recent past offering a wealth of new lessons, forward-thinking leaders are learning from the recent pivot to hybrid work environments to inform their next step: The metaverse. Last summer, KPMG U.S. and KPMG Canada launched the first KPMG metaverse collaboration hub, a virtual space where employees, clients and communities can explore opportunities for growth across industries and sectors. We know that the metaverse — and the journey toward broad adoption — provide ample opportunities. As we move into this future, leaders should incorporate accessibility and diversity, equity and inclusion (DEI) to build a responsible and inclusive metaverse environment.
Prioritize accessibility from the beginning In the metaverse, much like the real world, the audience is wide-ranging. It spans all ages — from teenagers to adults — so efforts must account for different communication styles, materials and restrictions, as well as responsibilities around communicating with young adults, coworkers and elders. Our recent U.S. survey found that one-third of those in Gen Z, millennials and Gen X are already participating (or are likely to participate) in the metaverse. Another third are open to participating. The metaverse’s possible use cases aren’t restricted to any one generation or type of interaction, either. At least half of adults are interested in virtual interactions for personal or business meetings, telemedicine, shopping, virtual training for work or school, or participation in a government meeting. These opportunities underscore the need to prioritize accessibility. Part of this consideration is affordability: The largest plurality of respondents in our survey (38%) agreed that providing more access to affordable metaverse technologies was the top factor in ensuring an inclusive and equitable experience. After all, the digital divide remains a significant barrier, with virtual reality (VR) and augmented reality (AR) devices still expensive and not widely in use. As we bring people into the new virtual world, we must find ways around a price barrier that disproportionately affects communities of underrepresented identities. Another component of accessibility is the range of options provided by metaverse platforms. Consider how alternative text and image captions will evolve for concerts, shopping and education in the metaverse, and the range of options for self-expression.
More than a third (36%) of KPMG survey respondents cited avatar customization as one of the top factors for creating diverse, equitable and inclusive metaverse experiences. Representation in the metaverse should reflect our world, with countless options for self-expression, including but not limited to customizable avatars. Build purposefully The metaverse reflects the real world, so it also reflects real-world issues when it comes to DEI principles. While the metaverse and VR might eliminate some barriers to in-person meetings, don’t take accessibility in technology for granted. When KPMG designed its collaboration hub, we first tested different technologies with internal users. This experiment found that over 30% experienced motion sickness while using a headset, and this conclusion informed decisions around what technologies to purchase, how to build the collaboration hub and how to guide users effectively through the virtual world. In this vein, Diego Mariscal, founder and CEO of 2Gether-International , points out that there is still work to do in technology development to mitigate a new set of accessibility challenges. After participating in a panel on the metaverse taking place within the metaverse, Diego noted that, “as a disabled entrepreneur with cerebral palsy, being in the metaverse eliminated worries about whether or not outlets were available in the room to plug in my wheelchair, or whether or not the space and accommodations were wheelchair accessible.” He continued, “while it was a relief to not worry about the constant physical barriers, I experienced new concerns. 
Putting on the required headset, maintaining visibility of the virtual screen, and navigating controllers through muscle spasticity were all new challenges born from metaverse technology.” He added, “we need to look at both sides — that there might be many barriers eliminated, but also new ones to navigate.” He points toward organizations making headway against these barriers, like the XR Access Initiative and A11yVR. Further, diverse stakeholders must be in the room when these decisions are made, and diverse developer and deployer talent pipelines must be established and nurtured. As the metaverse begins to grow, so too must DEI-focused talent recruitment and retention, alongside investment in diverse talent pipelines. Diego noted: “‘Nothing about us without us’ has long been a mantra of the disability rights movement. The strongest way to combat accessibility challenges posed by VR is to include disabled people in the process of building the metaverse and the technology enabling it.” Expand the seats at your table For the metaverse, as with all cutting-edge technologies, the name of the game is innovation. Innovation and creative planning are two reasons not to limit seats at the table. For investments like these, it is critical to build an expansive and inclusive team. And if the expertise isn’t available in house, find partners who can keep you accountable to the world you are hoping to build. We have the opportunity to embed DEI principles and create a culture of belonging and inclusion as the metaverse evolves and scales. How will your advertising — along with your community, education, and organization — embody your principles in the metaverse? All our social communities will have a space in the metaverse, and we have the unique opportunity to instill DEI values in the infrastructure of this new virtual world.
Leverage your multifaceted team and collaborate across stakeholders and subject matter experts to ensure that as the metaverse evolves, it is as inclusive as possible. Anu Puvvada is KPMG U.S. studio leader and interim metaverse COE Leader. "
13,706
2,022
"Augmented reality, superhuman abilities and the future of medicine | VentureBeat"
"https://venturebeat.com/datadecisionmakers/augmented-reality-superhuman-abilities-and-the-future-of-medicine"
"Community Augmented reality, superhuman abilities and the future of medicine Earlier this month, I participated as a panelist at the Digital Orthopedics Conference in San Francisco (DOCSF 2022) where a major theme was to imagine the medical profession in the year 2037. In preparation for the event, a small group of us reviewed the latest research on the clinical uses of virtual and augmented reality and critically assessed the current state of the field. I have to admit, I was deeply impressed by how far augmented reality (AR) has progressed over the last eighteen months for use in medicine. So much so that I don’t expect we’ll need to wait until 2037 for AR to have a major impact on the field.
In fact, I predict that by the end of this decade augmented reality will become a common tool for surgeons, radiologists, and many other medical professionals. And by the early 2030s, many of us will go to the family doctor and be examined by a physician wearing AR glasses. The reason is simple: Augmented reality will give doctors superpowers. I’m talking about superhuman capabilities for visualizing medical images, patient data, and other clinical content. The costs associated with these new capabilities are already quite reasonable and will decrease rapidly as augmented reality hardware gets produced in higher volumes in the coming years. The first superpower is x-ray vision. Augmented reality will give doctors the ability to peer directly into a patient and see evidence of trauma or disease at the exact location in their body where it resides. Of course, the ability to look under the skin already exists with tools like CT and MRI scanning, but currently, doctors view these images on flat screens and need to imagine how the images relate to the patient on the table. This type of mental transformation is an impressive skill, but it takes time and cognitive effort, and is not nearly as informative as it would be if doctors could simply gaze into the human body. With AR headsets and new techniques for registering 3D medical images to a patient’s real body, the superpower of x-ray vision is now a reality. In an impressive study from Teikyo University School of Medicine in Japan, an experimental emergency room was tested with the ability to capture whole-body CT scans of trauma patients and immediately allow the medical team, all wearing AR headsets, to peer into the patient on the exam table and see the trauma in the exact location where it resides.
This allowed the team to discuss the injuries and plan treatment without needing to refer back and forth to flat screens, saving time, reducing distraction, and eliminating the need for mental transformations. In other words, AR technology takes medical images off the screen and places them in 3D space at the exact location where it’s most useful to doctors – perfectly aligned with the patient’s body. Such a capability is so natural and intuitive, that I predict it will be rapidly adopted across medical applications. In fact, I expect that in the early 2030s doctors will look back at the old way of doing things, glancing back and forth at flat screens, as awkward and primitive. Going beyond x-ray vision, the technology of augmented reality will provide doctors with assistive content overlaid onto (and into) the patient’s body to help them with clinical tasks. For example, surgeons performing a delicate procedure will be provided with navigational cues projected on the patient in real-time, showing the exact location where interventions must be performed with precision. The objective is to increase accuracy, reduce mental effort, and speed up the procedure. The potential value for surgery is extreme, from minimally invasive procedures such as laparoscopy and endoscopy to freehand surgical efforts such as placing orthopedic implants. The concept of augmented surgery has been an aspiration of AR researchers since the core technologies were first invented. In fact, it goes back to the first AR system ( the Virtual Fixtures platform ) developed at Air Force Research Laboratory (AFRL) in the early 1990s. The goal of that project was to show that AR could boost human dexterity in precision tasks such as surgery. As someone who was involved in that early work, I must say that the progress the field has made over the decades since is remarkable. 
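At the core of the systems described above is the registration step: estimating the rigid transform (a rotation plus a translation) that maps landmarks in a scan onto the same landmarks tracked on the patient. As a rough illustration of the idea, here is a minimal 2D sketch of the standard least-squares solution; the coordinates are made up, and real surgical systems work in 3D with calibrated trackers:

```python
# Illustrative sketch only (not any product's algorithm): marker-based
# rigid registration in 2D. Given the same landmarks measured in a scan
# and on the patient, recover the rotation and translation aligning them.
import math

def rigid_register_2d(scan_pts, patient_pts):
    """Closed-form least-squares rigid transform (angle + translation)
    mapping scan_pts onto patient_pts."""
    n = len(scan_pts)
    csx = sum(p[0] for p in scan_pts) / n      # scan centroid
    csy = sum(p[1] for p in scan_pts) / n
    cpx = sum(p[0] for p in patient_pts) / n   # patient centroid
    cpy = sum(p[1] for p in patient_pts) / n
    # Accumulate dot and cross products of centered point pairs.
    dot = cross = 0.0
    for (sx, sy), (px, py) in zip(scan_pts, patient_pts):
        ax, ay = sx - csx, sy - csy
        bx, by = px - cpx, py - cpy
        dot += ax * bx + ay * by
        cross += ax * by - ay * bx
    theta = math.atan2(cross, dot)             # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cpx - (c * csx - s * csy)             # translation after rotation
    ty = cpy - (s * csx + c * csy)
    return theta, (tx, ty)

def apply(theta, t, pt):
    """Apply the recovered transform to a single point."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * pt[0] - s * pt[1] + t[0], s * pt[0] + c * pt[1] + t[1])

# Markers located in the scan, and the same markers as tracked on the
# patient (here: rotated 90 degrees and shifted).
scan = [(0.0, 0.0), (2.0, 0.0), (0.0, 1.0)]
patient = [apply(math.pi / 2, (5.0, 3.0), p) for p in scan]

theta, t = rigid_register_2d(scan, patient)
assert abs(theta - math.pi / 2) < 1e-9
assert all(abs(a - b) < 1e-9 for p, q in zip(scan, patient)
           for a, b in zip(apply(theta, t, p), q))
```

The 3D analogue of this closed-form solution is the Kabsch algorithm, and the residual error of exactly this kind of alignment is what clinical accuracy figures for AR-guided surgery quantify.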
Consider this – when testing that first AR system with human subjects in 1992, we required users to move metal pegs between holes spaced two feet apart in order to quantify if virtual overlays could enhance manual performance. Now, thirty years later, a team at Johns Hopkins, Thomas Jefferson University Hospital, and Washington University performed delicate spinal surgery on 28 patients using AR to assist in the placement of metal screws with precision under 2 mm. As published in a recent study, the screw-placement system achieved such accurate registration between the real patient and the virtual overlays that surgeons scored 98% on standard performance metrics. Looking forward, we can expect augmented reality to impact all aspects of medicine as the precision has reached clinically viable levels. In addition, major breakthroughs are in the works that will make it faster and easier to use AR in medical settings. As described above, the biggest challenge for any precision augmented reality application is accurate registration of the real world and the virtual world. In medicine, this currently means attaching physical markers to the patient, which takes time and effort. In a recent study from Imperial College London and University of Pisa, researchers tested a “markerless” AR system for surgeons that uses cameras and AI to accurately align the real and virtual worlds. Their method was faster and cheaper, but not quite as accurate. But this is early days – in the coming years, this technology will make AR-supported surgery viable without the need for costly markers. In addition, camera-based registration techniques will take AR out of highly controlled environments like operating rooms and bring it to a wider range of medical applications. In fact, I predict that by 2030 general practitioners will commonly see patients with the benefit of AR headsets. This brings me to another superpower I expect doctors to have in the near future – the ability to peer back in time.
That’s because physicians will be able to capture 3D images of their patients using AR headsets and later view those images aligned with their patients’ bodies. For example, a doctor could quickly assess the healing progress of a skin lesion by examining the patient through AR glasses, interactively peering back and forth in time to compare the current view with what the lesion looked like during prior visits. Overall, the progress being made by researchers on medical uses of virtual and augmented reality is impressive and exciting, having significant implications for both medical education and medical practice. To quote Dr. Stefano Bini of UCSF Department of Orthopaedic Surgery, “the beneficial role of AR and VR in the upskilling of the healthcare workforce cannot be overstated.” I agree with Dr. Bini and would go even further, as I see augmented reality impacting the workforce far beyond healthcare. After all, the superpowers of x-ray vision, navigational cues, dexterity support, and the ability to peer back in time will be useful for everything from construction and auto repair to engineering, manufacturing, agriculture, and of course education. And with AR glasses being developed by some of the largest companies in the world, from Microsoft and Apple to Meta, Google, Magic Leap, HTC and Snap, these superpowers will almost certainly come to mainstream consumers within the next five to ten years, enhancing all aspects of our daily life. Louis Rosenberg, PhD is CEO and chief scientist of Unanimous AI and has been awarded more than 300 patents for his work in VR, AR and AI. "
13,707
2,022
"The metaverse will thrive when the 'economy flows through it' | VentureBeat"
"https://venturebeat.com/virtual/metabeat-activating-in-the-metaverse-when-every-dollar-counts"
"The metaverse will thrive when the ‘economy flows through it’ MetaBeat, the metaverse event for enterprise decision-makers, kicked off with a keynote introduction by Sami Khan, CEO and cofounder of Atlas Earth, a mobile game experience where users can purchase virtual real estate and brand partners can offer promotions tied to the company’s in-app currency. Khan told the MetaBeat audience that no one can fully explain what the metaverse is, in all its iterations. While the definition may not be clear, Khan did have some advice for those developing products for the metaverse: “My humble advice is to use the metaverse to enhance people’s experience in the real world,” he said. “I don’t want my daughter growing up in a world where the future is escaping this world to live in a virtual reality world — that sounds pretty depressing.” Creating an enhanced experience in the real world, he explained, is different for every company.
Nvidia, for example, is using digital twins as a metaverse to help teach future self-driving cars in a 3D world. An enhanced real-world experience is the goal of Atlas Earth, he added — that is, using your mobile devices to buy virtual land where you live, work and play. You can then sell it for real dollars within the mobile gaming experience. (Atlas Earth, a metaverse-based representation of the planet, lets you buy land in virtual 900-square-foot parcels. For example, in its mobile game you could buy property in your neighborhood and sell the land, in real dollars, to your neighbors if they became members of Atlas Earth.) The enhanced real-world gaming experience, Khan envisions, also enables companies to take local marketing to a national scale. “It offers the ability for innovative brands like Sonic to drive a quarter-million in revenue from metaverse gamers to their (physical) doors.” For example, by buying at Sonic in the real world, you can earn “Atlas Bucks” to buy virtual land in Atlas Earth. 7-Eleven subsidiary Speedway is the most successful metaverse case study “you’ve never heard of,” with $2 million in measurable sales from Atlas Earth to Speedway. “When you do this authentically, people are excited about it,” he said. After Khan’s welcoming remarks, he brought Ethan Chuang, VP of loyalty solutions at Mastercard Advisors, and Mike Paley, EVP of business development at Atlas Earth, up for a Fireside Chat about activating the metaverse at a time when every dollar counts. Paley, who was previously at popular browser extension Honey, which was acquired by PayPal for $4 billion, said that in his previous role he learned how important it is to deliver incremental value to your customers.
“Even leveraging a simple piece of technology, I learned how incredibly transformative that could be to your user community,” he said. Marketing in the metaverse Mastercard, which links Atlas Earth to its loyalty program, is all about helping its business customers — merchants, CPGs, financial institutions — grow their businesses, Chuang said. “Consumers are moving away from traditional media — what you’re doing in the metaverse, from the standpoint of extending reach to segments of the audience that a lot of retailers and marketers prize, is authentic dialogue,” Chuang said. “At the end of the day, the goal is to bring people into stores, into ecommerce and drive sales with a virtual circle.” Building a metaverse that drives sales means building an ecosystem that builds positive value, Khan added. “My hope is that everyone walks away today thinking about how marketers think about the metaverse,” Khan said. “This ecosystem will continue to thrive when the economy flows through it. Understanding how they spend money is so important.” "
13,708
2,022
"Migration to the metaverse: We need guaranteed basic Immersive Rights | VentureBeat"
"https://venturebeat.com/virtual/metaverse-we-need-guaranteed-basic-immersive-rights"
"Community Migration to the metaverse: We need guaranteed basic Immersive Rights In the coming years, consumers will spend a significant portion of their lives in virtual and augmented worlds. This migration into the metaverse could be a magical transformation, expanding what it means to be human. Or it could be a deeply oppressive turn that gives corporations unprecedented control over humanity. I don’t make this warning lightly. I’ve been a champion of virtual and augmented reality for over 30 years, starting as a researcher at Stanford, NASA and the United States Air Force and founding a number of VR and AR companies. Having survived multiple hype cycles, I believe we’re finally here — the metaverse will happen and will significantly impact society over the next five years. Unfortunately, the lack of regulatory protections has me deeply concerned.
That’s because metaverse providers will have unprecedented power to profile and influence their users. While consumers are aware that social media platforms track where they click and who their friends are, metaverse platforms (virtual and augmented) will have much deeper capabilities, monitoring where users go, what they do, who they’re with, what they look at and even how long their gaze lingers. Platforms will also be able to track user posture, gait, facial expressions, vocal inflections and vital signs. Invasive monitoring is a privacy concern, but the dangers expand greatly when we consider that targeted advertising in the metaverse will transition from flat media to immersive experiences that will soon become indistinguishable from authentic encounters. For these reasons, it’s important for policymakers to consider the extreme power that metaverse platforms could wield over society and work towards guaranteeing a set of basic “immersive rights.” Many safeguards are needed, but as a starting point I propose the following three fundamental protections: 1. The right to experiential authenticity Promotional content pervades the physical and digital worlds, but most adults can easily identify advertisements. This allows individuals to view the material in the proper context — as paid messaging — and bring healthy skepticism when considering the information. In the metaverse, advertisers could subvert our ability to contextualize messaging by subtly altering the world around us, injecting targeted promotional experiences that are indistinguishable from authentic encounters. For example, imagine walking down the street in a virtual or augmented world. You notice a parked car you’ve never seen before.
As you pass, you overhear the owner telling a friend how much they love the car, a notion that subtly influences your thinking consciously or subconsciously. What you don’t realize is that the encounter was entirely promotional, placed there so you’d see the car and hear the interaction. It was also targeted — only you saw the exchange, chosen based on your profile and customized for maximum impact, from the color of the car to the gender, voice and clothing of the virtual spokespeople used. While this type of covert advertising might seem benign, merely influencing opinions about a new car, the same tools and techniques could be used to drive political propaganda, misinformation and outright lies. To protect consumers, immersive tactics such as Virtual Product Placements and Virtual Spokespeople should be regulated. At the least, regulations should protect the basic right to authentic immersive experiences. This could be achieved by requiring that promotional artifacts and promotional people be visually and audibly distinct in an overt way, enabling users to perceive them in the proper context. This would protect consumers from mistaking promotionally altered experiences as authentic. 2. The right to emotional privacy We humans evolved the ability to express emotions on our faces and in our voices, posture and gestures. It’s a basic form of communication that supplements verbal language. Recently, machine learning has enabled software to identify human emotions in real time from faces, voices and posture and from vital signs such as respiration rate, heart rate and blood pressure. While this enables computers to engage in non-verbal language with humans, it can easily cross the line into predatory violations of privacy. That’s because computers can detect emotions from cues that are not perceptible to humans. 
For example, a human observer cannot easily detect heart rate, respiration rate and blood pressure, which means those cues can reveal emotions that the observed individual did not intend to convey. Computers can also detect “micro-expressions” on faces, expressions that are too brief or subtle for humans to perceive, again revealing emotions that the observed had not intended. Computers can even detect emotions from subtle blood flow patterns in human faces that people cannot see, again revealing emotions that were not intended to be expressed. At a minimum, consumers should have the right not to be emotionally assessed at levels that exceed human abilities. This means not allowing vital signs and micro-expressions to be used. In addition, regulators should consider a ban on emotional analysis for promotional purposes. Personally, I don’t want to be targeted by an AI-driven conversational agent that adjusts its promotional tactics based on emotions determined by my blood pressure and respiration rate, both of which can now be tracked by consumer-level technologies. 3. The right to behavioral privacy In both virtual and augmented worlds, tracking location, posture, gait and line-of-sight is necessary to simulate immersive experiences. While this is extensive information, the data is only needed in real time. There is no need to store this information for extended periods. This is important because stored behavioral data can be used to create detailed behavioral profiles that document the daily actions of users in extreme granularity. With machine learning, this data can be used to predict how individuals will act and react in a wide range of circumstances during their daily life. And because platforms will have the ability to alter environments for persuasive purposes, predictive algorithms could be used by paying sponsors to preemptively manipulate user behaviors.
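One concrete version of "real time only" processing is to keep immersive telemetry in a small fixed-size buffer that holds just the frames needed for rendering or smoothing, so no behavioral history ever accumulates. A hypothetical sketch of that design (the class and field names are invented for illustration, not drawn from any real platform):

```python
# Illustrative sketch: process pose telemetry in real time without
# persisting it. A bounded buffer keeps only the last few frames.
from collections import deque

class EphemeralPoseBuffer:
    """Holds only the last few telemetry frames needed for smoothing;
    older samples are dropped automatically, so no behavioral history
    accumulates on the platform side."""
    def __init__(self, max_frames: int = 3):
        self.frames = deque(maxlen=max_frames)

    def ingest(self, pose: dict) -> dict:
        """Consume one pose sample and return a smoothed pose for rendering."""
        self.frames.append(pose)
        n = len(self.frames)
        # Average each tracked channel over the short window...
        smoothed = {k: sum(f[k] for f in self.frames) / n for k in pose}
        # ...while the bounded deque silently discards anything older.
        return smoothed

buf = EphemeralPoseBuffer(max_frames=3)
out = None
for t in range(10):                       # ten incoming telemetry frames
    out = buf.ingest({"x": float(t), "y": 0.0})
assert len(buf.frames) == 3               # only the newest frames survive
assert out == {"x": 8.0, "y": 0.0}        # mean of x = 7, 8, 9
```

The point of the sketch is architectural: the simulation gets the data it needs at the moment it needs it, and nothing persists that could later be mined into a behavioral profile.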
For these reasons, policymakers should consider banning the storage of immersive data over time, thereby preventing platforms from generating behavioral profiles. In addition, metaverse platforms should not be allowed to correlate emotional data with behavioral data, as that would allow them to impart promotionally altered experiences that don’t just influence what users do in immersive worlds but skillfully manipulate how they feel while doing it.

Immersive rights are necessary and urgent

The metaverse is coming. While many of the impacts will be positive, we must protect consumers against the dangers with basic immersive rights. Policymakers should consider guaranteeing basic rights in immersive worlds. At a minimum, everyone should have the right to trust the authenticity of their experiences without worrying that third parties are promotionally altering their surroundings without their knowledge and consent. Without such basic regulation, the metaverse may not be a safe or trusted place for anyone.

Whether you’re looking forward to the metaverse or not, it could be the most significant change in how society interacts with information since the invention of the internet. We cannot wait until the industry matures to put guardrails in place. Waiting too long could make it impossible to undo the problems, for they’ll be built into the core business practices of major platforms. For those interested in a safe metaverse, I point you toward an international community effort in December 2022 called Metaverse Safety Week. I sincerely hope this becomes an annual tradition and that people around the world focus on making our immersive future safe and magical.

Louis Rosenberg, PhD is an early pioneer in the fields of virtual and augmented reality. His work began over 30 years ago in labs at Stanford and NASA. In 1992 he developed the first immersive augmented reality system at Air Force Research Laboratory.
In 1993 he founded the early VR company Immersion Corporation (public on Nasdaq). In 2004 he founded the early AR company Outland Research. He earned his PhD from Stanford University, has been awarded over 300 patents for VR, AR, and AI technologies and was a professor at California State University. "
13,709
2,021
"Agora: 90% of Gen Z now using apps with interactive live video | VentureBeat"
"https://venturebeat.com/business/90-of-gen-z-now-using-apps-with-interactive-live-video"
"90% of Gen Z now using apps with interactive live video

90% of Gen Z is now using apps with interactive live video, according to a new study by Agora, which polled over 1,000 Gen Z U.S. consumers on their use of “real-time engagement” — commonly referred to as RTE — technology over the last year.

RTE refers to digital experiences that are interactive, collaborative and shared, through live video, live audio and extended reality (AR or VR). For example, students in an education app want to see classmates; users in a dating app want to see potential partners; and buyers in a shopping app want to talk to sellers.

Whether it be Twitch or TikTok, Agora’s study found that Gen Z is increasingly relying on RTE video or audio features in the apps they use. In fact, 87% are using more apps with built-in interactive live video streaming or calling.
Meanwhile, 62% have tried apps with interactive live audio streaming, capturing the growing popularity of services like Twitter Spaces and Clubhouse.

Agora also found RTE technology was more important for certain categories of apps. When asked if interactive video or audio were important for their gaming apps, for example, 69% — more than two-thirds — agreed. Beyond gaming, Gen Z wants RTE integrated into their ecommerce and retail apps: 70% said they would prefer retailers to offer AR and VR so that they can test and try products at home before buying.

The survey was conducted by developer platform Agora in August 2021 and included over 1,000 Gen Z respondents in the U.S. The findings were released ahead of the real-time engagement and developer conference RTE2021. Read the full report by Agora. "
13,710
2,019
"Amazon and L'Oréal let you digitally try on makeup | VentureBeat"
"https://venturebeat.com/mobile/amazon-and-loreal-let-you-digitally-try-on-makeup"
"Amazon and L’Oréal let you digitally try on makeup

L’Oréal’s ModiFace makeup preview feature integrated with Amazon product listings.

Apps that let you digitally try on makeup aren’t anything new, but L’Oréal hopes to bring them to a wider audience by integrating ModiFace, its AI-powered augmented reality (AR) platform, with Amazon’s ever-growing product catalog. Starting this week, Amazon shoppers on mobile will be able to test out different shades of lipstick on live pics and videos of themselves. The rollout comes a year after L’Oréal teamed up with Facebook to let the social network’s users try on virtual makeup samples through the Facebook app.

“We are excited to team up with ModiFace to make shopping for cosmetics online even easier by offering customers the ability to virtually try-on before they buy.
With this new AI-powered virtual experience, Amazon customers can now … purchase with greater confidence — wherever they are, whenever they want, with products delivered right to their doorstep,” said head of Amazon Beauty Nicolas Le Bourgeois. “This launch is another important milestone in our vision to be the best possible place for customers to discover and buy beauty products online.”

In select beauty listings, shoppers will soon be able to virtually try out products courtesy of their phone’s front-facing camera, or preview those items on model photos. It’s all powered by ModiFace’s AR simulation, which leverages analyses of data provided by makeup brands along with product images and descriptions from social media.

For Amazon, it’s yet another step toward an AR-powered fashion future. Two years ago, the retailer debuted the Echo Look, a connected camera that combines human and machine intelligence to recommend styles, color-filter clothes, compare two outfits, and keep track of what’s in personal wardrobes. The Echo Look ties into Prime Wardrobe, a program akin to those offered by Stitch Fix and Trunk Club that lets users try on clothes and send back what they don’t want to buy. In a development that’s undoubtedly related, Amazon recently unveiled a collection of makeup products under its in-house apparel label.

A 2016 survey published by A.T. Kearney found that 69% of American women who shop online for beauty products start their searches at Amazon, and a report from Edge by Ascential showed that sales of health and personal care items on the platform totaled $1.9 billion in the second quarter of 2018, while sales of beauty products were up 26% at $950 million.
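Under the hood, virtual try-on reduces to compositing a product shade onto a detected facial region. Here is a deliberately simplified Python sketch of that final blending step; the hard part, the learned lip-region detection systems like ModiFace perform from facial landmarks, is assumed away, and all names here are invented for illustration.

```python
def apply_lipstick(pixels, lip_mask, shade, opacity=0.6):
    """Toy AR try-on: alpha-blend a lipstick shade (R, G, B tuple)
    into the pixels selected by lip_mask.

    pixels:   list of (R, G, B) tuples for one row/frame of an image
    lip_mask: list of booleans, True where the lips were detected
    """
    out = []
    for px, masked in zip(pixels, lip_mask):
        if masked:
            # Standard alpha blend: keep (1 - opacity) of the skin
            # color and add opacity of the product shade.
            out.append(tuple(
                round((1 - opacity) * c + opacity * s)
                for c, s in zip(px, shade)))
        else:
            out.append(px)  # untouched outside the lip region
    return out
```

A production system would infer the mask per frame from face landmarks and also model lighting, gloss and texture, which this toy version ignores entirely.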
“We are delighted to team up with Amazon to provide its customers an AR makeup try-on that offers highly realistic results and makes online shopping even more comfortable,” said ModiFace CEO Parham Aarabi. “Thanks to a precise color rendering, enabled by our unique AI-powered technology, shoppers can easily try on thousands of lipsticks available on Amazon and purchase the shades that fit them best.”

In 2018 L’Oréal acquired Toronto-based ModiFace, which had a hand in creating custom AR beauty apps for Sephora, Estée Lauder, and well over 80 others. Prior to the purchase, ModiFace collaborated with L’Oréal on the launch of its Style My Hair mobile app, which lets users preview different hairstyles; with retail cosmetics chain Mac on in-store electronic makeup mirrors; and with Benefit Cosmetics on an eyebrow try-on tool. ModiFace currently employs roughly 70 engineers, researchers, and scientists, who have collectively submitted more than 200 scientific publications and registered over 30 patents.

ModiFace competes with Perfect’s YouCam, which leverages 3D face scanning to enable virtual makeovers of lips, eyes, eyebrows, hairstyles, and cheeks. There’s also ManiMatch, which virtually overlays nail products atop fingernails, and Meitu’s MakeupPlus, in addition to Samsung’s Bixby Vision, which taps ModiFace’s platform to let users try on makeup from Sephora, CoverGirl, and Laneige.
"
13,711
2,023
"PWC highlights 11 ChatGPT and generative AI security trends to watch in 2023  | VentureBeat"
"https://venturebeat.com/security/pwc-highlights-11-chatgpt-and-generative-ai-security-trends-to-watch-in-2023"
"PWC highlights 11 ChatGPT and generative AI security trends to watch in 2023

Are ChatGPT and generative AI a blessing or a curse for security teams? While artificial intelligence (AI)’s ability to generate malicious code and phishing emails presents new challenges for organizations, it’s also opened the door to a range of defensive use cases, from threat detection and remediation guidance to securing Kubernetes and cloud environments.

Recently, VentureBeat reached out to some of PwC’s top analysts, who shared their thoughts on how generative AI and tools like ChatGPT will impact the threat landscape and what use cases will emerge for defenders. Overall, the analysts were optimistic that defensive use cases will rise to combat malicious uses of AI over the long term.
Predictions on how generative AI will impact cybersecurity in the future include:

1. Malicious AI usage
2. The need to protect AI training and output
3. Setting generative AI usage policies
4. Modernizing security auditing
5. Greater focus on data hygiene and assessing bias
6. Keeping up with expanding risks and mastering the basics
7. Creating new jobs and responsibilities
8. Leveraging AI to optimize cyber investments
9. Enhancing threat intelligence
10. Threat prevention and managing compliance risk
11. Implementing a digital trust strategy

Below is an edited transcript of their responses.

1. Malicious AI usage

“We are at an inflection point when it comes to the way in which we can leverage AI, and this paradigm shift impacts everyone and everything. When AI is in the hands of citizens and consumers, great things can happen.

“At the same time, it can be used by malicious threat actors for nefarious purposes, such as malware and sophisticated phishing emails.

“Given the many unknowns about AI’s future capabilities and potential, it’s critical that organizations develop strong processes to build up resilience against cyberattacks.

“There’s also a need for regulation underpinned by societal values that stipulates this technology be used ethically. In the meantime, we need to become smart users of this tool, and consider what safeguards are needed in order for AI to provide maximum value while minimizing risks.”

Sean Joyce, global cybersecurity and privacy leader, U.S. cyber, risk and regulatory leader, PwC U.S.

2. The need to protect AI training and output

“Now that generative AI has reached a point where it can help companies transform their business, it’s important for leaders to work with firms with deep understanding of how to navigate the growing security and privacy considerations.

“The reason is twofold.
First, companies must protect how they train the AI, as the unique knowledge they gain from fine-tuning the models will be critical in how they run their business, deliver better products and services, and engage with their employees, customers and ecosystem.

“Second, companies must also protect the prompts and responses they get from a generative AI solution, as they reflect what the company’s customers and employees are doing with the technology.”

Mohamed Kande, vice chair — U.S. consulting solutions co-leader and global advisory leader, PwC U.S.

3. Setting generative AI usage policies

“Many of the interesting business use cases emerge when you consider that you can further train (fine-tune) generative AI models with your own content, documentation and assets, so it can operate on the unique capabilities of your business, in your context. In this way, a business can extend generative AI in the ways they work with their unique IP and knowledge.

“This is where security and privacy become important. For a business, the ways you prompt generative AI to generate content should be private for your business. Fortunately, most generative AI platforms have considered this from the start and are designed to enable the security and privacy of prompts, outputs and fine-tuning content.

“However, not all users understand this. So, it is important for any business to set policies for the use of generative AI to keep confidential and private data from going into public systems, and to establish safe and secure environments for generative AI within their business.”

Bret Greenstein, partner, data, analytics and AI, PwC U.S.

4. Modernizing security auditing

“Using generative AI to innovate the audit has amazing possibilities! Sophisticated generative AI has the ability to create responses that take into account certain situations while being written in simple, easy-to-understand language.
“What this technology offers is a single point to access information and guidance while also supporting document automation and analyzing data in response to specific queries — and it’s efficient. That’s a win-win.

“It’s not hard to see how such a capability could provide a significantly better experience for our people. Plus, a better experience for our people provides a better experience for our clients, too.”

Kathryn Kaminsky, vice chair — U.S. trust solutions co-leader

5. Greater focus on data hygiene and assessing bias

“Any data input into an AI system is at risk for potential theft or misuse. To start, identifying the appropriate data to input into the system will help reduce the risk of losing confidential and private information to an attack.

“Additionally, it’s important to exercise proper data collection to develop detailed and targeted prompts that are fed into the system, so you can get more valuable outputs.

“Once you have your outputs, review them with a fine-tooth comb for any inherent biases within the system. For this process, engage a diverse team of professionals to help assess any bias.

“Unlike a coded or scripted solution, generative AI is based on models that are trained, and therefore the responses they provide are not 100% predictable. The most trusted output from generative AI requires collaboration between the tech behind the scenes and the people leveraging it.”

Jacky Wagner, principal, cybersecurity, risk and regulatory, PwC U.S.

6. Keeping up with expanding risks and mastering the basics

“Now that generative AI is reaching widescale adoption, implementing robust security measures is a must to protect against threat actors. The capabilities of this technology make it possible for cybercriminals to create deep fakes and execute malware and ransomware attacks more easily, and companies need to prepare for these challenges.
“The most effective cybermeasures continue to receive the least focus: By keeping up with basic cyberhygiene and condensing sprawling legacy systems, companies can reduce the attack surface for cybercriminals.

“Consolidating operating environments can reduce costs, allowing companies to maximize efficiencies and focus on improving their cybersecurity measures.”

Joe Nocera, PwC partner leader, cyber, risk and regulatory marketing

7. Creating new jobs and responsibilities

“Overall, I’d suggest companies consider embracing generative AI instead of creating firewalls and resisting — but with the appropriate safeguards and risk mitigations in place. Generative AI has some really interesting potential for how work gets done; it can actually help to free up time for human analysis and creativity.

“The emergence of generative AI could potentially lead to new jobs and responsibilities related to the technology itself — and creates a responsibility for making sure AI is being used ethically and responsibly.

“It also will require employees who utilize this information to develop a new skill — being able to assess and identify whether the content created is accurate.

“Much like how a calculator is used for doing simple math-related tasks, there are still many human skills that will need to be applied in the day-to-day use of generative AI, such as critical thinking and customization for purpose — in order to unlock the full power of generative AI.

“So, while on the surface it may seem to pose a threat in its ability to automate manual tasks, it can also unlock creativity and provide assistance, upskilling and training opportunities to help people excel in their jobs.”

Julia Lamm, workforce strategy partner, PwC U.S.

8. Leveraging AI to optimize cyber investments

“Even amidst economic uncertainty, companies aren’t actively looking to reduce cybersecurity spend in 2023; however, CISOs must be economical with their investment decisions.
“They are facing pressure to do more with less, leading them to invest in technology that replaces overly manual risk prevention and mitigation processes with automated alternatives.

“While generative AI is not perfect, it is very fast, productive and consistent, with rapidly improving skills. By implementing the right risk technology — such as machine learning mechanisms designed for greater risk coverage and detection — organizations can save money, time and headcount, and are better able to navigate and withstand any uncertainty that lies ahead.”

Elizabeth McNichol, enterprise technology solutions leader, cyber, risk and regulatory, PwC U.S.

9. Enhancing threat intelligence

“While companies releasing generative AI capabilities are focused on protections to prevent the creation and distribution of malware, misinformation or disinformation, we need to assume generative AI will be used by bad actors for these purposes and stay ahead of these considerations.

“In 2023, we fully expect to see further enhancements in threat intelligence and other defensive capabilities to leverage generative AI for good. Generative AI will allow for radical advancements in efficiency and real-time trust decisions; for example, forming real-time conclusions on access to systems and information with a much higher level of confidence than currently deployed access and identity models.

“It is certain generative AI will have far-reaching implications on how every industry and company within that industry operates; PwC believes these collective advancements will continue to be human led and technology powered, with 2023 showing the most accelerated advancements that set the direction for the decades ahead.”

Matt Hobbs, Microsoft practice leader, PwC U.S.

10. Threat prevention and managing compliance risk

“As the threat landscape continues to evolve, the health sector — an industry ripe with personal information — continues to find itself in threat actors’ crosshairs.
“Health industry executives are increasing their cyber budgets and investing in automation technologies that can not only help prevent cyberattacks, but also manage compliance risks, better protect patient and staff data, reduce healthcare costs, eliminate process inefficiencies and much more.

“As generative AI continues to evolve, so do associated risks and opportunities to secure healthcare systems, underscoring the importance for the health industry to embrace this new technology while simultaneously building up their cyberdefenses and resilience.”

Tiffany Gallagher, health industries risk and regulatory leader, PwC U.S.

11. Implementing a digital trust strategy

“The velocity of technological innovation, such as generative AI, combined with an evolving patchwork of regulation and erosion of trust in institutions, requires a more strategic approach.

“By pursuing a digital trust strategy, organizations can better harmonize across traditionally siloed functions such as cybersecurity, privacy and data governance in a way that allows them to anticipate risks while also unlocking value for the business.

“At its core, a digital trust framework identifies solutions above and beyond compliance — instead prioritizing the trust and value exchange between organizations and customers.”

Toby Spry, principal, data risk and privacy, PwC U.S. "
13,712
2,022
"Adding sentiment analysis to natural language understanding, Deepgram brings in $47M | VentureBeat"
"https://venturebeat.com/ai/adding-sentiment-analysis-to-natural-language-understanding-deepgram-brings-in-47m-%EF%BF%BC"
"Adding sentiment analysis to natural language understanding, Deepgram brings in $47M

Seven years ago, Scott Stephenson was working as a postdoctoral researcher building dark matter detectors deployed deep under the surface of the Earth. The goal was to pull signals out of noise to help solve the mysteries of the universe, and part of the process involved technology built to better understand sounds using machine learning techniques. It’s an approach that Stephenson figured had broader applicability for pulling meaning out of human speech, which led him to found Deepgram in 2015.

Deepgram is taking a somewhat nuanced approach to building natural language processing (NLP) capabilities, with its own foundation model that can execute transcription as well as summarization and sentiment analysis from audio.
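The idea of deriving sentiment from both words and tone, which comes up later in the article, can be caricatured in a few lines of Python. This is purely an illustrative sketch, not Deepgram's method: the word lists, the `pitch_variance` feature and the thresholds are all invented for this example, whereas a real system would learn acoustic features from raw audio.

```python
# Toy fusion of a lexical polarity score with one acoustic cue,
# so that tone can override words (e.g. upbeat words, flat voice).

NEGATIVE = {"bad", "terrible", "awful", "broken"}
POSITIVE = {"great", "good", "love", "fantastic"}


def lexical_polarity(transcript: str) -> float:
    """Crude text-only sentiment: +1/-1 per matched word, averaged."""
    words = transcript.lower().split()
    score = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    return score / max(len(words), 1)


def fused_sentiment(transcript: str, pitch_variance: float) -> str:
    """Combine text polarity with a hypothetical acoustic cue.

    Low pitch variance (flat delivery) dampens positive wording,
    one crude way tone can contradict words."""
    polarity = lexical_polarity(transcript)
    if polarity > 0 and pitch_variance < 0.1:
        return "neutral"  # positive words, flat tone
    if polarity > 0:
        return "positive"
    if polarity < 0:
        return "negative"
    return "neutral"
```

In practice the acoustic side would span learned features for pitch, energy and timing rather than a single hand-set threshold, but the fusion step is the same in spirit.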
“We have our own foundation model, where this model can be used to achieve several goals from audio,” Stephenson said.

Those goals include building out customized models for specific use cases and industry verticals. To help achieve them, the company today announced that it has raised $47 million in funding to continue building out its technology and go-to-market efforts.

How Deepgram builds its NLP technology

The market for NLP and voice transcription technologies today is increasingly crowded, with consumer services like Otter and large vendors including AWS, Google and IBM all providing services. Stephenson said his company’s technology is built with a series of deep learning techniques, including convolutional neural networks (CNNs), recurrent neural networks (RNNs) and transformers. The models Deepgram has built are trained on audio waveforms to pull meaning from the spoken word. Deepgram has also built out its own data labeling technologies and workflow for identifying what is being said in an audio file and how it can be classified.

From a continuous innovation perspective, Deepgram is taking a self-supervised approach to reinforcement learning to help its NLP models improve over time.

“The model is aware of when it doesn’t know something, but it still will give you an answer,” Stephenson said.

Answers the model isn’t entirely confident about get logged. The Deepgram platform includes both automated elements and human data scientists, who review the uncertain items and suggest further training within a specific vertical or area of expertise to help update the model.

Sentiment analysis might still struggle with sarcasm

A key challenge facing transcription and NLP tools is actually understanding the tone of the speaker with sentiment analysis.
A common way that sentiment analysis is done today is purely with text. For example, if negative words are used in a review, the overall sentiment is not considered to be positive. With the spoken word, negative sentiment isn’t just about words; it’s also about tone.

“The easy version of supporting sentiment is to only look at the words but, of course, as humans with a couple of microphones in our head, we know that tone matters,” Stephenson said.

Being able to understand users’ frustration is important for accurate sentiment analysis. The Deepgram system uses what Stephenson referred to as “acoustic cues” to understand the sentiment of the speaker, and it is a different model than what would be used for text-based sentiment analysis alone.

While the Deepgram system can determine sentiment better than text-based methods alone, detecting sarcasm can be trickier.

“If you ask an American to figure out if somebody is being sarcastic or not, we can usually do a pretty good job,” Stephenson said. “The models are not tuned for that yet; I wouldn’t say that’s because of the expressiveness of the models, though, that really just has to do with the data labeling and the demand of customers asking for it.”

Stephenson said that if enough users wanted to detect sarcasm more accurately and were willing to pay for it, the technology would likely be developed faster. Either way, he expects that NLP’s ability to detect sarcasm accurately is likely to come within the next five years.
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,713
2,022
"Amazon digs into ambient and generalizable intelligence at re:MARS | VentureBeat"
"https://venturebeat.com/ai/amazon-digs-into-ambient-and-generalizable-intelligence-at-remars"
"Amazon digs into ambient and generalizable intelligence at re:MARS Many, if not most, AI experts maintain that artificial general intelligence (AGI) is still many decades away, if not longer. And the AGI debate has been heating up over the past couple of months. However, according to Amazon, the route to “generalizable intelligence” begins with ambient intelligence. And it says that future is unfurling now. “We are living in the golden area of AI, where dreams and science fiction are becoming a reality,” said Rohit Prasad, senior vice president and head scientist for Alexa at Amazon. Prasad spoke on the potential evolution from ambient intelligence to generalizable intelligence (GI) today at re:MARS, Amazon’s conference on machine learning (ML), automation, robotics and space. 
Prasad made clear that his definition of generalizable intelligence is not an all-knowing, human-like AI. His definition is that GI agents should have three key attributes: they should have the ability to accomplish multiple tasks, rapidly adapt to ever-changing environments and learn new concepts and actions with minimal external input from humans. Ambient intelligence, Prasad said, is when underlying AI is available everywhere, assists people when they need it – and also learns to anticipate needs – then fades into the background when it’s not needed. A prime example and a significant step toward GI, Prasad said, is Amazon’s Alexa, which he described as a “personal assistant, advisor, companion.” The virtual assistant is equipped with 30 ML systems that process various sensory signals, he explained. It gets more than 1 billion requests a week in 17 languages in dozens of countries. It will also, he said, be headed to the moon as part of the uncrewed Artemis 1 mission set to launch in August. “One thing that surprised me the most about Alexa,” Prasad said, “is the companionship relationship we have with it. Human attributes of empathy and affect are key for building trust.” He added that these attributes have become even more important due to the COVID-19 pandemic, when so many of us have lost loved ones. “While AI can’t eliminate that pain of loss,” Prasad said, “it can definitely make their memories last.” As an example of creating those lasting personal relationships, a future Alexa feature will be able to synthesize short audio clips into longer speech. To demonstrate, Prasad showed a video of a deceased grandmother reading a grandson a bedtime story. “This required inventions where we had to learn to produce a high-quality voice with less than a minute of recording versus hours of recording,” he said. 
He added that it involved framing the problem “as a voice conversion task and not a speech generation path.” Ambient intelligence: reactive, proactive, predictive As Prasad explained, ambient intelligence is both reactive (responding to direct requests) and proactive (anticipating needs). It accomplishes this through the use of numerous sensing technologies: vision, sound, ultrasound, depth, mechanical and atmospheric sensors. These signals are then acted on. All told, this capability requires deep learning as well as natural language processing (NLP). Ambient intelligence “agents” are also self-supervising and self-learning, which allows them to generalize what they learn and apply it to new contexts. Alexa’s self-learning mechanism, for instance, automatically corrects tens of millions of defects a week, he said – both customer errors and errors in its own natural language understanding (NLU) models. He described this as the “most practical” route to GI, or the ability for AI entities to understand and learn any intellectual task that humans can. Ultimately, “that’s why the ambient-intelligence path leads to generalized intelligence,” Prasad said. What do GI agents actually do? GI requires a significant dose of common sense, Prasad said, claiming that Alexa already exhibits this: If a user asks to set a reminder for the Super Bowl, for example, it will identify the date of the big game while also converting it to their time zone, then remind them before it starts. It also suggests routines and detects anomalies through its “hunches” feature. Still, he emphasized, GI isn’t an “all-knowing, all-capable” technology that can accomplish any task. 
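The Super Bowl reminder above hinges on a mundane but essential grounding step: converting a published event time into the user's local zone before scheduling the reminder. A minimal sketch using Python's standard zoneinfo module (the kickoff time and the 30-minute lead are illustrative assumptions, not Alexa's implementation):

```python
# Convert an event time published in one zone to the user's zone,
# then compute when a reminder should fire ahead of it.
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

# Kickoff announced in Eastern time (illustrative date/time).
kickoff_et = datetime(2023, 2, 12, 18, 30, tzinfo=ZoneInfo("America/New_York"))

def local_reminder(kickoff: datetime, user_tz: str,
                   lead: timedelta = timedelta(minutes=30)) -> datetime:
    """Return the reminder time in the user's own time zone."""
    local_kickoff = kickoff.astimezone(ZoneInfo(user_tz))
    return local_kickoff - lead

r = local_reminder(kickoff_et, "America/Los_Angeles")
# 18:30 Eastern is 15:30 Pacific, so the reminder lands at 15:00 Pacific.
```

The point is that the assistant stores one canonical event time and derives each user's reminder from their own zone, rather than storing per-user times.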
“We humans are still the best example of generalization,” he said, “and the standard for AI to aspire to.” GI is already being realized, he pointed out: Foundational transformer-based large language models trained with self-supervision are powering many tasks with far less manually labeled data than ever before. An example of this is Amazon’s Alexa Teacher Model, which gleans knowledge from NLU, speech recognition, dialogue prediction and visual scene understanding. The goal is to take automated reasoning to new heights, with the first step being the “pervasive use” of commonsense knowledge in conversational AI, he said. In working toward this, Amazon has released a dataset for commonsense knowledge with more than 11,000 newly collected dialogues to aid research in open-domain conversation. The company has also invented a generative approach that it deems “think-before-you-speak.” This involves the AI agent learning to externalize implicit commonsense knowledge (“think”), using a large language model combined with a commonsense knowledge graph (such as the freely available semantic network ConceptNet). It then uses that knowledge to generate responses (“speak”). Amazon is also training Alexa to answer complex queries requiring multiple inference steps, and enabling “conversational explorations” on ambient devices so that users don’t have to pull out their phones or laptops to explore the web. Prasad said that this capability has required dialogue-flow prediction through deep learning, web-scale neural information retrieval, and automated summarization that can distill information from multiple sources. The Alexa Conversations dialogue manager helps Alexa decide what actions it should take based on interaction, dialogue history, current inputs and queries, query-guided and self-attention mechanisms. Neural information retrieval pulls information from different modalities and languages based on billions of data points. 
Transformer-based models – trained using a multistage paradigm optimized for diverse data sources – help to semantically match queries with relevant information. Deep learning models distill information for users while holding onto critical information. Prasad described the technology as multitasking, multilingual and multimodal, allowing for “more natural, human-like conversations.” The ultimate goal is to make AI not only useful for customers in their daily lives, but also simple: intuitive enough that they want to use it and even come to rely on it. It’s AI that thinks before it speaks, is equipped with commonsense knowledge graphs, and can generate responses through explainability – in other words, it has the capability to process questions and answers that are not always straightforward. Ultimately, GI is becoming more and more realizable by the day, as “AI can generalize better than before,” Prasad said. For retail, AI learns to let customers just walk out Amazon is also using ML and AI to “reinvent” physical retail through such capabilities as futuristic palm scanning and smart carts in its Amazon Go stores. This enables the “just walk out” ability, explained Dilip Kumar, vice president for physical retail and technology. The company opened the first of its physical stores in January 2018. These have evolved from 1,800-square-foot convenience-style stores to 40,000-square-foot grocery-style ones, Kumar said. The company advanced these with its Dash Cart in summer 2020, and with Amazon One in fall 2020. Advanced computer vision capabilities and ML algorithms allow people to scan their palms upon entry to a store, pick up items, add them to their carts, then walk out. Palm scanning was selected because the gesture had to be intentional and intuitive, Kumar explained. Palms are associated with the customer’s credit or debit card information, and accuracy is achieved in part through subsurface images of vein information. 
This allows for accuracy at “a greater order of magnitude than what face recognition can do,” Kumar said. Carts, meanwhile, are equipped with weight sensors that identify specific items and the number of items. Advanced algorithms can also handle the increased complexity of “picks and returns” – or when a customer changes their mind about an item – and can eliminate ambient noise. These algorithms run locally in-store, in the cloud, and on the edge, Kumar explained. “We can mix and match depending on the environment,” he said. The goal is to “make this technology entirely recede into the background,” Kumar said, so that customers can focus on shopping. “We hid all of this complexity from customers,” he said, so that they can be “immersed in their shopping experience, their mission.” Similarly, the company opened its first Amazon Style store in May 2022. Upon entry to the store, customers can scan items on the shop floor that are automatically sent to fitting rooms or pick-up desks. They are also offered suggestions on additional buys. Ultimately, Kumar said, “we’re very early in our exploration, our pushing the boundaries of ML. We have a whole lot of innovation ahead of us.” "
13,714
2,022
"Large language model expands natural language understanding, moves beyond English | VentureBeat"
"https://venturebeat.com/ai/large-language-model-expands-natural-language-understanding-moves-beyond-english"
"Large language model expands natural language understanding, moves beyond English One of the primary use cases for artificial intelligence (AI) is to help organizations process text data. It’s an area where natural language processing and natural language understanding (NLP/NLU) are foundational technologies. One such foundational large language model (LLM) technology comes from OpenAI rival Cohere, which launched its commercial platform in 2021. The Toronto-based startup’s founders benefited from machine learning (ML) research efforts at the University of Toronto, as well as the Google Brain research effort in Toronto led by Geoffrey Hinton, which explored deep learning neural network approaches. Cohere’s goal is to go beyond research to bring the benefits of LLMs to enterprise users. 
“We had this vision of creating large language models and then giving access to businesses so that they could build cool stuff with this tech that they couldn’t build in-house,” Nick Frosst, cofounder at Cohere, told VentureBeat. To date, Cohere’s models have been based on the English language, but that is now changing. Today, the company announced the release of a multilingual text-understanding LLM that can understand and work with more than 100 different languages. It’s a multilingual world and now AI lives in it, too Cohere is not the first LLM to venture beyond the confines of the English language to support multilingual capabilities. BLOOM (which is an acronym for BigScience Large Open-science Open-access Multilingual Language Model) was officially launched in July. The BLOOM effort is backed by a series of organizations including HuggingFace and CNRS, the French national research organization. The Cohere multilingual approach is a bit different from BLOOM’s and is initially focused on understanding languages to help support different natural language use cases. Cohere’s model does not yet actually generate multilingual text like BLOOM does, but Frosst said that capability will be coming in the future. Nils Reimers, director of machine learning at Cohere, explained to VentureBeat that among the core use cases for Cohere’s multilingual approach is enabling semantic search across languages. The model is also useful for enabling content moderation across languages and aggregating customer feedback. “Cohere first focused on just English models, but we thought maybe it’s a bit boring just to focus on English models because a large majority of the population on the Earth is non-English speaking,” Reimers said. How Cohere’s LLM uses natural language understanding to become multilingual Training an LLM to be multilingual is not a trivial task. 
Reimers explained that first, Cohere built out a large corpus of question-and-answer pairs that included hundreds of millions of data points in English and non-English languages. The training looked to help determine when the same content was being presented in different languages. For example, if a line of text exists in English, the same line in Arabic or any other language is aligned to a similar mathematical vector, so that the ML system understands the two pieces of text mean the same thing. As such, for a content moderation use case, a line of hateful text, for example, can be identified regardless of the language it is written in. The natural language training also enables semantic search, such that similar pieces of news written in different languages can be identified. “Creating models like this takes a fair bit of compute, and it takes compute not only in processing all of the data, but also in training the model,” Frosst said. Looking forward, the goal for Cohere is to continue to build out its capabilities to better understand increasingly larger volumes of text in any language. “Generally, what’s next for Cohere at large is continuing to make amazing language models and make them accessible and useful to people,” Frosst said. "
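The alignment Reimers describes, where translations of the same sentence map to nearby vectors, is what makes cross-lingual semantic search work: retrieval reduces to a nearest-neighbor lookup by cosine similarity. A toy sketch with made-up three-dimensional vectors (real multilingual embeddings have hundreds of dimensions and come from a trained encoder, not a hand-written table):

```python
# Toy cross-lingual search: if translations share nearby vectors,
# the closest neighbor of an English query is its French equivalent,
# not an unrelated English sentence.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hand-crafted stand-ins for encoder outputs.
embeddings = {
    "the bank raised rates": [0.90, 0.10, 0.20],            # English
    "la banque a relevé ses taux": [0.88, 0.12, 0.19],      # French translation
    "recipe for apple pie": [0.10, 0.90, 0.30],             # unrelated
}

query = embeddings["the bank raised rates"]
best = max((k for k in embeddings if k != "the bank raised rates"),
           key=lambda k: cosine(query, embeddings[k]))
# best is the French translation, because its vector sits closest.
```

This also shows why the same index serves moderation and feedback aggregation: once everything lives in one vector space, "find similar content" is language-agnostic.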
13,715
2,022
"Purpose, potential and pitfalls of customer-facing voice AI | VentureBeat"
"https://venturebeat.com/ai/purpose-potential-and-pitfalls-of-customer-facing-voice-ai"
"Guest Purpose, potential and pitfalls of customer-facing voice AI Back in 2018, Google’s CEO, Sundar Pichai, demoed the Google Duplex assistant at the company’s developer conference. The assistant mimicked realistic and nuanced human speech patterns (complete with “ums” and “ahhs”) as it made an appointment for a haircut and booked a table at a restaurant while in fluent conversation with a real person. Although the audience erupted in rapturous applause at the achievement, in the Twittersphere and beyond, observers were quick to question what they were hearing. Some called the likeness “scary,” and others felt like a deception was at play — with the human on the other end of the line completely unaware that they were speaking with a bot. In the end, the whole episode wasn’t great PR for artificial intelligence or for sophisticated voice technology. 
But that’s unfortunate because the truth of the matter is that voice AI has tremendous potential to empower consumers and deliver value to the businesses that deploy it — provided there is a clear understanding of its purpose and of its limitations. Voice AI in the wild One of the best examples is food ordering. Sky-high inflation has been pushing costs up for restaurant owners, while labor shortages have left them struggling to keep up with customer demand (which has been slow to abate post-lockdown). Some smaller restaurants have been letting the phone ring out, while some larger ones have even been forced to keep drive-through customers waiting, leading to frustration. So they’re turning to voice technology in increasing numbers to pick up the slack. It makes complete sense. So long as the voice technology is sophisticated enough — and you might be surprised how smart it is right now — having voice AI take an order allows employees to get on with the important work of making tasty food and ensuring dine-in customers have a great experience. In this scenario, no one is deceived — this kind of voice AI tends to declare its nonhuman status if it isn’t already obvious. Customers are happy, and service industry professionals are supported, not undermined. Good service, not servants So how about this idea: Rather than each of us having our own personal humanoid Jeeves (as in the Google Duplex scenario), what if different brands and businesses had their own assistants that formed a broad ecosystem of voice helpers? This way, businesses could assert their own brand identity and cultivate one-on-one relationships with their customers without an intermediary. 
For their part, customers could deal with a voice AI that truly knows the goods or services the company has to offer, rather than an Alexa-style assistant that attempts to fumble its way through. Restaurant voice assistants, for example, become familiar with the menu. They learn favorite combinations; they can make changes and suggestions; they learn how to upsell. Why couldn’t that be replicated throughout the rest of hospitality, or retail, or even professional services? The answer is: It could, and it’s starting to happen. Rather than thinking about the creation of sentient AI servants, we should start thinking of voice assistants as functional tools that we can reorient in this way. In the “real world,” most of us don’t have servants or envoys to negotiate for us — but we do rely on knowledgeable, pleasant and efficient frontline staff. Why not replicate systems that work rather than outmoded ones? I believe that’s what we’ll start to do, and the experiences of brands and customers alike will become more vivid and fruitful because of it. Importantly, this isn’t about replacing staff with an army of voice assistants. It’s about giving employees the time and space they need to focus on critical tasks, streamlining clunky ordering systems, and helping businesses grow sales. And it’s also about enabling us as customers to step away from screens and devices to order in the most natural and human way we know how — with our voices. Zubin Irani is CRO of SoundHound. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! 
"
13,716
2,022
"How to design branded voice experiences that engage and benefit customers long-term | VentureBeat"
"https://venturebeat.com/automation/how-to-design-branded-voice-experiences-engage-benefit-customers-long-term"
"Guest How to design branded voice experiences that engage and benefit customers long-term Nearly every aspiring brand is looking to build and monetize long-term, sticky relationships with customers. In 2022, however, competing for their attention can feel like a daunting, almost insurmountable task. As demand has increased for decreasingly available attention, some desperate tactics have emerged: Hong Kong redesigning its traffic lights to try to catch the attention of pedestrians who are staring at their phones; or an increasing number of brands relying on “dark patterns” in an attempt to access more data and secure more eyeballs. The good news is, there’s a simpler way emerging to productively engage with an increasingly distracted populace. 
More than a quarter (27%) of the global online population is using the voice search feature on their mobile devices, and 500 million people use Siri every day. High-quality voice experiences are a particularly promising medium for engaging with consumers in a meaningful, responsive and consistent way, where the value exchange between brand and customer is rebalanced. We’ve learned — by necessity — how to use our fingers to do the talking. Now a much more intuitive and natural form of communication, voice, allows people to do things they already need and want to do — but with the ease and simplicity of talking to a friend. Direct, immediate bridges For brands, voice experiences can create a direct and immediate bridge between consumer needs and a brand’s products and services, without friction. It’s this speed of access that’s already changing consumer behaviors today. Erica, the virtual assistant that Bank of America brought to market in 2018, has been used over one billion times by customers wanting information about transactions, refunds and charges. Voice experiences also allow brands to engage with consumers when screens are cumbersome (that is, while cooking, driving or running on a treadmill), or even when multitasking (such as while going for a walk or listening to a podcast). Today’s leading brands are already recognizing commerce’s shift from screens to voice, and are moving first to maintain and grow market share in this new channel. Juniper Research forecasts that the value of ecommerce transactions via voice assistants will top $19.4 billion in 2023. We’ve seen the likes of SONOS, Disney, Samsung and Bank of America take leading positions in the evolution of branded voice services. 
And as Wikipedia project sponsor the Wikimedia Foundation has said: “When a virtual voice assistant answers a question using Wikimedia knowledge, people don’t always know where the information comes from.” That’s the justification for its effort to design a new sound logo that identifies Wikimedia content, and it’s a testament to increasing market demand for voice. The number of users of voice assistants multiplied from 544.1 million in 2015 to 2.6 billion in 2021, the foundation says. Designing new journeys from the ground up To unlock the value that voice can provide, brands must design branded voice experiences that are truly additive to their users’ daily lives. As studies have shown, “A 20% increase in simplicity results in a 96% increase in customer loyalty. It can result in consumers being 86% more likely to purchase brands and 115% more likely to recommend those brands to others.” However, it’s not enough to simply add generic voice commands on top of existing screen-based experiences while continuing to drive attention and interaction to screens to execute the final command. Nor is this a matter of moving everything inside an app to voice. To win customer loyalty and maintain brand trust, entirely new journeys must be designed from the ground up — journeys that are optimized for their context of use and, often, that move seamlessly between voice and visual interfaces. So what should brands keep in mind when setting out to build impactful voice experiences for the first time? Here are five best practices we’ve come back to time and again: Prioritize simple use cases The reduction of cognitive load is what gives consumers a sense of relief when using voice technology, and that simple sense of relief is what fundamentally makes voice experiences so valuable. 
Instead of investing in complex, multi-turn use cases that sound impressive, prioritize implementing simple use cases that allow you to deliver a voice experience that will reduce time-to-value for users and make consumers’ lives easier. Single-minded, simpler use cases are easier to learn and increase the likelihood of meeting users’ expectations. Quality is everything One of the most common complaints about voice experiences isn’t the lack of complex, advanced interactions, but rather the frequency of misunderstanding of requests. Tolerance for latency is wafer-thin. This means ensuring that every connection point in the voice assistant — from device connection to automatic speech recognition (ASR) and natural language processing (NLP) (that is, reliably parsing, tagging and delivering meaning from utterances) — is successfully being delivered before moving on to anything more advanced. Be mindful of when voice is the most efficient experience (and when it’s not) While a branded voice assistant can be viewed through the same lens as a branded app — as a container of use cases that are tethered to a brand and separated from others — creating a voice assistant is not a matter of replicating an existing app’s functionality via audio channels. We’ve already learned that good apps don’t simply replicate good websites, and the same logic holds true in the move from apps to voice assistants. When consumers know and can articulate what they want, voice can work beautifully to fulfill their needs easily and quickly. For many interactions, including when consumers are unsure precisely what they want or where the range of decisions is complex, screens will likely remain optimal. And as more and more visual interfaces are embedded in our lives (and virtual lives), voice will increasingly complement these rich visual experiences as part of the same product or service experience. 
Build continuity across a variety of devices To create a seamless voice experience, you must understand the variety of devices that consumers are accessing throughout the day, then orchestrate an experience across those devices. For example, a user may want to order a pair of sneakers in the morning while in their kitchen via their smart speakers, and then check in on their order during their evening commute via smart earbuds, so your voice experience needs to support that. Furthermore, interaction patterns, audio cues, language and tone should be consistent to build familiarity and trust with your brand over time. Own your brand experience Just as apps and websites have become central materializations of a brand’s personality and what the brand stands for, voice experiences can and should evolve to be the same. From your use case to wake word, voice, content and performance, every element must be considered and combined to create the brand experience your customers will enjoy. Creating a direct relationship with consumers through a branded voice assistant is the most effective way to own your brand and capture the signals from customers to rapidly improve your service experience. Enabling a ‘head-up’ culture Designing a voice-based multimedia user experience from the ground up can feel like an overwhelming undertaking. Brands should remember that the best voice experiences simply address existing user needs and make them markedly faster and easier to meet. The goal is not to inundate consumers with novel technology they only use once. Rather, the goal is to build everyday usage and monetize relationships with consumers in those places where voice can make their lives easier. If we do, the hope is that we can avoid redesigning our cities for distracted citizens and instead enable a “head-up” future where everyone is just a little bit more present. John Goscha is the founder and CEO of Native Voice. 
DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,717
2,022
"What is computer vision (or machine vision)? | VentureBeat"
"https://venturebeat.com/2022/06/17/what-is-computer-vision-or-machine-vision"
"What is computer vision (or machine vision)? Table of contents Key areas of computer vision Best applications for computer vision How established players are tackling computer vision Machine vision startup scene What machine vision can’t do The process of identifying objects and understanding the world through the images collected from digital cameras is often referred to as “computer vision” or “machine vision.” It remains one of the most complicated and challenging areas of artificial intelligence (AI), in part because of the complexity of many scenes captured from the real world. The area relies upon a mixture of geometry, statistics, optics, machine learning and sometimes lighting to construct a digital version of the area seen by the camera.
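As a toy illustration of the geometric side of that mixture, the sketch below turns a handful of 2D facial landmarks into a scale-invariant signature of pairwise distance ratios. The landmark names and coordinates are invented for this example, and production face recognition uses learned embeddings rather than hand-built ratios:

```python
import math
from itertools import combinations

def ratio_signature(landmarks):
    """All pairwise landmark distances, divided by the largest one so the
    signature does not change when the whole face is scaled up or down."""
    points = list(landmarks.values())
    dists = [math.hypot(p[0] - q[0], p[1] - q[1])
             for p, q in combinations(points, 2)]
    longest = max(dists)
    return [round(d / longest, 4) for d in dists]

# Hypothetical landmark coordinates; "doubled" is the same face at 2x scale.
face = {"left_eye": (30, 40), "right_eye": (70, 40),
        "nose_tip": (50, 60), "mouth": (50, 80)}
doubled = {name: (2 * x, 2 * y) for name, (x, y) in face.items()}
```

Because only ratios are kept, the same face photographed closer or farther away yields the same signature, which is the property that makes distance-ratio comparisons useful at all.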
Many algorithms deliberately focus on a very narrow and focused goal, such as identifying and reading license plates. Key areas of computer vision AI scientists often focus on particular goals, and these particular challenges have evolved into important subdisciplines. Often, this focus leads to better performance because the algorithms have a more clearly defined task. The general goal of machine vision may be insurmountable, but it may be feasible to answer simple questions like, say, reading every license plate going past a toll booth. Some important areas are: Face recognition: Locating faces in images and identifying the people using ratios of the distances between facial features can help organize collections of photos and videos. In some cases, it can provide an accurate enough identification to provide security. Object recognition: Finding the boundaries between objects helps segment images, inventory the world, and guide automation. Sometimes the algorithms are strong enough to accurately identify objects, animals or plants, a talent that forms the foundation for applications in industrial plants, farms and other areas. Structured recognition: When the setting is predictable and easily simplified, something that often happens on an assembly line or an industrial plant, the algorithms can be more accurate. Computer vision algorithms provide a good way to ensure quality control and improve safety, especially for repetitive tasks. Structured lighting: Some algorithms use special patterns of light, often generated by lasers, to simplify the work and provide more precise answers than can be generated from a scene with diffuse lighting from many, often unpredictable, sources. Statistical analysis: In some cases, statistics about the scene can help track objects or people.
For example, tracking the speed and length of a person’s steps can identify the person. Color analysis: A careful analysis of the colors in an image can answer questions. For instance, a person’s heart rate can be measured by tracking the slightly redder wave that sweeps across the skin with each beat. Many bird species can be identified by the distribution of colors. Some algorithms rely upon sensors that can detect light frequencies outside the range of human vision. Best applications for computer vision While the challenge of teaching computers to see the world remains large, some narrow applications are understood well enough to be deployed. They may not offer perfect answers but they are right enough to be useful. They achieve a level of trustworthiness that is good enough for the users. Facial recognition: Many websites and software packages for organizing photos offer some mechanism for sorting images by the people inside them. They might, say, make it possible to find all images with a particular face. The algorithms are accurate enough for this task, in part because the users don’t require perfect accuracy and misclassified photos have little consequence. The algorithms are finding some application in areas of law enforcement and security, but many worry that their accuracy is not certain enough to support criminal prosecution. 3D object reconstruction: Scanning objects to create three-dimensional models is a common practice for manufacturers, game designers and artists. When the lighting is controlled, often by using a laser, the results are precise enough to accurately reproduce many smooth objects. Some feed the model into a 3D printer, sometimes with some editing, to effectively create a three-dimensional reproduction. The results from reconstructions without controlled lighting vary widely. Mapping and modeling: Some are using images from planes, drones and automobiles to construct accurate models of roads, buildings and other parts of the world. 
The precision depends upon the accuracy of the camera sensors and the lighting on the day it was captured. Digital maps are already precise enough for planning travel and they are continually refined, but often require human editing for complex scenes. The models of buildings are often accurate enough for the construction and remodeling of buildings. Roofers, for example, often bid jobs based on measurements from automatically constructed digital models. Autonomous vehicles: Cars that can follow lanes and maintain a good following distance are common. Capturing enough detail to accurately track all objects in the shifting and unpredictable lighting of the streets, though, has led many to use structured lighting, which is more expensive, bigger and more elaborate. Automated retail: Store owners and mall operators commonly use machine vision algorithms to track shopping patterns. Some are experimenting with automatically charging customers who pick up an item and don’t put it back. Robots with mounted scanners also track inventory to measure loss. [Related: Researchers find that labels in computer vision datasets poorly capture racial diversity ] How established players are tackling computer vision The large technology companies all offer products with some machine vision algorithms, but these are largely focused on narrow and very applied tasks like sorting collections of photos or moderating social media posts. Some, like Microsoft, maintain a large research staff that is exploring new topics. Google, Microsoft and Apple, for example, offer photography websites for their customers that store and catalog the users’ photos. Using facial recognition software to sort collections is a valuable feature that makes finding particular photos easier. Some of these features are sold directly as APIs for other companies to implement. Microsoft also offers a database of celebrity facial features that can be used for organizing images collected by the news media over the years. 
People looking for their “celebrity twin” can also find the closest match in the collection. Some of these tools offer more elaborate details. Microsoft’s API, for instance, offers a “describe image” feature that will search multiple databases for recognizable details in the image like the appearance of a major landmark. The algorithm will also return descriptions of the objects as well as a confidence score measuring how accurate the description might be. Google’s Cloud Platform offers users the option of either training their own models or relying on a large collection of pretrained models. There’s also a prebuilt system focused on delivering visual product search for companies organizing their catalog. The Rekognition service from AWS is focused on classifying images with facial metrics and trained object models. It also offers celebrity tagging and content moderation options for social media applications. One prebuilt application is designed to enforce workplace safety rules by watching video footage to ensure that every visible employee is wearing personal protective equipment (PPE). The major computing companies are also heavily involved in exploring autonomous travel, a challenge that relies upon several AI algorithms, but especially machine vision algorithms. Google and Apple, for instance, are widely reported to be developing cars that use multiple cameras to plan a route and avoid obstacles. They rely on a mixture of traditional cameras as well as some that use structured lighting such as lasers. Machine vision startup scene Many of the machine vision startups are concentrating on applying the topic to building autonomous vehicles. Waymo, Pony AI, Wayve, Aeye, Cruise Automation and Argo are a few of the startups with significant funding that are building the software and sensor systems that will allow cars and other platforms to navigate themselves through the streets.
Some are applying the algorithms to helping manufacturers enhance their production line by guiding robotic assembly or scrutinizing parts for errors. Saccade Vision, for instance, creates three-dimensional scans of products to look for defects. Veo Robotics created a visual system for monitoring “workcells” to watch for dangerous interactions between humans and robotic apparatuses. Tracking humans as they move through the world is a big opportunity whether it be for reasons of safety, security or compliance. VergeSense, for instance, is building a “workplace analytics” solution that hopes to optimize how companies use shared offices and hot desks. Kairos builds privacy-savvy facial recognition tools that help companies know their customers and enhance the experience with options like more aware kiosks. AiCure identifies patients by their face, dispenses the correct drugs and watches them to make sure they take the drug. Trueface watches customers and employees to detect high temperatures and enforce mask requirements. Other machine vision companies are focusing on smaller chores. Remini, for example, offers an “AI Photo Enhancer” as an online service that will add detail to enhance images by increasing their apparent resolution. What machine vision can’t do The gap between AI and human ability is, perhaps, greater for machine vision algorithms than some other areas like voice recognition. The algorithms succeed when they are asked to recognize objects that are largely unchanging. People’s faces, for instance, are largely fixed and the collection of ratios of distances between major features like the nose and corners of eyes rarely change very much. So image recognition algorithms are adept at searching vast collections of photos for faces that display the same ratios. But even basic concepts like understanding what a chair might be are confounded by the variation.
There are thousands of different types of objects where people might sit, and maybe even millions of examples. Some are building databases that look for exact replicas of known objects but it is often difficult for machines to correctly classify new objects. A particular challenge comes from the quality of sensors. The human eye can work in an expansive range of light, but digital cameras have trouble matching performance when the light is lower. On the other hand, there are some sensors that can detect colors outside the range of the rods and cones in human eyes. An active area of research is exploiting this wider ability to allow machine vision algorithms to detect things that are literally invisible to the human eye. Read more: How will AI be used ethically in the future? AI Responsibility Lab has a plan VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
13,718
2,020
"Driving into the future from autonomous to AI | VentureBeat"
"https://venturebeat.com/ai/driving-into-the-future-from-autonomous-to-ai"
"VB Lab Insights Driving into the future from autonomous to AI This article is part of a Technology and Innovation Insights series paid for by Samsung. With new connective technology, autonomous systems, and innovative business models, the transportation industry is on the cusp of a transformation that could expand the market by more than a trillion dollars over the next decade and drastically reduce road injuries, one of the top 10 causes of death worldwide. Mobileye is the global leader in the development of Advanced Driver Assistance Systems (ADAS) and the artificial intelligence (AI) that is critical in developing autonomous driving. This technology is deployed by more than 25 global automakers across 60 million vehicles worldwide and counting.
The Co-Founder, CEO, and President of Mobileye, Professor Amnon Shashua, believes that new transportation technology is going to profoundly transform our society, an idea he explored with Young Sohn, President and Chief Strategy Officer of Samsung Electronics, in the latest episode of The Next Wave with Young Sohn. Sophistication and accuracy Up until now, Shashua explains, there have been two distinct categories of the products we rely on. The first category is complex and highly sophisticated products where occasional flaws can’t be avoided, and are therefore tolerated – for example, smartphones or computers, which can process and produce incredible amounts of information, but are also vulnerable to glitches, hacking, and viruses. The second category includes products that are less complex but that must perform tasks precisely and reliably. Airplanes, for example, do one thing very well with almost no room for error. Self-driving cars represent an unprecedented combination of both categories. Autonomous driving is based on cutting-edge software, data analytics, AI, and hardware. But like an airplane, they must function without fail. Bringing these two characteristics together is a major challenge that the automotive industry must tackle. Autonomous vehicles must make decisions fast and reliably Against this background, Shashua explains the various obstacles engineers must overcome when developing self-driving cars. First and foremost, the criteria for the decision-making process of robotic engines need to be standardized and regulators need to agree on clear definitions for recklessness and caution. After all, a robotic engine can only understand caution based on clear rules that it will be able to follow consistently. Another point that needs to be clarified is how robotic engines will detect the environment around them and how to process data quickly enough.
To do that, Mobileye uses two separate fully self-driving, redundant systems: one based on cameras alone and one based only on radar and LiDAR (Light Detection and Ranging) sensors. These two subsystems will eventually be combined into an AV that essentially has two fully self-driving systems within it, ensuring a very low chance of failure at any given moment. The next step: Robotaxis While the development of autonomous vehicles has made great progress so far, there are still important steps needed to move away from a niche market and towards a mass market. Shashua believes that robotaxis are an attractive next step in order to become a mass consumer product, for three good reasons: First, the tolerance for robotaxi costs is high. To add a fully self-driving system to consumer vehicles would add considerable costs, but if self-driving systems are first introduced through ride-hailing or public transit networks, that cost becomes more feasible. For example, a transit network company can recoup the investment in self-driving technology in the long run — they won’t need to employ as many drivers, and can use data-driven insights to optimize fleet use based on demand. Secondly, the service is geographically scalable. A robotaxi service does not necessarily have to operate everywhere. The business can also work if that service is only available in a specific location. Finally, from a regulatory point of view, it’s easier to regulate only one particular fleet instead of a consumer product that is available everywhere, on the way to regulation that is ready for consumer AVs. Computer vision adds value to other branches Mobileye also develops computer vision, which forms the technological basis for autonomous driving. But these technological advances have other uses as well, for example, supporting people who are blind or visually impaired. Shashua realized this very early on and founded OrCam in addition to Mobileye ten years ago.
OrCam develops smart portable mini cameras which can read printed and digital texts from every surface in real-time, as well as recognize faces, products, and banknotes. These technologies fall at the intersection of business value, consumer interest, and public good. As Shashua discusses with Sohn, there is incredible potential to improve lives with these innovations, as long as we have the persistence to pursue them and the wisdom to use them in the right way. Catch up on all the episodes of The Next Wave including conversations with VMWare CEO Pat Gelsinger, the CRO & CMO of Factory Berlin, the CEO of Solarisbank, the CEO of Axel Springer, the CEO of wefox, and Rafaèle Tordjman, President and Founder of Jeito Capital. VB Lab Insights content is created in collaboration with a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected]. "
13,719
2,022
"Tapping into the pulse of marketing with data visualization | VentureBeat"
"https://venturebeat.com/datadecisionmakers/tapping-into-the-pulse-of-marketing-with-data-visualization"
"Community Tapping into the pulse of marketing with data visualization Chances are you’ve heard the phrase “a picture is worth a thousand words.” What you may not know is that depending on the context, this can be somewhat of a misleading statement. Hear us out. The human brain is hardwired to ingest images 60,000 times faster than text, accounting for 90% of the information we process every day being visual. These numbers make a convincing case as to why a picture deserves a little more credit than just a thousand words. But we didn’t dig up a century-old proverb to nitpick on its statistical shortcomings. Instead, we wanted to highlight how the sentiment behind the phrase has never been more apropos for marketers who are left to stay afloat in an expanding sea of raw data every passing day.
Refining raw data with visualization Clive Humby was onto something when he proposed data as the new oil to his fellow C-suite executives at the 2006 Association of National Advertisers (ANA) Master of Marketing summit. A decade and a half later, his prediction came to fruition as data completely superseded introspection and guesswork as a bottom line for marketing success. What makes Humby’s foresight truly impressive, however, isn’t the eventual rise of data as king in advertising. It’s more so the fact that data, just like crude, is practically useless in its rawest form. To elaborate, oil goes through a refining process before hitting the pumps. The same goes for raw data. It needs contextualizing and must be broken down first into something more structured and ultimately actionable. This is where visualization comes into the picture. Once the datasets have been cleaned and standardized, visualization steps in as the last critical step of the refining process to remodel them into intelligible graphics that put actionable insights on full display. Harnessing the power of data visualization Consider the contrast between a raw table of numbers and a scatter plot of the same data. The difference should be rather stark unless you happen to be a secret mathematical mastermind. Seriously, to the eyes of an average Joe, the raw table appears as a random concoction of numbers that tell nothing substantive. On the other hand, the scatter plot makes plain the positive correlation binding the variables together right from the get-go. That’s the power of visualization. It harnesses the ability to unlock hidden patterns, making it possible to connect the dots between disparate data points at once.
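The pattern a scatter plot makes visible can also be quantified. As a small sketch, using made-up ad-spend and conversion figures, this computes the Pearson correlation coefficient that a scatter plot would let you eyeball:

```python
# Quantifying the correlation a scatter plot makes visible.
# The ad-spend and conversion figures are invented for illustration.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

ad_spend = [100, 200, 300, 400, 500]     # dollars per week
conversions = [12, 25, 33, 48, 55]       # conversions per week
```

A coefficient near +1 confirms numerically what the plotted points show at a glance: as spend rises, conversions rise with it.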
For marketers who must repeatedly ask loaded questions such as which acquisition funnels lead to conversion, which time of the day are prospects most active and the like, visualization can help cut through the pile of raw data standing in the way of getting those questions answered. And the best part? Visualization knows no boundaries. Whether it’s your team, board members or external stakeholders, presenting the data through graphics primes even the most boring of datasets to be readily processed and utilized regardless of who’s on the receiving end. Choosing the right graphics for data visualization As wonderful as data visualization is, figuring out which type of visual aid would best represent the dataset can get tricky. And going with a suboptimal choice is hardly an option when doing so carries the risk of confusion or, worse, misinterpretation. Thanks to Dr. Andrew Abela, who put forward a comprehensive diagram on picking the right chart for different data types, choosing a visual can be boiled down into four basic criteria: Comparison. Drawing a comparison between datasets over a specified period to pinpoint highs and lows. E.g., website traffic breakdown by source. Relationship. Establishing a correlation to see whether given variables positively or negatively influence one another. E.g., regional influence on sales growth. Distribution. Gauging the range of a dataset to better understand how variables interact while checking for outliers. E.g., fluctuation in average monthly lead conversion rate across a fiscal year. Composition. Charting out how individual parts make up a whole to create hierarchies within a given dataset. E.g., breakdown of marketing expenditures by strategic priorities. 
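The four criteria above can be collapsed into a simple lookup. This sketch is a loose simplification of the idea (real chart choice also depends on how many series and data points you have; the mapping below is one reasonable reading, not a definitive rule):

```python
# A simplified chart-picker keyed on the four criteria described above.
# The mapping is an illustrative simplification, not an exhaustive guide.
CHART_FOR = {
    "comparison": "column chart (line chart if more than ~10 items)",
    "relationship": "scatter plot",
    "distribution": "line chart or histogram",
    "composition": "pie chart",
}

def suggest_chart(goal: str) -> str:
    return CHART_FOR.get(goal.lower().strip(), "start with a column chart")
```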
With these criteria in mind, use the following overview as further guidelines to single out the visual aid that’d best serve your needs: Column chart A column chart refers to a graphical display in which vertical bars – the height of each proportionate to the category it represents – run across the chart horizontally. Nine times out of ten, a column chart will do the trick if you’re looking for a side-by-side comparison of 10 or fewer items. Line chart What if you have more than ten datasets to be stacked against another? A line chart is your best bet. Unlike the column chart, a line chart runs a line through a series of dots. While it’s best known for highlighting the ups and downs across various data points, a line chart can also effectively compare the trends between different metrics by plotting multiple lines in a single chart. Scatter plot A scatter plot is all about mapping out the correlation between two datasets. Also known as the cause-and-effect diagram, a scatter plot can help you see whether a set variable influences the other and which direction (positive or negative) the correlation is running towards. Pie chart A pie chart is used to deal with categorical variables to see how the total amount is split amongst them. It provides a general sense of the part-to-whole relationship that comes in handy when you want to find out the most and least effective channels for driving visitors to your website. Word cloud Perhaps the newest addition to the data visualization stack, a word cloud refers to a cluster of words displayed in different colors and sizes. It’s a nifty tool to visualize how the audience thinks about a given topic and discover the best and worst keywords when it comes to traffic generation. Making headway with data visualization All said and done, visualization is the present and future of marketing analytics. The good news is, with all you’ve seen and read so far, you’re ready to get the most mileage out of visualization. 
But if there’s anything I hope you’ve learned from this piece, it’s that images speak much louder than words. It’s time to take your marketing data visual. Sophie Eom is cofounder and CEO of Adriel.com. "
13,720
2,022
"Why our digital future hinges on identity and rebuilding trust | VentureBeat"
"https://venturebeat.com/security/why-our-digital-future-hinges-on-identity-and-rebuilding-trust"
"Guest Why our digital future hinges on identity and rebuilding trust The adoption of a password-free future is hyped by some of the biggest tech companies, with Apple, Google, and Microsoft committing to support the FIDO standard this past May. Along with the Digital ID Bill reintroduced to Congress this past July, we’re poised to take a giant leap away from the password to a seemingly more secure digital future. But as we approach a post-password world, we still have a long way to go in ensuring the security of our digital lives. As companies continue developing solutions to bridge us to a passwordless world, many have prioritized convenience over security. Methods of two-factor authentication (2FA) and multi-factor authentication (MFA) such as SMS or email verification — or even the use of biometrics — have emerged as leading alternatives to the traditional username/password.
But here’s the catch: Most of these companies are validating devices alone and aren’t properly leveraging this technology, leaving the door open for bad actors. The blind spots of biometrics Companies employing biometrics claim to use biometric data to secure and simplify account access, but there is an underlying question. Are they tying an account holder’s biometrics to the account itself or to the account holder? In many cases, the answer is that they use a combination of biometric data and legacy technology. This exposes account holders to account takeovers and other fraudulent activities. Another issue is that some verification companies use a one-time scan of the account holder’s ID or other government-issued documents. They then link that data to an existing account that still utilizes a username/password, which the company holds. Security experts don’t recommend this, as static credentials create a false sense of trust. If a breach occurs, a user’s account is still susceptible to impersonation and fraud. And then there is the shortcoming of facial recognition technology, which hasn’t advanced to the point that it can consistently log you into accounts. In recent years, studies have shown that the facial recognition technology behind many verification solutions frequently fails to recognize women and people of color, unfairly prolonging the time it takes to process login requests and potentially blocking people’s access to critical resources. Verify people, not devices Today’s security realm takes the approach of validating devices. Biometrics and other security layers — such as 2FA/MFA — were never intended to identify the actual person behind the screen, which is a shortfall. We know that these methods for online security are only effective when you know who is using the device. 
Suppose someone claims to be you and links their fingerprint to your account. That is convenient for the bad actor but a disaster for everyone else. However, a competing philosophy is emerging: We should validate people and not strictly devices. Powering this new security philosophy is Multi-Factor Identity (MFI). MFI fulfills the vision of a secure and passwordless future by knowing the real identity of someone online — the missing link to keeping accounts protected and reducing fraud. While biometrics and 2FA/MFA are important steps, the future of account security does not rely solely on them, but on technology that eliminates these problems by verifying people, not devices. The most effective approach will be pairing real-time authentication measures with a government-issued ID to verify users. A more human and safe internet There’s a larger vision here regarding online security, which MFI is helping us reach. It’s the idea that we can build a more human, safer internet through identity verification — and eventually, a more trusting overall digital experience. Today’s online world lacks trust. In the early days of the internet and computing, it was a smaller, more trusting community, where networked computers were operated by known people. You could more easily know who someone was, and a password could reasonably protect both the account and the user. But as the internet has grown, that trust has virtually disappeared. And it’s difficult to gain that trust back, whether online or over the phone, without knowing the identity of others. Trust is the paramount issue today, especially if we are to fulfill the promise of emerging digital spaces, such as NFTs, the metaverse, and more. Our digital world is massive and growing so rapidly that the metaverse could push it to a breaking point without more trusted ways to identify each other. 
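The "verify people, not devices" idea can be sketched in a few lines. The following Python sketch is purely illustrative, not Nametag's or any vendor's actual API; all class names, function names, and the match threshold are assumptions. The point it demonstrates: an action is approved only when an authentic government ID, a live biometric match against that ID, and the name on the account all line up, so the approval binds to a person rather than to a registered device.

```python
# Illustrative sketch of Multi-Factor Identity: approve an account action
# only when the evidence binds to the person, not merely to a device.
# All names and thresholds here are hypothetical, not a real vendor API.
from dataclasses import dataclass

@dataclass
class IdentityEvidence:
    selfie_match_score: float   # live selfie vs. ID portrait, 0.0-1.0
    id_document_valid: bool     # government-issued ID passed authenticity checks
    id_name: str                # name extracted from the document
    account_name: str           # name on the account being accessed

def verify_person(evidence: IdentityEvidence, threshold: float = 0.9) -> bool:
    """Approve only if the document is authentic, the live face matches it,
    and the document identity matches the account holder."""
    return (
        evidence.id_document_valid
        and evidence.selfie_match_score >= threshold
        and evidence.id_name.casefold() == evidence.account_name.casefold()
    )
```

Note that a stolen device or a fingerprint linked by an impostor fails here: without a live match against an authentic document bearing the account holder's name, no single factor is enough.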
We’re excited to see increased adoption of technology that solves the problem of helping companies trust the identity of their users and unlocking faster, more secure account access. MFI can help us get there, rebuilding the trust that helped start the internet and now ensuring that it is sustainable. Aaron Painter is CEO and founder of Nametag. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,721
2,023
"How edge devices and infrastructure will shape the metaverse experience | VentureBeat"
"https://venturebeat.com/virtual/how-edge-devices-and-infrastructure-will-shape-the-metaverse-experience"
"How edge devices and infrastructure will shape the metaverse experience If it lives up to its billing, the metaverse will offer a unique opportunity for users to immerse themselves in a virtual environment. For this to be achieved, there must be seamless interaction between the participant and the virtual environment in real time. That means interaction must be highly responsive, ensuring that all graphical elements update rapidly to create a sense of presence in the virtual world. In this, edge data centers and devices are set to play a crucial role. To ensure a successful experience, the user must be able to view the rendered virtual environment with utmost clarity, and the system must respond in real time to any gestures or actions by the user. Edge computing, networking and related advancements are required for the metaverse to prove out estimates like Accenture’s. That consultancy expects the metaverse to fuel a $1 trillion revenue opportunity by the end of 2025. 
Clearly, building the metaverse’s foundation requires layer upon layer of behind-the-scenes technology, such as data centers and network infrastructure. The latency requirements of the metaverse are near zero, necessitating data centers in close proximity to users, with blazing-fast network speeds. As we delve further into a new era of virtual reality, decentralized edge data centers and devices are set to play a crucial role. By providing seamless access to the metaverse for users from all corners of the globe, these technologies hold the key to transforming our digital experiences in the near future. Edge computing can revolutionize the way we experience the virtual world. The essence of edge computing in the metaverse One of the potential problems that can arise in both traditional and cloud data center architectures is latency, which can create slow response times and delays, ultimately leading to a suboptimal user experience. In addition, poor reliability or availability can deter visitors, while high bandwidth costs can significantly impact the organization’s metaverse budgets. As the demand for metaverse applications continues to grow, these potential challenges could be a significant barrier to adoption. Therefore, it is essential to overcome these obstacles to ensure that the metaverse offers an immersive and seamless experience for users while also being cost-effective and reliable for organizations. Cloud-native edge infrastructure can address these shortcomings and provide optimized service chaining. It can handle a tremendous amount of data processing while delivering cost-effective, terabit-scale performance and reduced power consumption. 
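A back-of-envelope calculation shows why near-zero latency pins data centers close to users. The figures below are illustrative assumptions, not numbers from the article: light travels through optical fiber at roughly 200,000 km/s, and VR experiences are often held to a motion-to-photon budget on the order of 20 ms.

```python
# Back-of-envelope sketch (illustrative assumptions, not article data):
# propagation delay alone bounds how far away a data center can sit.
FIBER_KM_PER_MS = 200.0  # light in fiber: ~200,000 km/s, i.e. 200 km per ms

def max_one_way_distance_km(latency_budget_ms: float, processing_ms: float) -> float:
    """Farthest a data center can be if the network round trip plus
    server-side processing must fit inside the latency budget."""
    network_ms = latency_budget_ms - processing_ms
    return max(network_ms, 0.0) / 2 * FIBER_KM_PER_MS  # halve for round trip
```

Under these assumptions, a 20 ms target with 10 ms spent on rendering leaves only about 1,000 km of fiber each way, before queuing and switching delays eat further into the budget: a regional edge site, not a distant hyperscale cloud region.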
In doing so, edge computing can move past closed networking models to meet the demanding data processing requirements of the metaverse. “Edge computing allows data to be processed at or near the data source, implying that commands and processes will occur promptly. As the metaverse will require massive data simultaneously, processing data quickly and seamlessly depends on proximity,” Prasad Joshi, SVP and head of emerging technology solutions at Infosys , told VentureBeat. “Edge computing offers the ability to process such information on a headset or on the device, thereby making that immersive experience much more effective.” The emergence of the metaverse has also led to a significant surge in data generation, and the data centers that handle these workloads generate enormous amounts of heat due to the massive power consumption required to enable AR/VR (augmented reality/virtual reality) applications. The power, space and cooling limitations of legacy architecture further exacerbate this data surge. While these challenges impact consumer-based metaverse applications, the stakes are much higher for enterprise use cases. Joshi believes that the real-time immersive nature of the metaverse depends on the expansive use of the cloud and says that digital twins will have a significant impact through edge technologies. “As more and more AI applications experience extreme growth, it will be critical to employ edge computing to ensure cloud stability. Hyperscalers must adapt to acknowledge the emerging shift toward sustainability in cloud and edge computing to keep up with increasing demand,” said Joshi. As metaverse applications continue to proliferate, the need for service function chaining to automate network traffic flows becomes more critical. Unfortunately, service chaining based on virtual machines still suffers from duplication, latency, high costs and resource inefficiencies. 
Ranny Haiby, CTO of networking, edge/IoT and access at Linux Foundation, believes that end devices will always be constrained by power, size and weight. Therefore, a lot of the processing can be offloaded to the edge. “Based on the processing, storage and network latency requirements of different metaverse applications and protocols, there will likely be a continuum of processing starting from the end-user devices, the network edge and the cloud. The edge emerges as the sweet spot to fulfill all requirements for a multimodal and tactile user experience,” said Haiby. Haiby explained that as the metaverse calls for ubiquitous high-bandwidth connectivity, there also needs to be a high degree of automation in how edge resources are deployed. “Edge automation and orchestration technologies will play a key role in the success of metaverse use cases. In an ideal world, the edge services supporting the metaverse apps should seamlessly follow the end users as they move between cell sites, wireless access points and fixed access points,” added Haiby. Shaping a reliable virtual future through the edge Achieving optimal latency control in the metaverse requires more than just edge computing; it also involves edge connectivity, which relies on consumer broadband. While faster broadband does offer lower latency, there are other factors to consider when it comes to latency control beyond just speed. To effectively manage latency, it is critical to minimize the hops or devices between the user navigating the metaverse and the software that interprets their actions and translates them into what the user “sees” and what others see. By reducing the handling, organizations can ensure minimal delay between the user’s actions and the system’s response, resulting in a seamless user experience. Fortunately, edge computing technology can aid in reducing latency and improving the quality of service for users in the metaverse. 
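Haiby's device-to-edge-to-cloud continuum can be sketched as a simple placement rule: run each workload on the cheapest tier whose round-trip latency still fits its budget. The tier names, latency figures, and cost ratios below are invented for illustration only.

```python
# Illustrative placement rule for the processing continuum Haiby describes.
# Latencies and costs are made-up ballpark figures, not measured data.
TIERS = [  # (name, typical round-trip ms, relative cost per unit of compute)
    ("device", 1, 10.0),
    ("edge", 10, 3.0),
    ("cloud", 80, 1.0),
]

def place_workload(max_latency_ms: float) -> str:
    """Return the cheapest tier whose round-trip latency fits the budget."""
    candidates = [(cost, name) for name, rtt, cost in TIERS if rtt <= max_latency_ms]
    if not candidates:
        raise ValueError("no tier can meet this latency budget")
    return min(candidates)[1]
```

With these toy numbers, tactile interactions that tolerate only a few milliseconds stay on the device, immersive rendering with tens of milliseconds of headroom lands at the edge, and latency-insensitive batch work drifts to the cheaper cloud, which is exactly why the edge emerges as the sweet spot for interactive metaverse workloads.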
With edge computing, data-gathering clients can be scattered densely, which allows for real-time analytics and big data processing. This means that data is processed quickly, and the results can be used to improve the overall user experience. David Treat, senior managing director and co-lead of Accenture’s Metaverse Continuum business group, says that as the metaverse advances, edge computing will have an increasingly crucial role in shaping its future while simultaneously improving performance and efficiency and shaping new business models. “Technical performance is crucial to metaverse applications and their real-time experiences. The faster, smoother experience offered by edge computing can translate into increased revenue,” Treat told VentureBeat. “The cost efficiency of distributing processing power and storage across devices through edge computing can help save on infrastructure costs while reducing the risk of a single point of failure.” Treat says edge technologies, such as edge AI, caching and devices, will play a critical role in creating a reliable, latency-free immersive experience for users. He explained that the evolution of edge technologies would benefit a range of applications — from the safety of industrial equipment to supporting a customer’s individual needs and enabling the virtual to exist as a true extension of real-world experiences. “A network of nodes can be connected through edge devices and servers, bringing data processing closer to where it is generated. This could be as simple as edge caching, making content on a VR device smoother, with fewer delays and glitches,” said Treat. 
“As this advances, AI can translate real-time data into insights, evolving to discern preferences in decision-making and personalizing user experiences in response to real-time data.” Likewise, Micah White, VP of research and development at enterprise software solutions firm CGS, says that edge computing devices can be used to process and store data from AR/VR devices to provide more immersive remote-training experiences. “Today, edge devices are used to track and process the data from sensors and cameras on a headset, which could be used to display a simulated environment to trainees in global locations. Such use cases unlock the ability to instantly provide immersive step-by-step guidance, coaching and collaboration to anyone with a device. This will be a major game changer for adopting metaverse in the enterprise — specifically for frontline workers, but the use cases across industry and roles are much broader,” White told VentureBeat. But White says that there are still several challenges for companies looking to integrate edge devices and infrastructure into their metaverse strategies, and companies should consider doing the following before moving ahead: Security — Edge devices and infrastructure can be vulnerable to cyberattacks, malware, and other malicious attacks. Companies must ensure that their edge devices are secure and well protected. Accessibility — Edge devices can be challenging to access and require specific hardware. Companies must ensure that their systems can be accessed from different locations and by various devices. Cost — Setting up and maintaining an edge infrastructure can be costly. Companies must carefully consider their budget when planning for edge integration. Interoperability — It is crucial to ensure that edge devices and infrastructure are compatible with other technologies and services the company uses. 
To overcome these challenges, companies should invest in robust security measures, provide easy accessibility and promote device interoperability. Additionally, companies should look into cost-effective and cloud-based solutions that are tailored to their needs and can be easily scaled. “One big challenge facing companies that are looking to leverage edge technology for metaverse applications is that there is not one unified global solution out there to take advantage of, but rather a plethora of different technologies, platforms and providers, and many differences in terms of the level of development and availability in different regions of the world,” said Charles Tolman, CTO of music metaverse platform Pixelynx. “All things considered, edge computing is still in its infancy.” Tolman said it is essential first to educate oneself as much as possible and then take the plunge and start building. “Edge computing will continue to advance and grow to meet the needs of our business and the users we serve, but only if we step forward to take up the challenge and start figuring out how best to use it,” added Tolman. What’s next for edge in the metaverse? Mouna Elkhatib, CEO, CTO, and cofounder of edge AI processor manufacturing and solutions firm AONDevices, predicts that using edge technologies for the metaverse will generate new use cases, business models and revenue streams for emerging businesses. “Computer vision, sound recognition and motion sensing/detection at the edge will allow the metaverse to securely collect more data about the user and the user’s environment, enabling systems to use that data to offer personalized experiences and features. Doing so can significantly enhance the user’s experience for applications such as gaming, education, remote work and more,” said Elkhatib. She said such use cases would provide emerging businesses with new revenue streams, including personalized content streaming and targeted advertising. 
In addition, new business models such as subscription-based services for premium content, revenue for user-generated content, and pay-per-use models may also emerge. For his part, Pixelynx’s Tolman says that edge technologies can be leveraged to advance the Web3 vision of putting data ownership into the hands of the users and offering new business models that emphasize property rights and interoperability. “We can offer creators more and better ways to self-publish, more opportunities to connect with their audience, see greater earnings and have full transparency into how revenue is generated and royalties are paid,” Tolman said. Furthermore, he added that as the metaverse grows, and with it, the demand for new content and interactive experiences, technologies that enable greater scalability and faster data processing and transaction speeds will become increasingly critical. “Edge computing technology is one such innovation that will support the development of a sustainable metaverse at mass scale,” he said. “It is critical to that vision, so its growth is inevitable.” "
13,722
2,023
"How startup founders looking for funding should approach the metaverse | VentureBeat"
"https://venturebeat.com/virtual/how-startup-founders-looking-for-funding-should-approach-the-metaverse"
"Guest How startup founders looking for funding should approach the metaverse Although it’s difficult to pinpoint precisely how big an emerging market like the metaverse will be, Citigroup believes it could reach somewhere between $8 trillion and $13 trillion by 2030. And, if the investment banking and financial services giant is correct, in less than a decade, there will be nearly five billion people working, playing, socializing, shopping (and more) within the quickly evolving frontier. With a potential population and economy bigger than that of China, the metaverse presents a massive opportunity for smart, innovative entrepreneurs. However, despite its vastness, blazing trails into this new frontier isn’t going to be easy. Although Citigroup’s outlook on the metaverse is extremely bullish, the market faces some significant challenges. 
The biggest is the fact that the market is still in its infancy and has yet to be truly defined. As such, the market is currently extremely speculative from an investment point of view. For startup founders embarking on VC roadshows, this is the primary objection that will need to be overcome. Insights from a metaverse-savvy VC But, as Sir Winston Churchill famously said, “A pessimist sees the difficulty in every opportunity; an optimist sees the opportunity in every difficulty.” And that’s precisely what successful entrepreneurs are good at: Seeing the opportunity through the chaos. So, when pitching your metaverse startup to a room full of pragmatic VCs, it’s important to show them exactly what the business opportunity is. However, when dealing with fledgling markets, that’s often easier said than done. To help startup founders looking for funding better understand how to approach this unique market, I recently sat down with Dmitry Volkov, Ph.D., founder and general partner of Social Discovery Group. The global technology company, investment fund and venture studio is investing heavily in the metaverse and constantly evaluating who the winners in the race to metaverse greatness might be. The company recently committed an initial $20 million to metaverse-related ventures focused on social life 3.0 applications. However, as Volkov explained, the company is also evaluating broader opportunities within the market. As startup founders prepare to approach VCs, Volkov offers these important tips and insights. Tip #1: Follow the industry’s standards as they develop “One of the biggest challenges the metaverse faces is a lack of cohesion,” said Volkov. “Even though the space is exploding, in terms of hype and the number of new projects being launched, the market remains extremely fragmented. 
Unlike Web 2.0, no gateways lead to a broader metaverse. There are some great worlds like Decentraland, Sandbox, Roblox and others to explore and monetize within, but none of them are interconnected in any practical manner.” But the good news, he says, is that headway is being made. Organizations like Microsoft, Epic Games, Adobe, Nvidia, the Khronos Group and others have joined the new Metaverse Standards Forum to address the lack of interoperability that currently plagues the space. According to its website, the organization is focusing on interactive 3D assets and photorealistic rendering; human interface and interaction paradigms (including AR, VR and XR); user-created content; avatars; identity management; privacy; financial transactions; IoT and digital twins; and geospatial systems. Not only is this list a nice breadcrumb trail for entrepreneurs to follow, it highlights the industry’s top priorities. “Anyone looking to innovate within the metaverse should consider joining the Metaverse Standards Forum and other industry alliances,” said Volkov. “It’s always better to know which way the current flows before jumping in. It’s also good to know who all the players are, and where your venture fits in.” Tip #2: Focus on a minimum viable product “There’s no doubt that startup founders need to be visionary leaders,” said Volkov. “However, they must also be pragmatic. It’s great for founders to share their vision of how their company will become the next Roblox.” Volkov also noted that it’s important for startups and investors to remember that there is already a Roblox. And, it took the company nearly 15 years to become a success. The point is: Before challenging the industry heavyweights, a founder’s primary focus needs to be on identifying an untapped market opportunity and quickly developing a minimum viable product that the market will respond positively to. Once revenue is coming in, it’s all about scaling and growing the business. 
And, this is exactly where startups need to be to attract early-stage VCs. It’s much easier to sell a VC a small reality than a giant dream. Tip #3: It’s better to sell picks and shovels than mine for gold If the California Gold Rush has taught entrepreneurs anything, it’s that when you chase gold, the chances of a big payoff are slim to none. Most prospectors headed west in 1848 believed mining gold would be easy money. But the reality was that many endured extreme hardships and returned home broke. However, if those same miners had instead sold picks and shovels, they would have made money off of every prospector that came through town. “In metaverse terms, let’s say you’ve created a technology that can automatically generate personalized NPCs (non-player characters),” said Volkov. “Would the better short-term market opportunity be to create your own metaverse? Or, would it be better to license the technology as a service to metaverse platforms that need an easy way to populate new worlds?” Inworld AI went for the latter and it paid off significantly. The company recently announced the closing of a $50 million Series A backed by some of the biggest names in VC. They understood that selling picks and shovels was a much better path than mining for gold. Tip #4: Keep an eye on government regulation “All the right signs are saying that the metaverse is indeed the next big frontier for entrepreneurs,” said Volkov. “But as with all big markets, the metaverse comes with some potential downsides. Experts are warning consumers about privacy issues, mental health concerns, addiction and more.” Last year, whistleblower Frances Haugen told Congress that the metaverse would be highly addictive and rob people of even more personal information. This has many questioning how involved the government will be in regulating the metaverse. Adding more fear, the Federal Trade Commission is working to block Facebook’s recent VR acquisition. But, the U.S. 
government has its sights on more than just Facebook. Congress is currently pushing to pass its first major effort to regulate big tech since the inception of the internet. Whether you like Big Tech or not, this is something to keep an eye on, as it could make it harder to make an exit down the road. Tip #5: Beware of the philosophical arguments The metaverse has much momentum in its favor. And, it offers many benefits for entrepreneurs and businesses alike. But is it good for customers and society? “The thought of life in the metaverse brings many questions,” said Volkov. “Will we be more connected but feel lonelier? Will we live an illusionary life? Will we create an extended reality only to become an extension of vast, complex and intelligent machines? For many, The Matrix sounds like an argument against the metaverse.” Another pessimistic argument was proposed by American philosopher Robert Nozick. In his thought experiment, Nozick introduced an experience machine where people could live out fantasies like marrying their favorite Hollywood star or winning the lottery. Nozick believed this way of fulfilling desires would not be fulfilling and would prevent us from grasping a deeper reality. Will the metaverse fail because of this? Closing thoughts In Volkov’s view, life in the metaverse can be as real as it gets. Unlike Nozick’s experience machine, there will be real interaction between real people in the metaverse. And unlike The Matrix, visitors can make informed choices and exercise free will. They will have the ability to join and leave the virtual world at their discretion. “If we as entrepreneurs and creators build the metaverse right, users will have a positive and fulfilling experience,” said Volkov. “And that should be our goal. I’m confident the industry can work through its growing pains and make the metaverse a safe and profitable place for all who venture within it.” Jay T. Ripton is a business consultant and freelance writer. 
"
13,723
2,020
"CES 2020: The best ideas and products of tech's biggest show | VentureBeat"
"https://venturebeat.com/2020/01/10/ces-2020-the-best-ideas-and-products-of-techs-biggest-show/view-all"
"CES 2020: The best ideas and products of tech’s biggest show I have just about wrapped up my trip to CES 2020, the big tech trade show in Las Vegas. I walked more than 37.45 miles (over 84,385 steps) this week to scout for the best ideas and products of the show. In the name of consumerism, I took in as much of the 2.9 million square feet and as many of the 4,500 exhibitors as I could manage. These winners captured my imagination by demonstrating what might be possible with technology or what is practical for products in the near future. Some of the products won’t ship in the next year, and some may never ship. Others are already generating revenue. Three of the winners were back from last year’s list as they made interesting advances in their efforts to change the world, or my face. 
P&G Ventures’ Opte, Zero Mass Water, and Impossible Foods have taken impressive strides in the last 12 months. I’ve expanded the group to include a few more fresh winners, so we’ve got 14 on the list this year. I’m glad I traveled the extra distance to see some of these products because they make me a believer again. It’s easy to get jaded and dismiss everything. It’s tempting to skip the show. But I always find that my curiosity takes me back to Las Vegas in search of the magical products that will change our lives. Some of these picks may not seem impressive at first glance. But they all caught my attention and stayed in my mind throughout the week. I feel like I got another glimpse into our future at CES, just as I have for the last 23 years or so. I’ve included videos where I have them, and I hope you enjoy the ride. Bzigo indoor mosquito detector To our great agitation, mosquitoes are adept at avoiding human vision and then attacking us when we aren’t looking. But now we can fight back with computer vision and a laser pointer. Bzigo spent a few years researching how to spot buzzing mosquitoes, and it showed up at CES with a prototype. Bzigo’s infrared camera uses unique optics and has a processor running computer vision algorithms to actively scan the room, differentiating between a mosquito and other pixel-sized signals, such as dust or sensor noise. Bzigo trained its AI to recognize mosquitoes and their erratic flight patterns. Then it follows them until they get tired and land. Ben Resnick, head of marketing, said in an interview that the product then shines a laser pointer on the spot and notifies you by text message that it has found a mosquito. You can then go and kill it. 
The team ruled out using the laser to kill the mosquito, because it’s dangerous to have a high-powered laser zapping things in your home. The tough part is finding where the mosquito lands. Delta Air Lines/Sarcos Guardian XO Exoskeleton Above: Guardian XO is an exoskeleton for Delta’s baggage handlers The mechs are coming. Delta Air Lines CEO Ed Bastian showed up to give a keynote speech at CES 2020, and much to our surprise, he brought technology with him. Bastian and Sarcos Robotics showed off Guardian XO, an exoskeleton that operates just like a mech suit from movies like Aliens. But instead of killing Xenomorphs, this exoskeleton lets luggage handlers save their backs. It enables them to pick up objects that are 20 times heavier than normal. An employee could lift 200 pounds continuously for eight hours. Delta could implement the tech for baggage handlers as early as this year. MedWand MedWand aims to fulfill the potential of telemedicine. It was inspired by the Star Trek Tricorder, the magical device carried by Dr. Leonard “Bones” McCoy in the 1960s television show that could diagnose and heal injured people, said Samir Qamar, CEO of MedWand. MedWand has 10 different medical tools, such as a stethoscope, all built into a small handheld device that has software and a camera. It doesn’t do any healing yet, but it can broadcast test results from the patient to the doctor, who can then conduct an office visit remotely. In real time, the doctor can collect a lot of vital signs and detect and follow numerous medical conditions from across town or around the world. As a physician, Qamar is biased toward allowing doctors to do the diagnosis themselves, rather than having the MedWand use AI to figure out what’s wrong. The device is expected to cost $400, and approval from the Food and Drug Administration is pending. Hyundai S-A1 flying taxi Above: Hyundai’s prototype for an Uber flying taxi. 
Hyundai and Uber showed off a prototype for a flying car, an innovation that inventors have dreamed about for more than 100 years. During its press event, Hyundai showed a small-scale model and a VR trailer of what a future city could look like, with ground transportation hubs where automated shuttles could deliver people. The passengers could then hop aboard a four-seat taxi with a couple of vertical take-off and landing engines. At Hyundai’s booth, the company displayed a full-scale prototype. The companies still have a long way to go in solving problems like regulatory approval, battery life, and other details. The aircraft has a range of 60 miles, a top speed of 180 miles per hour, and a cruising altitude of 1,000 to 2,000 feet above ground. Let’s hope these guys will get us where we need to go safely. Toyota’s Woven City This connected city of the future is purely an idea at this point. But Akio Toyoda, CEO of Toyota, gushed with excitement about it, and he said the automotive company is quite serious about building the town on the site of a former car factory in Japan. Toyota Woven City will combine a wide array of technologies, including robotics, smart homes, autonomous vehicles, the internet of things, digital health, and sustainable energy. It will be built on 175 acres of land. It will have streets with three types of lanes: one for pedestrians, one for bicycles and scooters, and one for autonomous vehicles. The idea is to test how to create a city from the ground up. The company hopes to house Toyota researchers and employees, families, retirees, retailers, students, and more. The company will build the city as a virtual world first, to test ideas and learn from mistakes, Toyoda said. 4moms Mamaroo Sleep Bassinet Parents of a newborn lose about six weeks of sleep in the first year alone. Those days are a distant memory for me. But plenty of new parents in the thick of this battle are in need of restoring balance and rhythm to their lives. 
They need to establish a sustainable cycle for their baby’s sleep patterns. 4moms, maker of the Mamaroo baby rocker, recognizes that parents aren’t invincible. So the company has followed up on its Mamaroo infant seat and created the 4moms Mamaroo Sleep Bassinet, which can sway and soothe a baby (see video) using motions that mimic what parents do, said CEO Gary Waters in an interview with VentureBeat. It can also play noises that put the baby to sleep. This made me think of a father watching a football game or playing a video game while trying to cosset a baby. Waters said that we all have these moments where we need a break, and it’s OK to use something like a soothing bassinet to keep a baby happy. The Bluetooth-enabled bassinet has five rocking, swaying, or vibrating motions, as well as four different white noise options to get your baby to sleep. Nvidia G-Sync 360Hz esports display https://youtu.be/3w2PofkHFVQ Many TVs and monitors refresh their screens at 60 frames per second, or 60 hertz (Hz). But that’s too slow for gamers, who over the years have been the first to adopt screens with refresh rates of 120Hz or 240Hz. At CES, Nvidia and Asus came out with a new G-Sync gaming display technology that can run at 360Hz. At 360Hz, the screen is refreshed 360 times a second, or about once every 2.8 milliseconds. In a paper published at Siggraph Asia, Nvidia studied how much better esports players could perform with faster displays. A 60Hz display with 100 milliseconds of end-to-end latency was the baseline. In Counter-Strike: Global Offensive, flick-shot performance improved 28% with 120Hz displays at 54.7 milliseconds of latency and 33% with 240Hz displays at 34.5 milliseconds. And with 360Hz displays operating at 20 milliseconds, the improvement was 37%. I played with the new monitor at Nvidia’s place at the show, and I was shocked at how much better I could play a round of Counter-Strike: Global Offensive. 
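The frame-time arithmetic behind those refresh rates is easy to verify; this quick sketch (my own arithmetic, not Nvidia's code) converts a refresh rate in hertz to milliseconds per frame:

```python
def frame_time_ms(refresh_hz: float) -> float:
    """Milliseconds between successive screen refreshes at a given rate."""
    return 1000.0 / refresh_hz

for hz in (60, 120, 240, 360):
    print(f"{hz}Hz -> {frame_time_ms(hz):.1f} ms per frame")
# A 360Hz panel refreshes about every 2.8 ms, versus 16.7 ms at 60Hz.
```

That six-fold shrink in the interval between frames is where the responsiveness gains in the Siggraph Asia study come from.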
When it came to both reaction timing and accuracy, I was able to improve my play greatly. In one test on a 60Hz screen, I couldn’t hit any targets with a sniper scope. But at 360Hz, I hit four targets. On the accuracy test, I hit seven targets in 45 seconds on the CS:GO test on the 60Hz monitor. On the 360Hz monitor, I hit 13 targets. I also saw another journalist who was better than me run circles around my scores, and he also did much better with 360Hz, as you can see in the video. The 24.5-inch display will debut later this year. RollBot Charmin’s vision for a better bathroom includes RollBot. When summoned with a smartphone using Bluetooth, it delivers a fresh roll of toilet paper to you so you won’t be stranded. Its futuristic design uses self-balancing technology, with a bear-like look to the front panel. I saw it working in Procter & Gamble’s booth at CES. The robots were fairly small, but sturdy enough to deliver a roll of ultra-soft to someone in need. Charmin notes that people use the bathroom six or seven times a day, and spend on average 156 hours a year on the toilet. But the tech is largely the same as it was a hundred years ago. Lora DiCarlo Osé Family Above: The Osé Onda uses microrobotics to stimulate G-spot orgasms. I can’t say I’ve tried this orgasm giver-outer. But I did get to see a demo that showed those microrobotics in action at the Pepcom party. Looks like fun to me, and more importantly, it showed that female entrepreneurs such as Lora DiCarlo CEO Lora Haddock could bring diversity and sensitivity to a market that had long been dominated by male-oriented sex companies. And while there is competition, Lora DiCarlo is the company whose ideas about microrobotics sex toys paved the way for sex-positive products to be on display at CES. 
Along with other pioneers in the sex tech business such as Lioness, Lora DiCarlo helped convince the Consumer Technology Association that female-friendly technology had as much right to be at the show as male-oriented companies that didn’t have nearly as much technology. After CTA evaluated this, it restored an award for Lora DiCarlo and let the sex-positive companies back onto the CES show floor. More than a dozen sex-tech companies showed up in the health and wellness section of CES this year. Lora DiCarlo brought two new sex toys to the show, demonstrating a commitment to moving the tech forward. The borders of sex tech and porn are still messy. But I believe that the ideas that Lora DiCarlo brought to CES have contributed to a healthy conversation about health and wellness. Luple Digital Caffeine Above: Luple’s founders Namsu Lee (right) and Yongduck Kim. Luple is a South Korean startup that spun out of Samsung to create “digital caffeine.” The team led by CEO Yongduck Kim and CTO Namsu Lee figured out that certain wavelengths of light could have different effects on people. One kind of light-emitting diode (LED) light could make people feel more energetic, and another could make people feel more relaxed so they could go to sleep, Lee said in an interview with me. If you are studying and want to be able to concentrate, Luple will show you one kind of light. If you are sleepy and want to take a nap, it will shine a different light. The company is studying what kind of wavelength is healthy for the human body, and it says this “human-centric lighting” keeps you more alert than a cup of coffee. And Luple is one of a new group of companies that are leading the way with non-drug treatments for physical problems such as drowsiness or sleeplessness. Zero Mass Water Above: Cody Friesen is CEO and inventor of Zero Mass Water’s Hydropanels that pull water from the air. As I noted last year, Zero Mass Water is one of those companies that will change the world. 
The technology is like something out of Frank Herbert’s classic science fiction novel Dune. It comes at a time when Australia is in flames and when Syria’s ghastly civil war was caused in large measure by a severe drought. Zero Mass Water created the Source Hydropanel, which can extract water from the air. It does so by using solar panels to create the right conditions to speed the process of condensing water from even arid air. That is magic, as it requires no electrical input, pipes, or public utility infrastructure. It is a way of using the resources that are already in abundance around us — the water that exists in our air. It is the brainchild of Cody Friesen, associate professor of materials science at Arizona State University. And now the company has some new innovations. The Source Rexi Hydropanel is half the size of the previous version, is easier to install, and can produce more water on a daily basis. The company is also introducing its new “water-as-a-service” business model. Friesen said in an interview it has deployed multiple Source Fields, or large-scale hydropanel arrays that create an ongoing supply of water for communities and businesses. In places like Dubai and Australia, cities are purchasing large amounts of Source Field water. Friesen said his own home in Arizona now operates on a couple of Zero Mass Water’s Source Hydropanels, which produce more than 600 bottles of water a month, or more than enough for his family of four. And he lives in the dry air of Arizona. Once again, I had a chance to drink Zero Mass Water’s water at CES, as I downed a cool cup of the company’s water from a rooftop Source Rexi Hydropanel. Impossible Pork Above: Impossible Foods’ meatless pork. The Impossible Burger 2.0 is on a mission to save the lives of a lot of cows. And now the company, Impossible Foods, is going to save some pigs too, as our porcine friends are the source of the most popular meat in the world. 
Last year, Impossible Foods made its debut at CES 2019 and showed off version 2.0 of the Impossible Burger. It tasted much better than its predecessor and other veggie burgers because it was spiced with heme, an iron-containing molecule in blood that carries oxygen and is found in living organisms. The company was founded nine years ago by scientist Patrick Brown, a Stanford professor who felt that our habit of eating meat wasn’t sustainable. Brown figured out that if he could generate a lot of heme from plants, he could recreate the taste of animal meat. Chains such as Burger King have adopted the meatless burger. And now Impossible Foods has created a meatless version of pork. I tried it out and it tasted good. There’s a lot of tech in this food as well, and let’s hope it will save a lot of pigs. The company’s goal is to eliminate animal farming by 2035. Opté Precision Skincare Wand Procter & Gamble showed up at CES 2019 with the Opté Precision Skincare Wand. And it came back this year with an even better version. It scans your skin with a blue LED light to find your age spots. A microprocessor analyzes the age spots and customizes a serum/makeup combination to apply to them instantly. It uses inkjet technology to deposit the customized serum to cover each imperfection, careful to avoid putting the serum on non-blemished skin. I played guinea pig once again, and Thomas Rabe, the inventor of the Opté (funded by P&G Ventures), waved the wand over my face, magically covering up my age spots. It’s only temporary, and lasts about a day, akin to makeup. But the serum can help improve your skin over time. It took 10 years of development and over 40 patents to bring it to market. Opté, with proprietary algorithms and printing technology, can treat just about any kind of skin. It doesn’t use expensive lasers, lightening creams, or makeup. P&G expects it to come out this year as a premium product, but it hasn’t set the price yet, Rabe said in an interview with VentureBeat. 
Honorable mentions Joué Music Instrument, Gillette Treo shaver for caregivers, Neon, SmellSense, Samsung Ballie, Segway S-Pod, and the Hydraloop Water Recycler were also compelling. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. Join the GamesBeat community! Enjoy access to special events, private newsletters and more. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,724
2,023
"Aska A5 is a flying electric car that can take off vertically | VentureBeat"
"https://venturebeat.com/business/aska-a5-is-a-flying-electric-car-that-can-take-off-vertically"
"Aska A5 is a flying electric car that can take off vertically Aska has created a flying car. The prototype of the Aska A5 electric flying car, which can do a vertical takeoff and landing (eVTOL), made its debut at CES 2023. The size of a big SUV, the Aska A5 is the world’s first four-seater electric vehicle that can travel by road and fly up to 250 miles on a single charge. The company is also announcing the Aska On-Demand ride service (expected to launch in 2026), which will feature a fleet of Aska vehicles operating on-demand in major cities and their surroundings. Reinventing Transportation Our highways were once innovations, but now they’ve become congested. People are spending too much of their lives in traffic. There needs to be a “step change,” Aska said: a new transportation solution that addresses the biggest challenges we face – cost of living, climate change, affordable housing, effective infrastructure, and quality of life. 
It’s something new that was predicted long ago but is finally becoming a reality: the flying car. “Our unveil at CES represents something that has never been accomplished in the world, but which humans have dreamed of for decades: a fully functional, full-scale prototype of a Drive & Fly electric Vertical Takeoff and Landing, a real flying car. We’re making history with Aska and defining the next 100 years of transportation,” said Guy Kaplinsky, CEO of Aska, in a statement. “Aska is positioned as a new generation vehicle that combines the convenience of an automobile with the ease and efficiency of VTOL and STOL flight. Aska is a vehicle that addresses not only consumers, there is also significant business potential in emergency response use, military use, as well as on-demand ride-sharing mobility services.” Aska requires minimal updates to current infrastructure. To perform a vertical take-off or landing, Aska requires only a compact space, such as a helipad or vertiport. The vehicle fits in existing parking spaces, can be charged at home or at EV charging stations, and its range-extender engine runs on premium gasoline purchased at existing automotive gas stations. Flying Car as a Service Aska A5 is available for pre-order. The company is developing an affordable on-demand ride-sharing service that utilizes its eVTOL vehicles. Targeting availability in major cities and surrounding areas by 2026, Aska’s ride-sharing program will have certified pilots pick rideshare customers up at their homes and fly them to their destinations. The company has launched free early-bird registration for Aska On-Demand. Innovative engineering firsts At first glance, the Aska A5 doesn’t look like any vehicle you’ve ever seen before. It bridges modern automotive and aviation design. Powering the Aska A5 is a proprietary power system that features lithium-ion battery packs and a gasoline engine that acts as an onboard range extender. 
This dual energy source delivers a 250-mile flight range and drastically increases power source reliability. In drive mode, Aska packs in-wheel motor technology, allowing all four wheels to be placed outside the fuselage for all-wheel-drive traction, better aerodynamics, and maximized interior space to comfortably seat four passengers. In flying mode, the vehicle’s wings with six rotors unfold, allowing the vehicle to either take off vertically or do conventional runway takeoffs. The large wing is optimized for gliding, smooth landings, and efficient energy consumption, while each tilt rotor is utilized for vehicle control. Aska can take off vertically from a compact space like a helipad. It can also use a conventional runway takeoff and landing, which can improve the vehicle’s energy consumption efficiency. “In the U.S. alone, there are around 15,000 airfields with runways,” said Maki Kaplinsky, chair and chief operating officer of Aska, in a statement. “Our innovative engineering enables Aska to take off from a runway super fast using our unique in-wheel motor technology. This is a revolution in aviation, enabling Aska to take off in less than five seconds with a runway of 250 feet which brings the closest experience to a F-18 Super Hornet fighter jet taking off from an aircraft carrier for our customers.” Kaplinsky said pilots will have plenty of options for how and where to fly the vehicle as a form of “last mile” transportation. Safety features The big bummer about flying cars has always been the safety thing. Aska said it designed its vehicle to the highest safety standards. For example, Aska has large wings and, in the event of an emergency, the large wings can glide the craft to a safe landing. Aska is equipped with dual energy sources, both batteries and an engine. The six propellers on the wings ensure better redundancy for safe landings. 
The best-in-class hybrid propulsion system provides a minimum of 30 minutes of reserve flight time, a critical requirement of the Federal Aviation Administration today. Aska also includes a ballistic parachute to save the whole aircraft in case of emergency. In 2020, Aska signed a Space Act Agreement with NASA to advance its participation in the Advanced Air Mobility National Campaign, jointly organized by the FAA. In 2022, the FAA accepted Aska through its intake board, and the company is progressing toward type certification. Full-scale flight testing will start after CES. The Aska A5 is targeted for commercialization in 2026. Pre-order reservations are now being accepted at www.askafly.com. Based in Los Altos and Mountain View, California, Aska was founded by serial entrepreneurs Guy and Maki Kaplinsky in 2018. Their previous startup, IQP Corporation, was a pioneer in the internet of things and developed a code-free application environment. IQP was acquired by GE Digital in 2017. "
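Kaplinsky's short-takeoff claim (250 feet of runway in under five seconds) can be sanity-checked with constant-acceleration kinematics. The sketch below is my own back-of-envelope arithmetic, not figures from Aska, and it assumes uniform acceleration over the full runway:

```python
# Back-of-envelope check of the quoted short-takeoff figures:
# 250 ft of runway covered in 5 seconds, assuming constant acceleration.
runway_ft = 250.0
takeoff_s = 5.0

accel = 2 * runway_ft / takeoff_s**2         # d = a*t^2/2  ->  a = 2d/t^2
liftoff_speed_fps = accel * takeoff_s        # v = a*t
liftoff_speed_mph = liftoff_speed_fps * 3600 / 5280

print(f"acceleration: {accel:.0f} ft/s^2")            # 20 ft/s^2 (~0.6 g)
print(f"liftoff speed: {liftoff_speed_mph:.0f} mph")  # ~68 mph
```

At roughly 0.6 g and a liftoff speed near 68 mph, the claim is aggressive for a road-legal vehicle but physically plausible for an in-wheel-motor ground run.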
13,725
2,019
"Procter & Gamble shows off surprisingly cool tech in ordinary products | VentureBeat"
"https://venturebeat.com/business/procter-gamble-shows-off-surprisingly-cool-tech-in-ordinary-products"
"Procter & Gamble shows off surprisingly cool tech in ordinary products Gillette's heated razor Procter & Gamble (P&G) is 182 years old, but this week marks its first appearance at CES, the big tech trade show in Las Vegas. The company said its new focus is on leading disruption through technology, and it showed some very cool tech in what would otherwise be very ordinary products. It’s part of the internet of things trend, and P&G is focused on putting sensors and artificial intelligence into things like skin advisers, razors, and blemish removers. I was impressed with the company’s effort to put technology into ordinary products in such a way that it just fades into the woodwork. P&G said it is using technology to transform every aspect of the experience consumers have with its product lines, which range from Gillette to Olay. 
The products that make up P&G’s LifeLab concept include: SK-II Future X Smart Store is a traveling learning lab and pop-up store that uses facial recognition, smart sensors, and computer vision technology to provide next-generation smart skincare counseling. Olay Skin Advisor is an online beauty tool that uses artificial intelligence to analyze your selfies and make custom skincare recommendations. It is trained on 50,000 algorithms that can do things like calculate your age and come up with tailored advice. This launched last year and has already drawn more than 5 million visitors. It’s now getting an upgrade with even better tech. Aria is a connected home fragrance device that can distribute custom levels of scent throughout your home, managed conveniently through a mobile app. Funai Electric provided the scent-jet technology used to distribute the fragrance. Above: P&G’s smart toothbrush. Image Credit: Dean Takahashi Oral-B Genius X with artificial intelligence is a toothbrush that combines the knowledge of thousands of human brushing behaviors to assess individual brushing styles and coach users to achieve better brushing habits. The AI technology tracks where people are brushing in their mouth and offers personalized feedback on the areas that require additional attention. Above: P&G’s Opte Image Credit: Dean Takahashi Opté. After 10 years of development and over 40 patents, P&G Ventures, a startup studio within P&G, is introducing Opté, which combines proprietary algorithms and printing technology with skincare to scan, detect, and correct imperfections with precision application for visibly flawless skin tone. When you move the blue light device over your face, it zeroes in on the blemishes and treats those areas only, not the skin that surrounds it. Funai Electric provided the inkjet microfluidics technology used in the Opté. Above: Gillette’s heated razor turns orange when it is on. Image Credit: Dean Takahashi Gillette self-heating razor. 
This razor heats itself so you can enjoy a shave with a warm, comfortable blade. It has sensors that shut it down if the heat gets too high. I felt it and it was surprisingly warm, even though it looks like any other fancy metal razor. DS3 is an engineered soap that can clean with a lot less water. In fact, P&G estimates it can save 800 million gallons by eliminating the water wasted each day. "
13,726
2,023
"Qualcomm launches Snapdragon Satellite for two-way messaging for smartphones | VentureBeat"
"https://venturebeat.com/data-infrastructure/qualcomm-launches-snapdragon-satellite-for-two-way-messaging-for-smartphones"
"Qualcomm launches Snapdragon Satellite for two-way messaging for smartphones The Iridium constellation. Qualcomm introduced its Snapdragon Satellite platform to support premium smartphones with two-way satellite messaging. Qualcomm and Iridium entered into an agreement to bring satellite-based connectivity to next-generation premium Android smartphones, and Garmin plans to collaborate on emergency messaging support. Apple recently launched something similar, which lets people in wilderness areas call for emergency help via satellite when they are out of wireless range. Snapdragon Satellite offers truly global coverage from pole to pole and can support two-way messaging for emergency use, SMS texting, and other messaging applications – for a variety of purposes such as emergencies or recreation in remote, rural and offshore locations. 
This industry-leading solution also provides the opportunity to expand emergency and two-way satellite messaging beyond smartphones to other devices needing global messaging capabilities. Qualcomm made the announcement at CES 2023, the big tech trade show in Las Vegas this week. Snapdragon Satellite will provide global connectivity using mobile messaging from around the world, starting with devices based on the flagship Snapdragon 8 Gen 2 Mobile Platform. Powered by Snapdragon 5G Modem-RF Systems and supported by the fully operational Iridium satellite constellation, Snapdragon Satellite will enable OEMs and other service providers to offer truly global coverage. The solution for smartphones utilizes Iridium’s weather-resilient L-band spectrum for uplink and downlink. Emergency messaging on Snapdragon Satellite is planned to be available on next-generation smartphones launched in select regions starting in the second half of 2023. “Robust and reliable connectivity is at the heart of premium experiences. Snapdragon Satellite showcases our history of leadership in enabling global satellite communications and our ability to bring superior innovations to mobile devices at scale,” said Durga Malladi, senior vice president and general manager, cellular modems and infrastructure, Qualcomm Technologies, in a statement. “Kicking off in premium smartphones later this year, this new addition to our Snapdragon platform strongly positions us to enable satellite communication capabilities and service offerings across multiple device categories.” Beyond smartphones, Snapdragon Satellite can expand to other devices, including laptops, tablets, vehicles and IoT. As the Snapdragon Satellite ecosystem grows, OEMs and app developers can differentiate and offer unique branded services taking advantage of satellite connectivity. 
Snapdragon Satellite is planned to support 5G Non-Terrestrial Networks (NTN), as NTN satellite infrastructure and constellations become available. “Iridium is proud to be the satellite network that supports Snapdragon Satellite for premium smartphones,” said Matt Desch, CEO of Iridium, in a statement. “Our network is tailored for this service – our advanced, LEO satellites cover every part of the globe and support the lower-power, low-latency connections ideal for the satellite-powered services enabled by the industry-leading Snapdragon Satellite. Millions depend on our connections every day, and we look forward to the many millions more connecting through smartphones powered by Snapdragon Satellite.” Garmin also offered words of support. “Garmin welcomes the opportunity to expand our proven satellite emergency response services to millions of new smartphone users globally,” said Brad Trenkle, vice president of Garmin’s outdoor segment, in a statement. “Garmin Response supports thousands of SOS incidents each year and has likely saved many lives in the process, and we are looking forward to collaborating with Qualcomm.” "
13,727
2,023
"CES 2023 tech trends to watch | VentureBeat"
"https://venturebeat.com/games/ces-2023-tech-trends-to-watch"
"CES 2023 tech trends to watch Steve Koenig, vice president of research at the CTA, talks about CES trends for 2023. CES will present us with a tsunami of products, but Steve Koenig, vice president of research at the Consumer Technology Association, the group that puts on the big tech trade show in Las Vegas, helped sort it out by pointing out key trends to watch. Among the things he foresees: Enterprise technology will drive innovation forward and help pull us out of a recession. We’ll turn to robotics, AI, and the metaverse to deal with shortages of skilled workers and other things that have become scarce in the post-pandemic world. Koenig foresees metaverse as a service and the metaverse of things. It’s the ultimate way to combine buzzwords. Koenig said that innovation in enterprise tech comes from both big and small companies. But a recovery is vulnerable in part because we aren’t sure what impact the global economic downturn will have on the technology supply chain, he said. “It remains vulnerable,” he said. “You need only look at China (and its struggles with a new wave of COVID) to see how vulnerable it is.”
Yesterday, the CTA said that U.S. technology retail revenues will fall 2.4% to $485 billion in 2023 from $497 billion in 2022. This new data puts revenues slightly above pre-pandemic levels, following a three-year surge in consumer technology spending that peaked at a record-breaking $512 billion in 2021. While CTA anticipates a looming recession and inflation will weigh against consumer spending in the coming year, consumer technology industry revenues will remain roughly $50 billion above pre-pandemic levels. Koenig noted that shipping costs are coming down and pandemic-induced supply chain shortages may be abating. He said that semiconductor demand is softening and chip inventories are rising. “The bad news is we are moving from a chip shortage to potentially an oversupply,” Koenig said. “The downside risk of oversupply is we might see chip architectures deferred as we work through this inventory.” He noted there is a shortage of 10 million skilled workers in the U.S., and enterprises can’t hire enough workers. “Across the global economy, businesses are struggling to find workers. There are layoffs, but humans are nice to have.” And still there is stubborn inflation and rising interest rates; the Federal Reserve raised rates six times in 2022. He said 60% of economists surveyed believe the U.S. will slip into a recession in 2023. But he pointed to an optimistic parallel: the great recession of 2008 and 2009 saw consumer innovations materialize like 4G technology, smartphones, tablets, netbooks and mobile broadband. And Koenig believes that we’ll see similar waves of innovation this year with 5G industrial and internet of things applications, connected intelligence, autonomous systems, and quantum computing.
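The CTA revenue forecast cited above is easy to sanity-check: a 2.4% decline from $497 billion lands at roughly $485 billion, as a few lines of arithmetic confirm (figures are the article's, rounded to whole billions):

```python
# CTA forecast cited above: U.S. technology retail revenues,
# in billions of U.S. dollars.
revenue_2022 = 497
decline_rate = 0.024  # projected 2.4% drop for 2023

revenue_2023 = revenue_2022 * (1 - decline_rate)
print(round(revenue_2023))  # -> 485, matching the $485 billion forecast
```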
He thinks that across this decade we can expect to see digital transformation with cloud, AI, cybersecurity, software-as-a-service, supply chain, and retailing on the consumer side, and changes in the enterprise with 5G enterprise and industrial, Web3, smart factories, autonomous systems, and the metaverse. In fact, he suggested we’ll see the “metaverse of things,” whatever that means. He said enterprises will deploy these technologies that will underpin the “entire global economy.” Venturing out on a limb, Koenig said, “The metaverse is closer than you think.” He acknowledged the metaverse is a speculative term, but he said it is a real trend, even if it has been greeted with skepticism. He noted the driver on the enterprise side is digital twins, like the designs of virtual factories that precede the building of the factories in real life for companies like BMW and Mercedes. Koenig also said that electric vehicles and the charging systems that go with them will galvanize economic activity. As autonomous vehicles become real, they will help offset other trends. For instance, self-driving trucks can offset the shortage of truck drivers in the U.S. Koenig also noted that entertaining people in cars and the growth of screens and subscription services in cars will be a catalyst for growth. Koenig also foresees growth in digital health, mental wellness, and virtual reality therapeutic applications. AI tech is spreading rapidly. He noted how John Deere is adopting AI for autonomous tractors that can operate 24/7 and help us deal with the need to feed the ever-growing population of the world. And he noted that gaming will fuel growth, as the U.S. now has 164 million gamers ages 13 to 64. He said three quarters of the population plays games, and the average time played per week has gone up from 16 hours a week in 2019 to 24 hours a week now. Mobile gaming has been key to that, as has the trend around gaming as a way to connect and socialize.
Lastly, he said that the services economy continues to grow, with things like grocery delivery services catching on. About 31 cents of every dollar generated in the tech economy is now about services. "
13,728
2,015
"Withings rolls out $150 version of its handsome Activite health watch | VentureBeat"
"https://venturebeat.com/mobile/withings-rolls-out-150-version-of-its-handsome-activite-health-watch"
"Withings rolls out $150 version of its handsome Activite health watch The Withings Activite Pop in Azure, Shark Grey, and Sand colors. LAS VEGAS — The health tech and wearables company Withings announced a new version of its handsome Activité health watch today, the Activité Pop. The new watch is much less expensive than its forerunner, selling for $150 versus the Activité’s $450. The new Pop also comes in more colors — Azure, Shark Grey, and Sand. The Pop is an analog watch that features a lightweight design, like its predecessor. But it uses slightly less deluxe materials — a PVD-coated watchcase and a smooth silicone strap — to get the price down. With the new Pop, Withings may be answering a market demand for fitness wearables at lower price points. But, it hopes, the cool design of the device will keep people wearing it.
“The activity tracker category has a huge problem with abandonment, and so consumers don’t really get to see the benefit of long-term data and the impact it can have on their health,” Withings CEO Cédric Hutchings said in a statement. “It is time wearables step up to what they claim to be!” he added. Like the original Activité, the Pop has two hand dials, one showing the time and a sub dial showing percentage progress of specific activity goals. The step count goal is set in the accompanying Withings app. At night, it monitors sleep quality and wakes the user up with a gentle vibration. A single standard watch battery powers it for up to 8 months. It’s water resistant up to 30 meters. The new watch will be available in limited quantities at bestbuy.com on January 5th, 2015 for $149.95. It’ll become available at Best Buy stores nationwide and online in March 2015. "
13,729
2,022
"The DeanBeat: The best of CES 2022 as seen from afar | VentureBeat"
"https://venturebeat.com/technology/the-deanbeat-the-best-of-ces-2022-as-seen-from-afar"
"The DeanBeat: The best of CES 2022 as seen from afar BMW's iX E Ink can change its color. Oh, my butt is sore from sitting so much. That was one of the hazards of CES 2022, which I covered for a second year in a row from afar, in the comfort of my own home. At CES 2020, I walked more than 37.45 miles (over 84,385 steps) to scout for the best ideas and products of the show. At CES 2021, I walked about 20 feet to the fridge and 20 more feet to the bathroom and back to my home office repeatedly. And I did that again for CES 2022. Such is the consequence of playing it safe, for myself and those around me, while the Omicron variant spreads. When I saw ghost town pictures of CES events (like CES Unveiled, with a much smaller number of journalists eating free food), I had some severe FOMO because nobody had to wait in lines.
This year featured 2,200 exhibitors, up from 1,900 in 2021 but down from 4,000 (in-person) in 2020. I missed dragging my roller bag all over the place and catching Ubers in Las Vegas to go to nightclubs where I was too tired to enjoy anything. But here are the things that caught my eye. 1) BMW’s car that changes its color Above: BMW’s SUV can change its colors on the fly. I really would like to see some jaws drop when I change the color of my BMW from blue to pink. That day isn’t here yet, but BMW unveiled a car that can change its color because its exterior has a coating of E Ink. E Ink is the same kind of tech used in Amazon Kindle ebook readers, and can now cover the surface of an entire automobile. Videos of the BMW iX Flow featuring E Ink showed it could instantly change the color of an SUV from black to white or vice versa. Together with its My Modes interior decorations for lighting the inside of your car with digital art, this represents a way to customize your car to how you feel at any given moment. A white surface reflects a lot more sunlight than a black one. By implication, heating of the vehicle and passenger compartment as a result of strong sunlight and high outside temperatures can be reduced by changing the exterior to a light color. In cooler weather, a dark outer skin will help the vehicle to absorb noticeably more warmth from the sun. E Ink technology itself is extremely energy efficient. Unlike displays or projectors, the electrophoretic technology needs no energy to keep the chosen color state constant. It’s the stuff of science fiction. I want some of this coating for my house too. 2) Samsung energy-harvesting remote control Above: Samsung’s new remote control has multiple ways to collect energy. Last year, Samsung debuted a remote control that could tap solar energy for its power.
And the new Samsung energy-harvesting remote can now get power from the radio frequencies generated by things in your home such as Wi-Fi routers. It’s a lot like how radio frequency identification (RFID) gets its power from connecting with radio waves. The new remote has a solar cell to gain energy from your lights or sunlight coming through the windows. But it can also collect RF energy and convert it into electric energy to charge the remote. And so this remote will never need new batteries, and that is good for the planet. 3) John Deere’s self-driving tractor Above: John Deere has a self-driving tractor. Self-driving cars aren’t really tearing up our roads yet. But farms may be another story. John Deere showed off its self-driving tractor at CES 2022. It retrofitted its 8R tractor with cameras and sensors, so it can make its way through fields without a human driver. The tractor can spot obstructions and stop or otherwise let the farmer know what’s happening through a smartphone app. The task of navigating a field can be tricky, but it’s not nearly as hard as driving down roads with other humans. It uses GPS to help with automatic steering, and it doesn’t get tired. Still, it’s not completely displacing the human because it needs an overseer. Drivers have to refuel it and take it from one field to another. And so far it is focusing on tillage, or preparing soil for planting. The fully autonomous tractor will hit the fields later this year. 4) GAF Energy’s nailable solar shingles Above: You can nail solar tiles to your roof with GAF Energy’s product. Solar roofing provider GAF Energy debuted the Timberline Solar Energy Shingle, which integrates into roofing materials so you can nail it to the roof. One of the big time sinks and costs associated with solar roofs has been the need to house solar tiles on heavy platforms that have to be attached to roofs, making a solar roof installation more complicated than putting on a traditional roof. 
The company said the Timberline Solar is reliable, durable, cost-effective, easy to install, and aesthetically superior. It is less than a quarter-inch thick, and you can nail it just like you’re putting on a regular roof. Over five million new roofs are installed on U.S. homes each year. Let’s hope this brings down the cost of installation so that solar roofs become even more affordable and easier to get a return on investment on clean energy. 5) Massage Robotics Model MR-01 Every year at CES, I would see dozens of people waiting in lines to get massages in chairs. But those automated massages couldn’t give you the kind of relief a human can. But Massage Robotics is taking up that challenge with its massage robot table that has two robotic arms. It uses 6-axis collaborative robotic arms and cloud artificial intelligence. You can speak to it in English or Mandarin Chinese, and tell it where to massage you. It uses machine learning to recommend a massage routine for you, or you can personalize it. You might not trust a robot to be gentle enough, but it may be more comfortable than a stranger touching your body, the company said. Massage Robotics also says the robot is safe and won’t engage in any inappropriate contact. You can create a personalized massage and share it with friends. It only costs $310,000. 6) Sony PlayStation VR 2 headset Above: Sony’s PlayStation VR 2 headset. Sony unveiled its second-generation PlayStation VR virtual reality headset. That’s going to be a shot in the arm for the VR games market, which is already picking up thanks to Facebook’s Meta Quest 2 (formerly the Oculus Quest 2) and the need for people to escape from the pandemic. Sony’s new device will use OLED displays with a resolution of 2000 x 2040 pixels per eye. It has a refresh rate of 120 hertz and a 110-degree field of view. It will feature eye-tracking technology.
That means when you look up, the camera will detect your eye movement and make something happen in a game. It uses hand controllers with haptic feedback. It doesn’t have pricing or a date yet. The visuals will be better with 4K resolution and foveated rendering, which saves on computing by sharply rendering on the things you can see. Sony’s got an interesting game for it coming: Horizon: Call of the Mountains. 7) Sony Vision-S 02 electric vehicle I always wondered what a car would look like if Sony designed it. Hoping to stay ahead of other dreamy things like an Apple car, Sony unveiled its Vision-S concept for an electric car. Sony didn’t say all that much about it, except it uses the same EV/cloud platform as its Vision-S 01 prototype. This car is an SUV that looks more like a four-door. It has big entertainment screens in a variety of positions in the seven-seat vehicle. And Sony can pack in all of the electronics it wants into this car, which might be its reason for existing. I’d like to get one with a PlayStation 5 in it, preferably in the back seat. Sony hasn’t said when it’s coming or the price. 8) Samsung Freestyle Projector Above: Freestyle projector Samsung showed off a Freestyle projector that casts anything you want onto walls, or down from a light fixture onto a table. The two-pound device can project a 100-inch image on a wall, and you can send content to it with a Samsung Galaxy phone by tapping it. You can even personalize a wall with a fake window. And you can manage content with your voice. It’s like a video projector, smart speaker, and lighting system all in a single device. It can rotate so that you can easily cast images onto any surface. You can put things on the wall that suit your mood. 9) DeepOptics 32ºN reading sunglasses Above: DeepOptics 32ºN reading sunglasses I can’t get my hands on Mojo Vision’s augmented reality contact lenses yet. But my traditional glasses could sure use a replacement. 
And Deep Optics has come up with touch-sensitive LCD sunglasses that you can electronically adjust to suit your eyes. The 32ºN sunglasses let you switch over from seeing objects that are far away to reading glasses. You just swipe your finger to switch modes. You can use them as ordinary UV-blocking sunglasses. But when you want to see something upfront, you can slide a finger along the touch-sensitive arm of the glasses. You can program in your own prescription. These glasses don’t weigh you down, as they weigh less than 50 grams. One charge should get you through a full day. If the battery runs out, it reverts to distance mode. It will have a retail price of $449 and will be out later this year. 10) Kohler Stillness Bath Above: The Kohler Stillness Bath is based on the Japanese forest bath. This one was on my list last year, and it’s actually shipping this year. It’s a square tub that dazzles you with a combination light, fog, and aromatherapy. It’s based on the practice of Japanese forest bathing, or shinrin-yoku. I had plenty of this experience in Sony’s Ghost of Tsushima video game, but I could use this spa-like experience in my home in real life. When the water overflows, it spills out the side into a wooden moat. Kohler has products that I think are for supremely lazy people, but the company sees things like its automated bath filler as giving people back time in their days. I can imagine at the end of the day getting some relief from the world in a bath like this. It costs $8,000, and it will be available in the first quarter of 2022. 11) Noveto N1 headphones without headphones Above: Noveto N1 invisible headphones Noveto showed up with some invisible headphones. I haven’t tried these yet, but from afar it looks just like a soundbar. You pair it with your computer or smartphone via Bluetooth. Then it beams audio only to your ears. It emits ultrasound to your ears and converges into an audible pocket in your ears. 
No one else can hear what the soundbar is emitting except you. This way, you can get on a call without wearing a headset, and no one will be able to hear you the way they would with a speakerphone. It doesn’t have a price yet and is expected to ship later this year. "
13,730
2,023
"A 5-step framework for organizations to successfully achieve net-zero | VentureBeat"
"https://venturebeat.com/enterprise-analytics/a-5-step-framework-for-organizations-to-successfully-achieve-net-zero"
"Guest A 5-step framework for organizations to successfully achieve net-zero From devastating droughts to blazing wildfires, the effects of climate change are seen and felt daily. As these events intensify and become more frequent, climate change has become an increasingly urgent issue for lawmakers, individuals, and businesses to address. To avoid the most harmful impacts of climate change, global greenhouse gas emissions must be cut in half by 2030. By 2050, they must reach net-zero. With mere decades to spare, achieving net-zero has become imperative. Fortunately, organizations are catching on. In 2019, only 16% of the global economy had made a net-zero pledge. Just two years later, that number jumped to nearly 70%.
What it means to be net-zero The term “net-zero” describes a state where the amount of greenhouse gas emissions going into the atmosphere is reduced to such an extent (or offset by removals) that a “net zero” balance is created, preventing further global warming. For many organizations, net-zero means making deep cuts to the emissions they’re responsible for across their entire value chain, including emissions produced by their own processes, those purchased through electricity and heat, and those generated by their suppliers and end-users — which are estimated to account for 65% to 95% of a company’s carbon footprint. Some organizations choose to go further by finding ways to mitigate emissions beyond their own value chain. For example, they may educate their customer base and suppliers on how to reduce their own emissions or allow customers to purchase carbon offsets at checkout. While most organizations are ambitious about achieving net-zero, few have an effective action plan to do so, according to a recent survey of senior executives from 900 global organizations that have set net-zero targets. In fact, roughly half of organizations use their emissions data solely for mandatory reporting, with no impact on their business decision-making. This is despite the fact that 85% of organizations recognize the business value that emissions measurement provides, such as the ability to explore sustainable business models, avoid financial risk, and reduce operational inefficiencies. How can organizations leverage data to successfully achieve net-zero? To successfully reach their net-zero goals — and reach them faster — it is essential that organizations leverage emissions data in their business planning and decision-making, and not just for reporting.
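In practice, leveraging emissions data starts with the standard carbon-accounting calculation: multiply each activity's measured quantity by an emission factor and sum across sources. Here is a minimal sketch; the source names and factor values are illustrative placeholders, not real reference data (real factors come from published datasets such as those accompanying the GHG Protocol):

```python
# Hypothetical emission factors (kg CO2e per unit of activity).
# Values are placeholders for illustration only.
EMISSION_FACTORS = {
    "electricity_kwh": 0.5,    # per kWh purchased
    "natural_gas_m3": 2.0,     # per cubic meter burned
    "road_freight_tkm": 0.25,  # per tonne-kilometer shipped
}

# Hypothetical activity data collected from internal systems.
activity_data = {
    "electricity_kwh": 120_000,
    "natural_gas_m3": 8_000,
    "road_freight_tkm": 50_000,
}

def total_emissions_kg(activity: dict, factors: dict) -> float:
    """Sum activity quantity * emission factor across all reported sources."""
    return sum(qty * factors[src] for src, qty in activity.items())

print(total_emissions_kg(activity_data, EMISSION_FACTORS))  # 88500.0 kg CO2e
```

Once each team's footprint is computed this way, the same numbers can feed the internal carbon pricing systems and carbon KPIs the framework below calls for.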
Of those using emissions data in their decision-making, over half (53%) have experienced an acceleration in their net-zero journeys, and on average, these organizations have also achieved a 4.6% reduction in emissions per year. The following five-step framework can help businesses reach net-zero faster:
1. Establish an organization-wide framework for net-zero governance
Initiatives as large as climate change often require a major shift in mindset. Everyone at the company must start thinking differently about their individual carbon footprint, as well as the footprint of the projects they work on, and the team and vendors they work with. Management support is critical for success. If the C-suite views net-zero initiatives as a critical priority, it drives urgency and funding. With management at the helm, organizations must develop a data strategy and roadmap to support their net-zero goals and set up a governing body to oversee progress.
2. Establish a robust data management foundation
Data quickly becomes unwieldy and unactionable without the appropriate tools to measure, collect and synthesize it. Organizations will need to ensure that they have a robust data management foundation that automates emissions data collection at scale from multiple external and internal data sources. Collaboration with the wider ecosystem is also critical to source reliable, verifiable emissions data across the value chain. Next, organizations must build out storage, processing, and analytics capabilities for their emissions data. This enables them to consolidate this data into a single source of truth, automatically calculate their emissions footprint and generate predictive insights. Finally, they must invest in visualization and data reporting tools so that their stakeholders can act on those insights.
3. Drive usage of emissions data across business functions
To enable employees to act on emissions data insights, organizations must establish an internal carbon pricing system and invest in upskilling initiatives across the business. Internal carbon pricing systems help teams evaluate the carbon cost of their business decisions and weigh it in their decision-making. According to a recent survey, nearly four in ten organizations plan to set up such systems, but only 12% have already done so. While most organizations are eager to achieve net-zero, a majority of employees do not yet know what that means. To address this knowledge gap, organizations should develop net-zero awareness programs to bring current leadership and employees up-to-date and train them in key skills, such as carbon accounting. Moving forward, these training programs should be incorporated into onboarding for new employees.
4. Establish mechanisms to ensure accountability for decarbonization across the organization
For an organization to reach its net-zero targets, each team must be engaged in reducing its share of emissions. To prioritize emission reduction, organizations must define clear targets and carbon KPIs for each team, and these should be included when evaluating the performance of internal functions. Companies may go so far as to link compensation with carbon KPIs, for example, and award bonuses or adjust compensation for business leaders depending on their team’s ability to achieve emissions reduction targets.
5. Collaborate with the wider ecosystem to expand access to reliable emissions data
Just as achieving net-zero requires the effort of an entire organization, reaching it on a global scale will require the whole world. Organizations can contribute to the global cause by working to mitigate greenhouse gas emissions beyond their own operations. For example, they can participate in global campaigns and alliances to raise the level of net-zero ambition.
They can collaborate with their wider ecosystem — including their competitors — to establish industry-wide methodologies for emissions measurement. They can help their suppliers measure emissions by providing them with carbon accounting tools, training, and support. Finally, organizations can participate in data sharing partnerships, where they partner with external entities such as their competitors, suppliers, and customers to share and act on emissions data.

Building a net-zero future

Net-zero goals are ambitious but achievable. With leadership buy-in, a clearly defined strategy, and a strong foundation for collecting and acting on data, businesses can reach their net-zero goals at an accelerated pace and make a positive impact on the global ecosystem. Prasad Shyam leads Capgemini’s insights and data practice for manufacturing, automotive, life sciences, oil and gas, and energy and utilities industries in North America. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,731
2,022
"From DALL-E 2 to ChatGPT, covering AI's wild year | The AI Beat | VentureBeat"
"https://venturebeat.com/ai/from-dall-e-2-to-chatgpt-covering-ais-wild-year-the-ai-beat"
"From DALL-E 2 to ChatGPT, covering AI’s wild year | The AI Beat [Image credit: Sharon Goldman/Lensa app] It was my first week at VentureBeat, in mid-April. OpenAI had just released the new iteration of its text-to-image generator, DALL-E 2; our lead AI writer, Kyle Wiggers, had moved to TechCrunch before I could pick his brain; and I was panicking. I scrolled frantically through Twitter images of avocado chairs and astronauts riding horses on the moon, wondering what all the fuss was about. I had written about AI trends for over a decade, but it was at a sky-high level — think tips for the C-suite. Now, I belatedly realized how little I understood about the past decade of progress in AI, from machine learning (ML) and computer vision to natural language processing (NLP). Every beat I’ve ever covered has had a learning curve, of course. But the AI beat felt like Mount Everest.
“Give me six months,” I told everyone who reached out. “I’ll know a lot more in six months. For now, I really need you to start from the very beginning.”

A year of fast and furious AI development

Nine months and over 120 stories later, I can look back and see that not only was I climbing a steep learning curve, but the pace of news in the AI space was faster than in any industry I had covered before. Who could possibly keep up with this wild ride? I did what I could: I covered the biggest AI model news, from DALL-E 2 to Google’s Imagen, Meta’s Galactica to ChatGPT. But I completely dropped the ball on DeepMind’s AlphaFold. I managed to dig deep into DeepMind’s AlphaTensor, with its ability to create faster novel matrix multiplication algorithms, but was late to covering Stable Diffusion and its immediate open-source impact. I tackled AI legislation and regulation: There were the AI hiring tools under scrutiny, the new AI Bill of Rights and the upcoming EU AI Act. There were so many trends to cover, from emotion AI and the possibility of crippling AI cyberattacks to deepfakes and MLOps. There were the big brand efforts in AI, including Walmart, Coca-Cola and John Deere. There were the autumn big tech announcements and AI layoffs. And who could forget the summer focus on AI “sentience”? Finally, I had the opportunity to interview top AI leaders, including Geoffrey Hinton, Yann LeCun and Fei-Fei Li, about the 10th anniversary of the so-called deep learning “revolution.” It was a lot. But it only made me curious and eager to do more in 2023.

Thanks and a few humble resolutions

So many colleagues and industry leaders have helped me this year as I got my sea legs covering the AI beat.
There are far too many to name, but I certainly want to shout-out to VentureBeat managing editor Dan Muse, whom I was also lucky enough to work with when he was at CIO.com, as well as VentureBeat founder and CEO Matt Marshall — both gave me the chance to make the AI beat my own. Thank you to everyone I’ve spoken to on the vendor side, at the research labs, in the academic community, at the major consulting firms, within the largest enterprise companies — you’ve all been patient, supportive and helpful. To my fellow journalists covering AI — including top writers like Will Douglas Heaven, Melissa Heikkilä, Kate Kaye, Will Knight, Cade Metz, Kevin Roose, Khari Johnson and, of course, Kyle Wiggers — you may not know me, but your coverage has taught me so much. I learn from you every day! As for my New Year’s resolutions, I humbly promise to do my best to abide by Ben Shneiderman’s guidelines for journalists and editors about reporting on robots, AI and computers. I also resolve to always keep a beginner’s mind, ask the best questions I can, and, unlike ChatGPT, always admit when I just don’t know. A happy holiday season and New Year to all! Feel free to reach out at [email protected] or on Twitter: @sharongoldman VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
13,732
2,023
"Glüxkind unveils smart stroller Ella which uses AI for safer movement | VentureBeat"
"https://venturebeat.com/ai/gluxkind-unveils-smart-stroller-ella-which-uses-ai-for-safer-movement"
"Glüxkind unveils smart stroller Ella which uses AI for safer movement Glüxkind Technologies showed off its AI-based smart stroller Ella at the CES 2023 tech trade show in Las Vegas. Vancouver, Canada-based Glüxkind Technologies created Ella to support new parents on their daily adventures, be more inclusive and enable families to spend quality time together. It’s another example of tech — and AI in particular — infiltrating everyday products that normally don’t have much tech. I have to say I never expected to see a baby stroller with AI. Ella, Glüxkind’s AI stroller, is designed and optimized for daily life, not the showroom.
With Ella’s adaptive push and brake assistance, parents and caregivers alike can enjoy effortless walks regardless of terrain: uphill, downhill, and even when fully loaded with groceries and toys. All that stuff will be a walk in the park, the company said. When the child is not inside the stroller because they need a hug or a toddler wants to walk for a bit, parents can activate Ella’s intelligent hands-free strolling. Glüxkind Ella’s advanced parent-assist technology empowers parents to be present and focus on their kids without compromise or distracting multitasking, the company said. Glüxkind’s AI stroller offers baby-soothing features like Rock-My-Baby mode to help the little ones stay asleep or built-in white noise playback. Glüxkind CEO Kevin Huang said in a statement, “With Glüxkind’s Ella, we aim to make parenting easier, starting with the key piece of parenting equipment, the baby stroller. At Glüxkind, we believe in empowering our families with safe, convenient, and seamless products.” Huang added, “We want to embolden parents to explore and create their own paths on their parenting journey and be the best parents they can be.” Glüxkind is a Canadian baby technology startup founded in 2020 by Anne Hunger and Kevin Huang shortly after they became parents for the first time. “We’ve put a lot of hard work into this product and are excited to get it into more customers’ hands in 2023. The development has been driven by our own experience as new parents,” said Hunger, chief product officer, in a statement. “Supporting the next generations of parents with an incredible product is what motivates us every day.
Getting this recognition not only validates our effort but also enables us to reach more families who are looking for better products.” The name Glüxkind is inspired by the German word Glückskind. “Glück” means lucky and “Kind” translates to child. The word Glückskind is especially common in fairytales. The startup wants parents to experience just as many magical moments with their little ones while they are out and about. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. "
13,733
2,022
"How MIT is training AI language models in an era of quality data scarcity | VentureBeat"
"https://venturebeat.com/ai/how-mit-is-training-ai-language-models-in-an-era-of-quality-data-scarcity"
"How MIT is training AI language models in an era of quality data scarcity Improving the robustness of machine learning (ML) models for natural language tasks has become a major artificial intelligence (AI) topic in recent years. Large language models (LLMs) have always been one of the most trending areas in AI research, backed by the rise of generative AI and companies racing to release architectures that can create impressively readable content, even computer code. Language models have traditionally been trained using online texts from sources such as Wikipedia, news stories, scientific papers and novels. However, in recent years, the tendency has been to train these models on increasing amounts of data in order to improve their accuracy and versatility. But, according to a team of AI forecasters, there is a concern on the horizon: we may run out of data to train them on.
Researchers from Epoch emphasize in a study that high-quality data generally used for training language models may be depleted as early as 2026. As developers create more sophisticated models with superior capabilities, they must gather more texts to train them on, and LLM researchers are now increasingly concerned about running out of quality data. Kalyan Veeramachaneni, a principal research scientist in the MIT Information and Decision Systems laboratory and leader of the lab’s Data-to-AI group, may have found the solution. In a paper on Rewrite and Rollback (“R&R: Metric-Guided Adversarial Sentence Generation”) recently published in the findings of AACL-IJCNLP 2022, the proposed framework can tweak and turn low-quality data (from sources such as Twitter and 4chan) into high-quality data (such as that from sources with editorial filters, such as Wikipedia and industry websites), increasing the amount of the correct type of data to test and train language models on.

Data scarcity looming large

Language AI researchers generally divide the data they use to train models into high-quality and low-quality data. High-quality data is generally defined as coming from sources that “have passed usefulness or quality filters,” as noted by the Epoch study. In other words, it has been reviewed for editorial quality, either professionally or through peer review (in the case of scientific papers, published novels, Wikipedia, etc.) or positive engagement by many users (such as for filtered web content). Data from low-quality categories includes non-filtered, user-generated text such as social media postings or comments on websites such as 4chan, and these instances far outweigh those rated high quality.
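The "usefulness or quality filters" mentioned above are typically simple heuristics applied at scale before training. A generic sketch of the idea follows; the specific thresholds are illustrative assumptions, not rules taken from the Epoch study.

```python
# Generic illustration of a heuristic "quality filter" used to separate
# high-quality from low-quality training text. The thresholds below are
# illustrative assumptions, not taken from the Epoch study.

def passes_quality_filter(text, min_words=5, min_alpha_ratio=0.6):
    """Keep text only if it is long enough and mostly alphabetic."""
    words = text.split()
    if len(words) < min_words:
        return False  # too short to be useful training text
    alpha = sum(c.isalpha() for c in text)
    if alpha / max(len(text), 1) < min_alpha_ratio:
        return False  # mostly symbols or digits: likely spam or markup debris
    return True

docs = [
    "lol!!! 111 !!",                                         # low quality
    "The committee reviewed the proposal and approved it.",  # high quality
]
print([passes_quality_filter(d) for d in docs])  # [False, True]
```

Real pipelines layer on deduplication, language identification and model-based scoring, but this captures why filtered, high-quality text is so much scarcer than raw user-generated text.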
Training LLMs with flawed, low-quality datasets can lead to many issues:

- Mislabeled examples in the dataset introduce noise into the training, which can confuse the model and decrease the model quality.
- Spurious correlations (e.g., sentences with certain words always getting one particular label) encourage the model to pick up incorrect shortcuts and lead it to make mistakes in real scenarios.
- Data bias (e.g., a dataset containing text only from a specific group of people) makes the model perform poorly on particular inputs.

High-quality datasets can alleviate these issues. Since ML models rely on training data to learn how to make predictions, data quality dramatically impacts the quality of the model. As a result, researchers often only train models with high-quality data, as they want their models to re-create superior language fluency. Training LLMs using high-quality text samples enables the model to understand the intricacies and complexity inherent in every language. This method has yielded outstanding results for complex language models like GPT-3. Veeramachaneni says that aiming for a more intelligent and articulate text generation can also be helpful in training LLMs on real-life human discourse. “Text from your average social media post, blog, etc., may not achieve this high quality, which brings down the overall quality of the training set,” Veeramachaneni told VentureBeat. “We thought, could we use existing high-quality data to train LLMs (which we now already have access to LLMs trained on high-quality data) and use those LLMs to raise the quality of the other data?”

MIT addresses current challenges in LLM development

Veeramachaneni explained that training LLMs requires massive amounts of training data and computing resources, which are only available to tech giants. This means most individual researchers must depend on the LLMs generated and released by tech giants rather than making their own.
He said that despite LLMs becoming larger and requiring more training data, the bottleneck is still computational power most of the time. “Annotated high-quality data for downstream tasks [is] hard to obtain. Even if we design a method to create higher-quality sentences from lower-quality ones, how would we know the method did the job correctly? Asking humans to annotate data is expensive and not scalable.” “So, R&R provides a method to use LLMs reliably to improve the quality of sentences,” he said. Veeramachaneni believes that, in terms of model quality, current LLMs need to improve their ability to generate long documents. “Current models can answer questions with a few sentences but cannot write a fictional story with a theme and a logical plot. Architecture improvement is necessary for LMs to handle longer text,” said Veeramachaneni. “There are also more and more concerns about the potential negative impacts of LLMs. For example, LLMs may remember personal information from the training data and leak it when generating text. This issue is hard to detect, as most LLMs are black boxes.” Veeramachaneni and the research team in MIT’s Data-to-AI group aim to solve such issues through their Rewrite and Rollback framework.

A new method of adversarial generation from the MIT team

In the paper “R&R: Metric-Guided Adversarial Sentence Generation,” the research team proposes an adversarial framework that can generate high-quality text data by optimizing a critique score that combines fluency, similarity and misclassification metrics. R&R generates high-quality adversarial examples by capturing text data from different sources and rephrasing them, such as tweaking a sentence in various ways to develop a set of alternative sentences. “Given 30K words in its vocabulary, it can produce an arbitrary number of sentences.
Then it winnows these down to the highest-quality sentences in terms of grammatical quality, fluency and semantic similarity to the original sentence,” Veeramachaneni told VentureBeat. To do this, it uses an LLM trained on high-quality sentences to remove candidate sentences that are not grammatically correct or fluent. First, it attempts to rewrite the whole sentence, with no limitation on how many words are changed; then it tries to roll back some edits to achieve a minimal set of modifications. “Because text classifiers generally need to be trained on human-labeled data, they are often trained with small datasets, meaning they can easily be fooled and misclassify sentences. We used R&R to generate many of these sentences that could fool a text classifier and therefore could be used to train and improve it,” explained Veeramachaneni. It’s also possible to use R&R to transform a low-quality or poorly written sentence into a better-quality sentence. Such a method can have several applications, from editing assistance for human writing to creating more data for LLMs. The stochastic rewrite feature allows the tool to explore a larger text space, and the rollback feature allows it to make meaningful changes with minimal edits. This feature is powerful because it explores many options and can find multiple different adversarial examples for the same sentence. As a result, R&R can generate fluent sentences that are semantically similar to a target sentence without human intervention. “The primary use case of R&R is to conduct adversarial attacks on text classifiers,” said Veeramachaneni. “Given a sentence, it can find similar sentences where the classifier misclassified.
R&R-generated sentences can help expand these training sets, thus improving text classifiers’ quality, which may also increase their potential applications.” Talking about the challenges faced while developing the R&R model, Veeramachaneni told VentureBeat that traditional methods for finding alternative sentences stick to changing one word at a time. When designing the rewrite step, the team initially developed the technique to mask only one word — that is, to change one word at a time. Doing so, they found that this led to a change of meaning from that of the original sentence. “Such a design led to the model getting stuck because there are not many options for a single masked position,” he said. “We overcome this by masking multiple words in each step. This new design also enabled the model to change the length of the text. Hence we introduced the rollback step, which eliminates unnecessary perturbations/changes.” The research team says that R&R can also help people change their writing in pursuit of a specific goal: for instance, it can be used to make a sentence more persuasive, more concise, etc. Both automatic and human evaluation of the R&R framework showed that the proposed method succeeds in optimizing the automatic similarity and fluency metrics to generate adversarial examples of higher quality than previous methods.

The future of LLMs and generative AI

Veeramachaneni believes that LLMs will push the boundaries for human discourse in the near future and hopes to see more applications of LLMs in 2023. “LLMs will be able to quickly and easily summarize and provide existing information. As a result, what we write and our interactions with each other will have to be more meaningful and insightful. It is progress,” he said. Veeramachaneni further explained that LLMs are currently only being used to summarize text or answer questions, but there are many more possible applications.
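The rewrite-then-rollback loop described in this section can be sketched schematically. To be clear, this is not the authors' implementation: the synonym table and the toy classifier below are invented stand-ins for R&R's trained fluency, similarity and misclassification metrics, used only to show how a rollback pass trims a multi-word rewrite down to a minimal adversarial change.

```python
# Schematic sketch of the rewrite-and-rollback idea. The real R&R system
# scores candidates with trained metric models; here a toy classifier and a
# tiny synonym table stand in for all of that, so this illustrates the
# control flow only, not the paper's models.

SYNONYMS = {"really": "truly", "good": "great", "movie": "film"}  # toy vocabulary

def rewrite(words, synonyms, n_masks=2):
    """Rewrite step: change several words at once (not just one)."""
    out, changed = list(words), 0
    for i, w in enumerate(out):
        if changed == n_masks:
            break
        if w in synonyms:
            out[i] = synonyms[w]
            changed += 1
    return out

def rollback(original, candidate, still_adversarial):
    """Rollback step: revert every edit not needed to fool the classifier."""
    out = list(candidate)
    for i, (a, b) in enumerate(zip(original, candidate)):
        if a != b:
            trial = list(out)
            trial[i] = a  # try undoing this single edit
            if still_adversarial(trial):
                out = trial  # revert kept: same effect with fewer changes
    return out

def toy_classifier_fooled(words):
    # Stand-in for "the target text classifier now misclassifies this sentence."
    return "great" in words

original = "a really good movie".split()
candidate = rewrite(original, SYNONYMS, n_masks=3)  # ['a', 'truly', 'great', 'film']
minimal = rollback(original, candidate, toy_classifier_fooled)
print(minimal)  # ['a', 'really', 'great', 'movie'] -- only one edit survives
```

The rollback pass is what delivers the "minimal set of modifications" the article describes: two of the three rewrites turn out to be unnecessary and are undone.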
“As the potential of these tools is continually realized, we expect a usage boom. The recent release of ChatGPT by OpenAI has demonstrated good text-generation capability. We can expect tech giants to compete on larger models and release larger models with better performance,” said Veeramachaneni. “At the same time, we expect serious evaluations of LLMs’ limitations and vulnerabilities. It is clear that LLMs can produce meaningful, readable sentences. Now, we expect people to begin focusing on evaluating the factual information contained in the generated text.” "
13,734
2,023
"Pressure on Google as Microsoft plans to add ChatGPT to Bing | VentureBeat"
"https://venturebeat.com/ai/pressure-on-google-as-microsoft-plans-to-add-chatgpt-to-bing"
"Pressure on Google as Microsoft plans to add ChatGPT to Bing While it may sound intriguing, there are a number of “buts” around the news, first reported by The Information, that Microsoft is planning to add OpenAI’s generative AI-powered ChatGPT to its Bing search engine rather than just showing link results. The new feature, with an eye toward more full-sentence answers to queries, could reportedly launch by the end of March.

Google still fields vast majority of search queries

First of all, Google retained an 83% share of the search market in 2022, while Bing can only boast 9% of search volume. While Microsoft may certainly wish to challenge Google, it seems unlikely that ChatGPT would make a big dent. In addition, the current revenue model for search rests on link results, so there are questions as to how Bing would monetize its ChatGPT function.
But digital marketing expert Tim Peter pointed out on Twitter that Microsoft’s advantage is that they can subsidize the cost of ChatGPT in Bing via their other revenue streams. “Google makes essentially all its money from ads,” he tweeted. “Without that ad revenue, they’re a much less valuable company.”

Google also leads in LLM innovation

And Google remains a leader in large language models (LLMs), added Emad Mostaque, founder of Stability AI, which means they are a force to be reckoned with when it comes to generative AI innovation. Still, he added that Google is “not communicating this well to shareholders and the market and being overly cautious here.”

ChatGPT reportedly Google’s ‘code red’

Google was already under pressure in mid-December, when CNBC reported that employees raised concerns at a recent all-hands meeting that the company was losing its competitive edge in artificial intelligence (AI) given ChatGPT’s quick rise. And the New York Times reported a few days later that ChatGPT is considered a “code red” for Google’s search business. For Microsoft, the plans to add ChatGPT to Bing certainly show that its 2019 $1 billion investment in OpenAI is paying off.

ChatGPT struggling with trust issues

But it is worth reiterating that none of this search drama addresses the hidden danger that literally underlies ChatGPT: that its results cannot be fully trusted for search queries. In December, even OpenAI CEO Sam Altman was forced to admit ChatGPT’s risks. “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness,” he tweeted. “It’s a mistake to be relying on it for anything important right now.
It’s a preview of progress; we have lots of work to do on robustness and truthfulness.” "
13,735
2,023
"Top AI conference bans ChatGPT in paper submissions (and why it matters) | VentureBeat"
"https://venturebeat.com/ai/thats-so-meta-ml-conference-debates-use-of-chatgpt-in-papers"
"Top AI conference bans ChatGPT in paper submissions (and why it matters) A machine learning conference debating the use of machine learning? While that might seem so meta, in its call for paper submissions on Monday, the International Conference on Machine Learning did, indeed, note that “papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.” It didn’t take long for a brisk social media debate to brew, in what may be a perfect example of what businesses, organizations and institutions of all shapes and sizes, across verticals, will have to grapple with going forward: How will humans deal with the rise of large language models that can help communicate — or borrow, or expand on, or plagiarize, depending on your point of view — ideas?
Arguments for and against the use of ChatGPT

As a Twitter debate grew louder over the past two days, a variety of arguments for and against the use of LLMs in ML paper submissions emerged. “So medium and small-scale language models are fine, right?” tweeted Yann LeCun, chief AI scientist at Meta, adding, “I’m just asking because, you know… spell checkers and predictive keyboards are language models.”

And Sebastian Bubeck, who leads the ML Foundations team at Microsoft Research, called the rule “shortsighted,” tweeting that “ChatGPT and variants are part of the future. Banning is definitely not the answer.”

Ethan Perez, a researcher at Anthropic AI, tweeted that “This rule disproportionately impacts my collaborators who are not native English speakers.” Silvia Sellan, a University of Toronto Computer Graphics and Geometry Processing PhD candidate, agreed, tweeting: “Trying to give the conference chairs the benefit of the doubt but I truly do not understand this blanket ban. As I understand it, LLMs, like Photoshop or GitHub copilot, is a tool that can have both legitimate (e.g., I use it as a non-native English speaker) and nefarious uses…”

ICML conference responds to LLM ethics rule

Finally, yesterday the ICML clarified its LLM ethics policy: “We (Program Chairs) have included the following statement in the Call for Papers for ICML 2023: ‘Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.’ This statement has raised a number of questions from potential authors and led some to proactively reach out to us. 
We appreciate your feedback and comments and would like to clarify further the intention behind this statement and how we plan to implement this policy for ICML 2023.”

[TL;DR] The response clarified that: “The Large Language Model (LLM) policy for ICML 2023 prohibits text produced entirely by LLMs (i.e., ‘generated’). This does not prohibit authors from using LLMs for editing or polishing author-written text. The LLM policy is largely predicated on the principle of being conservative with respect to guarding against potential issues of using LLMs, including plagiarism. The LLM policy applies to ICML 2023. We expect this policy may evolve in future conferences as we understand LLMs and their impacts on scientific publishing better.”

The rapid progress of LLMs such as ChatGPT, the statement said, “often comes with unanticipated consequences as well as unanswered questions,” including whether generated text is considered novel or derivative, as well as issues around ownership. “It is certain that these questions, and many more, will be answered over time, as these large-scale generative models are more widely adopted,” the statement said. “However, we do not yet have any clear answers to any of these questions.”

What about use of ChatGPT attribution?

Margaret Mitchell, chief ethics scientist at Hugging Face, agreed that there is a primary concern around plagiarism, but suggested putting that argument aside, as “what counts as plagiarism” deserves “its own dedicated discussion.” However, she rejected arguments that ChatGPT is not an author, but a tool. “With much grumpiness, I believe this is a false dichotomy (they are not mutually exclusive: can be both) and seems to me intentionally feigned confusion to misrepresent the fact that it’s a tool composed of authored content by authors,” she told VentureBeat by email. Moving on from the arguments, she believes using LLM tools with attribution could address ICML concerns. 
“To your point about these systems helping with writing by non-native speakers, there are very good reasons to do the opposite of what ICML is doing: Advocating for the use of these tools to support equality and equity across researchers with different writing abilities and styles,” she explained.

“Given that we do have some norms around recognizing contributions from specific people already established, it’s not too difficult to extend these norms to systems derived from many people,” she continued. “A tool such as ChatGPT could be listed as something like an author or an acknowledged peer.”

The fundamental difference with attributing ChatGPT (and similar models) is that at this point, unique people cannot be recognized — only the system can be attributed. “So it makes sense to develop strategies for attribution that take this into account,” she said. “ChatGPT and similar models don’t have to be a listed author in the traditional sense. Their authorship attribution could be (e.g.) a footnote on the main page (similar to notes on affiliations), or a dedicated, new kind of byline, or <etc>.”

Grappling with an LLM-powered future

Ultimately, said Mitchell, the ML community need not be held back by the traditional way we view authors. “The world is our oyster in how we recognize and attribute these new tools,” she said. Will that be true as other non-ML organizations and institutions begin to grapple with these same issues? Hmm. I think it’s time for popcorn (munch munch).

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,736
2,023
"Two years after DALL-E debut, its inventor is "surprised" by impact | VentureBeat"
"https://venturebeat.com/ai/two-years-after-dall-e-debut-its-inventor-is-surprised-by-impact"
"Two years after DALL-E debut, its inventor is “surprised” by impact

[Image: OpenAI researcher, DALL-E inventor and DALL-E 2 co-inventor Aditya Ramesh. Image courtesy of OpenAI]

Before DALL-E 2, Stable Diffusion and Midjourney, there was just a research paper called “Zero-Shot Text-to-Image Generation.” With that paper and a controlled website demo, on January 5, 2021 — two years ago today — OpenAI introduced DALL-E, a neural network that “creates images from text captions for a wide range of concepts expressible in natural language.” (Also today: OpenAI just happens to reportedly be in talks for a “tender offer that would value it at $29 billion.”)

The 12 billion-parameter version of Transformer language model GPT-3 was trained to generate images from text descriptions, using a dataset of text–image pairs. 
VentureBeat reporter Khari Johnson described the name as “meant to evoke the artist Salvador Dali and the robot WALL-E” and included a DALL-E generated illustration of a “baby daikon radish in a tutu walking a dog.”

Since then, things have moved fast, according to OpenAI researcher, DALL-E inventor and DALL-E 2 co-inventor Aditya Ramesh. That is more than a bit of an understatement, given the dizzying pace of development in the generative AI space over the past year. Then there was the meteoric rise of diffusion models, which were a game-changer for DALL-E 2, released last April, and its open-source counterparts, Stable Diffusion and Midjourney.

“It doesn’t feel like so long ago that we were first trying this research direction to see what could be done,” Ramesh told VentureBeat. “I knew that the technology was going to get to a point where it would be impactful to consumers and useful for many different applications, but I was still surprised by how quickly.”

Now, generative modeling is approaching the point where “there’ll be some kind of iPhone-like moment for image generation and other modalities,” he said. “I’m excited to be able to build something that will be used for all of these applications that will emerge.”

Original research developed in conjunction with CLIP

The DALL-E 1 research was developed and announced in conjunction with CLIP (Contrastive Language-Image Pre-training), a separate model based on zero-shot learning that was essentially DALL-E’s secret sauce. Trained on 400 million pairs of images with text captions scraped from the internet, CLIP was able to be instructed using natural language to perform classification benchmarks and rank DALL-E results.

Of course, there were plenty of early signs that text-to-image progress was coming. 
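The re-ranking role CLIP played for DALL-E, scoring candidate images against a caption in a shared embedding space and keeping the best matches, can be sketched in a few lines. To be clear, this is a toy illustration only: the three-dimensional vectors below are made up, whereas real CLIP embeddings come from trained image and text encoders.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_images(text_embedding, image_embeddings):
    # Return candidate indices ordered best-match-first: the
    # re-ranking step CLIP performed on DALL-E's output samples.
    return sorted(range(len(image_embeddings)),
                  key=lambda i: cosine(text_embedding, image_embeddings[i]),
                  reverse=True)

caption = [0.9, 0.1, 0.0]                 # toy embedding of the caption
candidates = [[0.0, 1.0, 0.0],            # weak match
              [0.8, 0.2, 0.1],            # strong match
              [0.5, 0.5, 0.0]]            # middling match
print(rank_images(caption, candidates))   # strongest candidate's index first
```

The same scoring, run against a list of label captions instead of candidate images, is essentially how CLIP performs zero-shot classification.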
“It has been clear for years that this future was coming fast,” said Jeff Clune, associate professor, computer science at the University of British Columbia. In 2016, when his team produced what he says were the first synthetic images that were hard to distinguish from real images, Clune recalled speaking to a journalist. “I was saying that in a few years, you’ll be able to describe any image you want and AI will produce it, such as ‘Donald Trump taking a bribe from Putin with a smirk on his face,’” he said.

Generative AI has been a core segment of AI research since the beginning, said Nathan Benaich, general partner at Air Street Capital. “It’s worth pointing out that research like the development of generative adversarial networks (GANs) in 2014 and DeepMind’s WaveNet in 2016 were already starting to show how AI models could generate new images and audio from scratch, respectively,” he told VentureBeat in a message.

Still, the original DALL-E paper was “quite impressive at the time,” added futurist, author and AI researcher Matt White. “Although it was not the first work in the area of text-to-image synthesis, OpenAI’s approach of promoting their work to the general public and not just in AI research circles garnered them a lot of attention and rightfully so,” he said.

Pushing DALL-E research as far as possible

From the start, Ramesh says his main interest was to push the research as far as possible. “We felt like text-to-image generation was interesting because as humans, we’re able to construct a sentence to describe any situation that we might encounter in real life, but also fantastical situations or crazy scenarios that are impossible,” he said. 
“So we wanted to see if we trained a model to just generate images from text well enough, whether it could do the same things that humans can as far as extrapolation.”

One of the main research influences on the original DALL-E, he added, was VQ-VAE, a technique pioneered by Aaron van den Oord, a DeepMind researcher, to break up images into tokens that are like the tokens on which language models are trained. “So we can take a transformer like GPT, that is just trained to predict each word after the next, and augment its language tokens with these additional image tokens,” he explained. “That lets us apply the same technology to generate images as well.”

People were surprised by DALL-E, he said, because “it’s one thing to see an example of generalization in language models, but when you see it in image generation, it’s just a lot more visceral and impactful.”

DALL-E 2’s move towards diffusion models

But by the time the original DALL-E research was published, Ramesh’s co-authors for DALL-E 2, Alex Nichol and Prafulla Dhariwal, were already working on using diffusion models in a modified version of GLIDE (a new OpenAI diffusion model). This led to DALL-E 2 being quite a different architecture from the first iteration of DALL-E. As Vaclav Kosar explained, “DALL-E 1 uses discrete variational autoencoder (dVAE), next token prediction, and CLIP model re-ranking, while DALL-E 2 uses CLIP embedding directly, and decodes images via diffusion similar to GLIDE.”

“It seemed quite natural [to combine diffusion models with DALL-E] because there are many advantages that come with diffusion models — inpainting being the most obvious feature that’s kind of really clean and elegant to implement using diffusion,” said Ramesh.

Incorporating one particular technique, used while developing GLIDE, into DALL-E 2 — classifier-free guidance — led to a drastic improvement in caption-matching and realism, he explained. 
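The classifier-free guidance update itself is compact enough to write out. The sketch below is a generic formulation of the technique, not OpenAI's code: at each denoising step the model's noise prediction is computed twice, once with the caption and once with it dropped, and the final prediction extrapolates away from the unconditional one. The vectors here are made-up stand-ins for real model outputs.

```python
def cfg_predict(eps_uncond, eps_cond, guidance_scale):
    # Classifier-free guidance: extrapolate from the unconditional
    # noise prediction toward the caption-conditioned one.
    # guidance_scale = 1.0 recovers the plain conditional model;
    # larger values push samples harder toward the caption.
    return [u + guidance_scale * (c - u)
            for u, c in zip(eps_uncond, eps_cond)]

uncond = [0.2, -0.1, 0.0]   # model output with the caption dropped
cond   = [0.5,  0.3, -0.2]  # model output with the caption
print(cfg_predict(uncond, cond, 3.0))
```

The trade-off reported for GLIDE-style models is that raising the scale improves caption-matching and realism at the cost of sample diversity.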
“When Alex first tried it out, none of us were expecting such a drastic improvement in the results,” he said. “My initial expectation for DALL-E 2 was that it would just be an update over DALL-E, but it was surprising to me that we got it to the point where it’s already starting to be useful for people.”

When the AI community and the general public first saw the image output of DALL-E 2 on April 6, 2022, the difference in image quality was, for many, jaw-dropping.

“Competitive, exciting, and fraught”

DALL-E’s release in January 2021 was the first in a wave of text-to-image research that builds from fundamental advances in language and image processing, including variational auto-encoders and autoregressive transformers, Margaret Mitchell, chief ethics scientist at Hugging Face, told VentureBeat by email. Then, when DALL-E 2 was released, “diffusion was a breakthrough that most of us working in the area did not see, and it really upped the game,” she said.

These past two years since the original DALL-E research paper have been “competitive, exciting, and fraught,” she added. “The focus on how to model language and images came at the expense of how best to acquire data for the model,” she said, pointing out that individual rights and consent are “all but abandoned” in modern-day text-to-image advances. Current systems are “essentially stealing artists’ concepts without providing any recourse for the artists,” she concluded.

The fact that DALL-E did not make its source code available also led others to develop open-source text-to-image options that made their own splashes by the summer of 2022. 
The original DALL-E was “interesting but not accessible,” said Emad Mostaque, founder of Stability AI, which released the first iteration of the open-source text-to-image generator Stable Diffusion in August, adding that “only the models my team trained were [open-source].” Mostaque added that “we started aggressively funding and supporting this space in summer of 2021.”

Going forward, DALL-E still has plenty of work to do, says White — even as it teases a new iteration coming soon. “DALL-E 2 suffers from consistency, quality and ethical issues,” he said. It has issues with associations and composability, he pointed out, so a prompt like “a brown dog wearing a red shirt” can produce results where the attributes are transposed (i.e., a red dog wearing a brown shirt, a red dog wearing a red shirt, or different colors altogether). In addition, he added, DALL-E 2 still struggles with face and body composition, and with generating text in images consistently — “especially longer words.”

The future of DALL-E and generative AI

Ramesh hopes that more people learn how DALL-E 2’s technology works, which he thinks will lead to fewer misunderstandings. “People think that the way the model works is that it sort of has a database of images somewhere, and the way it generates images is by cutting and pasting together pieces of these images to create something new,” he said. “But actually, the way it works is a lot closer to a human where, when the model is trained on the images, it learns an abstract representation of what all of these concepts are.”

The training data “isn’t used anymore when we generate an image from scratch,” he explained. “Diffusion models start with a blurry approximation of what they’re trying to generate, and then over many steps, progressively add details to it, like how an artist would start off with a rough sketch and then slowly flesh it out over time.”

And helping artists, he said, has always been a goal for DALL-E. 
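Ramesh's rough-sketch analogy can be caricatured in a few lines of code. This is not a real diffusion sampler (there is no learned noise-prediction network and no noise schedule); it only shows the shape of iterative refinement he describes: start from noise and trust the model's current estimate of the clean image a little more at each step.

```python
def refine(sample, predict_clean, steps=10):
    # Toy illustration of iterative refinement: blend the sample
    # toward the model's current estimate of the clean image,
    # weighting the estimate more heavily as refinement proceeds.
    for t in range(steps):
        estimate = predict_clean(sample)
        weight = (t + 1) / steps   # trust the estimate more each step
        sample = [(1 - weight) * s + weight * e
                  for s, e in zip(sample, estimate)]
    return sample

target = [0.7, 0.2, 0.9]           # stands in for "the clean image"
noise = [0.0, 1.0, 0.5]            # the blurry starting point
result = refine(noise, lambda s: target)
print(result)                      # converges to the target by the last step
```

In a real diffusion model, `predict_clean` would be a trained network whose estimate itself sharpens as the sample becomes less noisy, which is what makes the process more than interpolation.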
“We had aspirationally hoped that these models would be a kind of creative copilot for artists, similar to how Codex is like a copilot for programmers — another tool you can reach for to make many day-to-day tasks a lot easier and faster,” he said. “We found that some artists find it really useful for prototyping ideas — whereas they would normally spend several hours or even several days exploring some concept before deciding to go with it, DALL-E could allow them to get to the same place in just a few hours or a few minutes.”

Over time, Ramesh said he hopes that more and more people get to learn and explore, both with DALL-E and with other generative AI tools. “With [OpenAI’s] ChatGPT, I think we’ve drastically expanded the outreach of what these AI tools can do and exposed a lot of people to using it,” he said. “I hope that over time people who want to do things with our technology can easily access it through our website and find ways to use it to build things that they’d like to see.”

[Updated by editor on 1/5/23 at 12:27 pm PT]"
13,737
2,016
"Lumus raises $45 million to make wearable augmented reality displays | VentureBeat"
"https://venturebeat.com/business/lumus-raises-45-million-to-make-wearable-augmented-reality-displays"
"Lumus raises $45 million to make wearable augmented reality displays

Lumus has completed a $45 million funding round for its augmented reality displays for smartglasses. Lumus previously announced it had raised $15 million, and now it is announcing an additional $30 million as part of the same round. Quanta Computer, one of the biggest Taiwanese laptop makers, led the round, with additional participation from HTC and other strategic investors. Shanda Group and Crystal-Optech also participated.

Rehovot, Israel-based Lumus makes the optical engine that powers AR solutions. AR is expected to become a $90 billion market by 2020, according to tech advisor Digi-Capital. Market researcher IDC predicts that 30 percent of Global 2000 companies will begin incorporating AR and virtual reality (VR) into their marketing programs in 2017.

[Image — Above: Lumus’ augmented reality glasses.]
“This new funding will help Lumus continue to scale up our R&D and production in response to the growing demand from companies creating new augmented reality and mixed reality applications, including consumer electronics and smart eyeglasses,” said Lumus CEO Ben Weinberger in a statement. “We also plan to ramp up our marketing efforts in order to realize and capture the tremendous potential of our unique technology to re-envision reality in the booming AR industry.”

Founded in 2000, Lumus is on a mission to create optics that transform the way people interact with their reality. The company is working on optical technology for see-through wearable displays and serves multiple AR vertical markets, including health care, manufacturing logistics, avionics and, more recently, consumer products. The Lumus solution is based on its patented Light-guide Optical Element (LOE) waveguide, which enables the smallest-dimension eyewear for any given field of view.

C.C. Leung, vice chairman and president of Quanta, said in a statement: “AR/VR is well aligned with our growth strategy, and we’re pleased to invest in the Lumus optics solution for augmented reality. This is pioneering technology, and we have great confidence in Lumus as an innovator and industry leader for transparent optical displays in the AR market.”

“We are very committed to AR/VR,” said David Chang, chief operating officer of HTC, in a statement. “Our current investment is aligned with HTC’s natural extension into augmented reality, following our successful Vive launch earlier this year.”

Lumus technology enables the production of wearable eyeglass displays that are compact, comfortable, and fashionable. 
The Lumus near-to-eye transparent display technology consists of a unique lens that contains an array of ultra-thin transparent reflectors — the patented Light-Guide Optical Element — and a mini-projector, also patented, that injects an image into the lens. These two elements combine to create a wide field of view, true color, daylight brightness, and a see-through display.

Lumus was represented by lawyers Jonathan Feuchtwanger and Ido Erlich of the law firm Naschitz Brandes Amir & Co. The company has 70 employees."
13,738
2,022
"Blockchain interoperability is essential to avoid the flaws of Web2 | VentureBeat"
"https://venturebeat.com/2022/05/03/blockchain-interoperability-is-essential-to-avoid-the-flaws-of-web2"
"Blockchain interoperability is essential to avoid the flaws of Web2

Blockchains are not merely storage and communication protocols. Each of them has a history, community and culture worth protecting. Some communities are more focused on creating “sound money” alternatives to current fiat systems. Others are working hard to maximize raw computing power or storage capacity. Some blockchains allow users to collect basketball shots and other sports moments. Others are emerging as metaverses for developing a particular cultural or gaming culture.

We need to nurture spaces for these communities to grow and innovate. Like borders, languages and currencies, blockchain designs allow cultural particularities to thrive instead of being absorbed by the more powerful neighbor. We need to promote diversity. 
And, just like in the real world, we must also encourage dialogue between communities. We must invest in bridges that allow blockchain ecosystems to communicate, as long as these bridges emerge organically to serve the needs of their users, rather than top-down as a result of government-sponsored standards.

Interoperability holds the key to a multichain world

Blockchain interoperability is not a set rule book. It refers to a broad range of techniques that allow different blockchains to listen to each other, transfer digital assets and data between one another and enable better collaboration. There are decentralized cross-chain bridges that facilitate the transfer of data and assets between Ethereum, Bitcoin, EOS, Binance Smart Chain, Litecoin and other blockchains.

Currently, the main use cases of interoperability are: first, transmitting a given cryptocurrency’s liquidity from one blockchain to another; second, allowing users to trade an asset on one chain for another asset on another chain; and third, enabling users to borrow assets on one chain by posting tokens or NFTs as collateral on another chain.

Each bridging technique makes its own design compromises in terms of convenience, speed, security and trust assumptions. Each blockchain operates on different sets of rules, and bridges serve as a neutral zone where users can switch between one and the other. This greatly enhances the experience for users. Still, for end-users these trade-offs may not be easy to understand. Furthermore, the risks associated with each bridge technique may compound whenever an asset crosses several bridges to reach the hands of the end-user.

Call to action

As members of the Web3 ecosystem, we share a responsibility not just to promote a multichain world, but also to make it safer as more users begin to enter Web3. 
Everyone has a role to play. Cross-chain bridges must be transparent about risks and resist the temptation of growth at all costs; they must also publish bug bounties. Security researchers and analytics platforms should publish public risk ratings and report incidents. Blockchain protocols and wallet operators should agree on lists of cryptocurrencies and smart contracts officially supported on each chain. Dapp developers should aim to deliver simple user experiences without throwing away the core tenets of decentralization and user ownership. And media outlets and key opinion leaders must help end-users to “do their own research.”

We must move away from “winner takes all” dynamics and offer a better future to user and developer communities. The Web3 movement gained traction because we wanted — and still want — to move away from the shackles of centralization. The seamless flow of information and tokens between different blockchains will be a major push towards a truly decentralized, multichain economy.

Ken Timsit is the managing director of Cronos."
13,739
2,022
"Universal Scene Description: The HTML of the metaverse | VentureBeat"
"https://venturebeat.com/virtual/universal-scene-description-the-html-of-the-metaverse"
"Universal Scene Description: The HTML of the metaverse

Ever see the movie “Finding Dory”? The 2016 Pixar film about a blue tang fish with anterograde amnesia might not be your thing, but it could be compared to the first-ever website, which went live at CERN on August 6, 1991. What’s the connection? The animated film was the first to be built using Universal Scene Description (USD) — which, many say, is a foundational building block of the metaverse. In other words, USD is the HTML for 3D virtual worlds.

“We weren’t thinking about the metaverse when we made USD,” Steve May, vice president and CTO of Pixar, said during a virtual panel discussion at Nvidia’s GTC event this week. 
“We did not anticipate that USD would grow this rapidly and this broadly.” Without a doubt, the metaverse is one of the hottest topics of discussion in the tech world — how to build it, govern it, monetize it — and USD is being lauded for its pivotal role in speeding up its evolution. And, in this, USD is on a journey the world has seen before. An easily extensible, open-source framework for the interchange of 3D computer graphics data, USD was specifically built to be collaborative, to allow for non-destructive editing, and to enable multiple views and opinions. Many compare its current iteration to HTML: Assets can be loaded and representation can be specified. Its next phase will be enhanced interactivity and portability — the CSS moment, so to speak. The general consensus is, “Let’s get to the JavaScript of USD,” said Natalya Tatarchuk, distinguished technical fellow and chief architect for professional artistry and graphics innovation at Unity Gaming Services. But first: Universal Scene Description origins As May explained, USD came to be because Pixar was looking to solve workflow problems around film creation. The studio’s movies involve complex and often whimsical worlds that must be believable. Many animators work on scenes at the same time, so Pixar needed a tool that fostered collaboration and was also expressive, performant and fast. USD essentially merged, distilled and generalized numerous spread-out systems and concepts that had been around within Pixar for some time. The framework was fully leveraged for the first time in “Finding Dory,” which was released in June 2016. The next month, Pixar made USD open source. Ultimately, May described the platform as “old and new”; it is nascent and evolving rapidly. And, because it is so versatile and powerful, it is being widely adopted in many other areas beyond filming and gaming — design, robotics, manufacturing, architecture. 
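The layered, non-destructive editing model described above, in which many artists contribute separate "opinions" that are composed into one scene, can be sketched in miniature. The toy Python snippet below illustrates only the "strongest opinion wins" idea; the function name and attribute paths are invented for this example and are not the real pxr/USD API.

```python
# Toy illustration of USD-style layered composition ("strongest opinion wins").
# NOT the real pxr/USD API: layers here are plain dicts mapping attribute
# paths to opinions, and compose() mimics a sublayer stack in which stronger
# layers override weaker ones without destructively editing them.

def compose(layers):
    """Compose a list of layers, ordered strongest-first, into one scene."""
    scene = {}
    for layer in reversed(layers):  # apply weakest layer first...
        scene.update(layer)         # ...so stronger layers override its opinions
    return scene

# One artist authors the base set; another overrides only the lighting,
# leaving the base layer untouched.
base_layer = {"/Set/Reef/color": "blue", "/Set/Reef/size": 10, "/Lights/key": 0.5}
lighting_fix = {"/Lights/key": 0.9}

scene = compose([lighting_fix, base_layer])  # strongest first
print(scene["/Lights/key"])     # -> 0.9 (the lighting override wins)
print(scene["/Set/Reef/size"])  # -> 10 (the base layer still contributes)
```

In real USD the analogous mechanism is the layer stack and its composition arcs; the point of the sketch is only that edits live in separate layers and composition resolves them by strength.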
Nvidia, for instance, took notice because the company had begun to develop content and apps internally for simulation and AI — particularly building worlds for simulating autonomous vehicles, explained Rev Lebaredian, Nvidia’s vice president of simulation technology and Omniverse engineering. The company needed a common way to describe and build worlds, “really large ones, collaboratively in many spaces,” said Lebaredian, and USD “stripped down to the essence of the problem.” Many file formats had come and gone over the decades, he said, but USD felt like “there was a lot of wisdom imbued in it.” Taking it home Similarly, home supply store Lowe’s had been leveraging 3D and augmented reality to present items to consumers, and the company wanted to expand such 3D visualization to operations, store design and the supply chain. Also, the company was looking for a way to describe digital twins for its stores — of which it has 2,000, with 20 different layouts and unique features in each, explained Mason Sheffield, director of creative technology at Lowe’s Innovation Labs. The company’s existing ad hoc system had different departments using Autodesk Revit, 2D CAD, SketchUp and others, he said. Understandably, this posed scaling challenges. But, in early 2021, Lowe’s adopted an Omniverse platform using USD that bridged its internal warehouse databases, shelf planning, store layout tools and product library. The company went from flat 3D models that had to be batch generated, to a hierarchical, shared file format (for instance, a planogram that can be altered and propagated throughout all stores), said Sheffield. “USD feels like a democratization of 3D that we hadn’t seen in other platforms,” he said. Collaborative evolution All that said, building blocks aren’t perfect. As Tatarchuk pointed out, USD is a vehicle for interoperability, and standards need to be evolved to get to portability. “It’s going to take all of us to align on it,” she said. 
Guido Quaroni, senior director of engineering of 3D and immersive at Adobe, said he would like to see the framework approach the web surface. This would enable authoring and not just consumption; also, there should be increased interoperability between apps and surfaces. Matt Sivertson, vice president and chief architect for media and entertainment at Autodesk, underscored the importance of allowing artists to use any tool they want. One longer-term potential of USD is driving down the cost — from a workflow perspective — of switching between apps. “It’s not just about the tools anymore,” he said. “A differentiation feature [will be] how well you support USD.” The ability to scale to different surfaces is important as well, said Sheffield; he would also like to see native solutions for USD deployment, and a gentler developer learning curve. “I’m excited for that evolution toward the real HTML of the metaverse,” said Sheffield. Ideally, get directly to HTML 5 and TypeScript, said Mattias Wikenmalm, senior expert for visualization at Volvo Cars. That said, while the concepts in USD have been “battle proven,” there is a “risk of making USD too complex too fast,” he said. We don’t want to end up in a situation where there are all sorts of plugins for different companies. “The building blocks are there, it’s just refining them and building on top of the solid foundation that’s already in USD,” said Wikenmalm. To continually support the evolution of the tool it released in the wild, Pixar is ramping up hiring for its USD team. This will help the company explore USD applications beyond filmmaking, said May. “There are a lot of things we still want to do, a lot of functionality we don’t yet have,” he said. Going forward, it will be critical to deeply engage with the community: “What does go into USD? What doesn’t go into USD? How do we avoid USD collapsing from its own weight?” “We need to make the right decisions — collectively,” said May. 
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13740
2022
"Crypto and blockchain will be Web3 key to securing the future of payments, one company says | VentureBeat"
"https://venturebeat.com/security/crypto-and-blockchain-will-be-web3-key-to-securing-the-future-of-payments-one-company-says"
"Crypto and blockchain will be Web3 key to securing the future of payments, one company says Web3 is set to become the next iteration of the internet. And, while it has been on the horizon for some time now, it remains to be seen just how it will look and operate. Still, while there is no rigid definition yet, several core principles are guiding its creation. Notably, it will be decentralized — its ownership will be distributed as opposed to controlled by a handful of large corporations; permissionless — providing equal access; and trustless — rather than relying on a central authority, participants must reach consensus. It will also have native payments. That is, cryptocurrency — in contrast with traditional banking infrastructure. 
But crypto, as we’re all aware, has had several significant security challenges. “Ultimately, Web3 is a young and evolving ecosystem,” said Deddy Lavid, cofounder and CEO of automated detection and response platform CyVers. “We are only at the beginning of creating its infrastructure — but one of the leading Web3 challenges is the theft of assets.” CyVers has sought to address this problem with its real-time prevention and detection platform; to help further this mission, the company today received an infusion of $8 million in funding led by Elron Ventures. Ultimately, the goal is to “bring proactive Web3 cybersecurity standards to financial institutions,” said Lavid. The power of blockchain At its core, Web3 uses blockchains, cryptocurrencies, and nonfungible tokens (NFTs) to put ownership in the hands of users. Blockchain is a distributed database of blocks linked through cryptography. This decentralized, distributed and public digital ledger records transactions across many computers so that records cannot be retroactively altered without network consensus or altering every subsequent block. Lauded as faster, cheaper, traceable, immutable, and universal, it is set to be the next financial system, said Lavid — but like Web3 itself, the technology is still in its early days. But many have been skeptical of crypto from the start — and a spate of cryptocurrency-based crimes have only served to compound that. The theft of digital assets surged to $22 billion in 2022, with 95% of stolen assets occurring in the decentralized finance (DeFi) sector. Recent high-profile hacks — to the tune of hundreds of millions — have included PolyNetwork , Ronin Bridge and Wormhole. Cross-chain bridges enable the transfer of digital assets and sensitive information between independent blockchains; this year alone, cross-chain bridge hacks have accounted for the majority of stolen crypto funds. More than $1 billion has been swiped via such hacks recently. 
Lavid pointed out that since the 2020 crypto rush, the market has been struggling with a spate of factors: liquidity, volatility, overhyped applications, bankruptcies, negligence, prevalent fraud and theft, overall mismanagement and a lack of trader trust. “It’s harder than ever to trust crypto,” he said. Real-time detection Still, the global cryptocurrency market is expected to reach nearly $12 billion by 2030, registering a compound annual growth rate (CAGR) of roughly 12% from 2022. And, by one estimate , the blockchain technology market is expected to balloon to $1.59 trillion by 2030, registering a CAGR of 87%. Platforms like CyVers have emerged as a result; the company’s software-as-a-service (SaaS) platform helps to detect criminal activity and provides real-time intelligence to stop it, Lavid explained. It is agentless and requires no deployment, and can be quickly integrated through an API or plug-in. Compliance, fraud and risk teams can automatically identify and respond to incidents across the entire crypto attack surface, said Lavid. CyVers can freeze illicit transactions and return stolen funds once an attack is detected — and before they become immutable on the blockchain. Lavid said the platform leverages artificial intelligence (AI) and machine learning (ML) to detect patterns of abnormal or suspicious activity or criminal behavior. Graph representation learning, or graph embedding, extracts structural information from networks, while representation learning enables graph-based ML, network-driven anomaly detection and visualization. In a screening process, addresses are checked against a deny list of addresses that the system has marked as problematic, Lavid explained. This list is based on extraction and analysis of the transaction history in different blockchain networks, smart contract events, logs, functions and other variables. This is then combined with data from public sources. 
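The deny-list screening step described above can be sketched as a simple function: given a transaction, check its counterparty addresses against the list of flagged addresses and emit a decision with a short report. Everything in this snippet (the addresses, the score values, the function name) is a hypothetical illustration, not CyVers' actual API.

```python
# Illustrative sketch of deny-list transaction screening, as described above.
# The addresses, score values, and function name are hypothetical; this is
# not CyVers' actual API, only the general shape of the screening step.

DENY_LIST = {"0xdeadbeef", "0xbadc0de1"}  # addresses previously flagged as problematic

def screen_transaction(tx):
    """Return an approve/deny decision and a short report for one transaction."""
    flagged = {addr for addr in (tx["from"], tx["to"]) if addr in DENY_LIST}
    decision = "deny" if flagged else "approve"
    return {
        "decision": decision,
        "risk_score": 1.0 if flagged else 0.0,  # toy binary score; real systems grade risk
        "report": ("counterparty on deny list: " + ", ".join(sorted(flagged))
                   if flagged else "no deny-list match"),
    }

result = screen_transaction({"from": "0xc0ffee00", "to": "0xdeadbeef", "value": 12.5})
print(result["decision"])  # -> deny
```

As the article notes, a production system would combine such list checks with ML-based anomaly scores drawn from transaction history and public data before issuing the final decision.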
CyVers then makes an immediate decision about whether to approve or deny a transaction, and a report is then issued explaining the decision and findings, as well as risk scores on transactions and suggested corrective actions. The company moves away from blacklists, code auditing, and fund tracing, identifying cyber-attacks and carrying out corrective measures within milliseconds, said Lavid. Founded in early 2022 by Meir Dolev and Lavid (who holds 11 patents in automated anomaly detection), CyVers serves financial institutions, banks, wallet providers, decentralized finance protocol companies, custodians, and exchanges. Trustworthy and transparent Web3 As for the benefits of Web3 itself, Lavid pointed out that centralized enterprises like Facebook, Google and Apple have helped millions of people join the current internet while establishing the infrastructure on which it runs. Still, that has given a handful of centralized entities a stronghold on large swathes of the internet, “unilaterally deciding” what should and should not be allowed. “Today, large technology companies monopolize the web,” he said. But Web3 is decentralized, meaning that users build, operate and own it. “Another way to look at it is Web2 is the internet of data,” said Lavid, “and Web3 is the internet of values.” And, CyVers aims to help enable that. As Elik Etzion, managing partner at CyVers investor Elron Ventures, commented, “together, we are enabling a world of a trustworthy and transparent Web3.” "
13741
2022
"How Seoul is creating a metaverse for a smarter city | VentureBeat"
"https://venturebeat.com/ai/how-seoul-is-creating-a-metaverse-for-a-smarter-city"
"How Seoul is creating a metaverse for a smarter city The city of Seoul, South Korea, is planting the seeds for a metaverse ecosystem called “Metaverse Seoul” for all areas of its municipal administration. The effort combines digital twins , virtual reality (VR) and collaboration to improve city services as well as planning, administration and support for virtual tourism. At the MIT Future Compute Conference, Seoul smart city policy bureau CIO Jong-Soo Park elaborated on the city’s vision and current progress. Today, users can create avatars and explore a virtual representation of the mayor’s office. The long-term vision is to add support for business development services, education and support for city services for filing complaints, inquiring about real estate and filing taxes. The city also hopes to operate the project as an open and free service for citizens. 
Building on a connected city Seoul is already one of the world’s most connected cities, with over 95% of its ten million residents connected to 4G or 5G services. In addition, the city government provides an extensive network of free Wi-Fi with over 100,000 access points. Park said the project has three main goals. First, the city wants to make it easier for citizens to connect with government services and each other. Second, it wants to overcome the constraints of time, space and language. Third, it wants to explore new ways to improve user experience and satisfaction. The platform will help consolidate access to various city services. It will also make it easier to expand services that take advantage of 3D digital twins to improve access to local security footage, report fires and improve public infrastructure. For example, the S-Map service already provides a digital twin for urban planning, real-time fire monitoring and wind path analysis. A safety service called the Ansimi App connects users with Seoul police services, who can tap into local location data and camera feeds to speed investigations. Different realms in the metaverse The city is approaching this project with a five-year plan to provide increasing capabilities around several key areas. A business services portal is already providing startups a place to showcase new business ideas and services. An educational portal brings together 34 campus towns to provide coaching, collaboration and networking opportunities. Virtual tourism services allow locals and international visitors to explore current attractions and historical recreations. Down the road, the city is working on the infrastructure to support large festivals and museum exhibits. 
Eventually, the project will also provide virtual coworking spaces to allow citizens to work remotely as if working in a real office. “We hope to one day have an AI-based public servant working in the metaverse office in close collaboration with others for public services,” said Park. "
13742
2022
"Why you should work for a company that values diversity and inclusion | VentureBeat"
"https://venturebeat.com/programming-development/why-you-should-work-for-a-company-that-values-diversity-and-inclusion"
"Sponsored Jobs Why you should work for a company that values diversity and inclusion The U.S. population, and as a result the nation’s workforce, is becoming increasingly diverse. According to the Bureau of Labor Statistics, the white share of the labor force is projected to decline from 84% in 1994 to 77% in 2024, while the minority share is projected to increase from 15% to 23%. Within the modern working world, “diversity and inclusion” is something that is at the forefront of many corporate mission and value statements, and it has been amplified by the unprecedented social justice movements that swept the world in 2020. Huge priority It’s also a huge priority with job hunters. Around 78% of employees in the tech industry shared that diversity, equity and inclusion (DEI) initiatives are very important to them when considering whether or not to accept a job offer. For Black, Indigenous, and People of Color (BIPOC), this number jumps to 88%. But what does diversity and inclusion in the workplace really mean? 
Diversity refers to political beliefs, race, culture, sexual orientation, religion, class and/or gender identity differences. And inclusion means that everyone in the diverse mix feels involved, valued, respected, treated fairly and embedded in the company culture. Essentially, diversity and inclusion is a conversation about rewriting implicit bias — rooting it out wherever it exists and challenging the idea that different means inferior. The overall goal of diverse and inclusive practices is to build a workforce that reflects the available labor market with all talent groups equally represented, and not excluding anyone because of their differences. Aside from this being a moral imperative, an inclusive workplace has financial and productivity benefits for all involved. Setting your sights on an employer which is committed to DEI initiatives may well be the way to develop and future proof your career in 2023. And here’s why: Diversity sparks innovation Everybody wants to work for an innovative organization — one that’s a leader in its field. Typically, these are companies that can anticipate market trends, industry disruptions and technological change, thrive in the face of this change and empower employees to do the same. A 2018 study by Harvard Business Review found that companies with above-average total diversity had 19% higher innovation revenues. And it’s not hard to imagine why. A diverse group of people will have an easier time coming up with inventive solutions to problems than a group that all bring a similar set of life experiences to the table. Organizations which prioritize DEI initiatives will always be more effective and adaptable. True creativity is fostered where different world views and skills collide, and increased creativity, in turn, leads to greater innovation. 
Worth noting are inclusive employers on the VentureBeat Job Board like Netflix (with current open roles ) which has a business model powered by the concept that better representation on-screen starts with representation in the office. Netflix firmly believes that the company performs better if employees come from different backgrounds, and if an environment of inclusion and belonging is created for them. Strengthening soft skills Both in educational and professional environments, cultural diversity benefits everyone. It paves the way to more empathy and compassion, deepened learning and approaches the world from various perspectives. A culturally-diverse workplace empowers people to develop their soft skills — particularly their curiosity and adaptability. Understanding the true meaning of diversity and inclusion positively affects all workers, regardless of whether or not they themselves are part of minority groups. Cultural diversity works to embolden individuals with good emotional judgment and teamwork skills to foster a better workplace culture. Check out progressive companies such as Ripple (with great opportunities for software engineer roles in New York and San Francisco) which prioritizes an inclusive collaborative work environment as policy. Ripple believes that in order for you to do your best work and thrive, the company must provide a space where no matter what race, ethnicity, gender, origin or culture they identify with, every employee is a respected, valued and empowered part of the team. DEI builds trust An employer which values diversity and inclusion will actively create a culture that is open and welcoming to all. DEI tools and programs give employees the ability to be themselves at work without fear, creating a sense of belonging that translates into positive outcomes in many areas of the organization. 
As a result, employees don’t feel they have to “fake it” to fit in or hide material aspects of who they are, or what is important to them, to feel included, allowing them to flourish as their authentic selves. Also, an equitable environment fosters a sense of mutual trust and respect in the workplace. If you’re looking to work for a company where you will feel respected, understood and valued, look no further than the many job opportunities currently on offer at eClerx. With positions available throughout the U.S. from New York to Salt Lake City to Dallas for starters, eClerx is a global business with a progressive heartbeat. It’s particularly proud of its inclusive culture and its diversity — of thoughts and people. eClerx also has a terrific track record with employee engagement, achieved by a commitment to access to resources, continuous training, coaching and mentorships. If your current employer’s values aren’t aligned with yours, then it’s time to have a good look at alternatives for 2023. Your first stop is the VentureBeat Job Board, where you can browse hundreds of open roles right now. "
13743
2022
"5 principles needed to humanize metaverse experiences | VentureBeat"
"https://venturebeat.com/virtual/5-principles-needed-to-humanize-metaverse-experiences"
"Guest 5 principles needed to humanize metaverse experiences Picture the moment you arrive at a theater to attend a concert, talk, or play. Anticipation builds as you walk through the warmly lit entryway, ticket in hand. As you ascend the stairs, the doors swing open to reveal the grand scale of the space, the murmuring audience, and the spot-lit stage. As you find your seat, the lights dim, the curtains part, and the opening music swells. The show is about to begin. Events are defined by their rituals, their sense of mounting thrill and narrative progression. From the moment you approach the entrance all the way until the final applause dies down, a well-designed theater will impart a sense of shared occasion and purpose. Historically, people are great at building these venues — spaces that enhance the quality of our communal experiences — in the physical world. 
And it is just as possible to build them in a virtual one. With virtual reality (VR) steadily entering the mainstream — just this month, news broke on two new headsets from Meta and Sony , both set to broaden VR adoption — it’s vital that designers create virtual spaces that acknowledge our humanity. As someone who designs virtual venues used by thousands, I want to share the learnings my team and I have gathered so that other designers can create experiences that will remain in memory long after the headsets come off. Take inspiration from the real world, but note the differences The fundamentals of virtual event spaces are similar to those of real-life venues, and so is the process of designing them. Often, our design team brings in architects to ensure we learn from real-world principles. “There are considerations specific to the audience, the program, and the context — it’s just that this audience is made up of avatars and the context is virtual,” says architect Christopher Daniel, who designs both real and virtual performance venues. “We have the opportunity to work with features from a concert hall in Berlin or a theater in Buenos Aires, sidestep physical limitations, and create virtual places that feel both fantastical and authentic.” Bear in mind that virtual spaces do have different demands, however. We’ve found that virtual audience members require more space between seats to feel comfortable. And sight lines from seats to stage must allow for the fact that audience members are simultaneously in the room together, as well as around the world in separate physical environments. This means that avatars move more often, and more erratically, than they would in a physical venue. 
To ensure that other audience members aren’t distracted, we typically make each seating tier higher than it would be in a physical space, with the seats more spread out. Be specific with your material choices Creating convincing virtual experiences is an exercise in world-building. Whether an environment is wholly fantastical or based in reality, that it feels “true” is an essential factor in its immersive potential. We experience virtual worlds up close, which means that every environment requires attention to fine detail. From the kind of stone chosen to the cut and grain of wood — think mahogany or red cedar, not just “brown wood” — a high level of craftsmanship will make your space feel like a destination to which people will want to return. Design virtual spaces with audio in mind The most convincing virtual reality spaces are multisensory, so a thoughtful use of audio elements is key to placing the audience inside a new world. There are many techniques to consider, including environmental sound, spatially anchored sound, audio feedback to reward specific interactions, or a mix of each. Regardless of your approach, effective spatial audio adds tangibility to a space while deepening the impact of compelling visuals. The sound of distant lapping waves, or a seagull passing overhead, can bring a space to life, so consider how your landscape contributes to your soundscape. Empathize with your audience Virtual reality poses a new challenge to creatives: When you can make anything, how do you choose where to begin? An initial discovery phase is key to deepening understanding of a space’s purpose and intended audience. How do you want your guests to feel? How will the space serve them? Or surprise them? The aim is for artists, user experience (UX) designers, and technologists to be open to inspiration at this stage while keeping the audience and the event’s purpose top of mind. 
At this point, it’s also critical to establish constraints and define what the environment is not. We often use Miro and Pinterest boards to highlight elements to avoid — low ceilings, strip lighting, flashy chrome — so that we don’t build something generic or characterless. This process helps the creative team eliminate ambiguity, build a shared visual vocabulary, and air out any assumptions. Think of your virtual event as a story With each virtual reality event, we are telling a story with a beginning and an end, much like a real-life performance. To ensure attendees feel that narrative progression, it’s helpful to provide cues inspired by screenwriting fundamentals, like the classic three-act structure. The start of each event, for example, should serve as your first act, one that’s characterized by scene-setting and exposition. Welcome your guests in, show them around, and provide initial information that inspires them to explore more. It’s important to guide attendees — many of whom might be new to virtual reality — gently from the start before escalating complexity. That rising action should culminate in the event’s keynote presentation or performance, generating a different audience response. It’s also vital that guests understand what to do when the main event ends by providing clear next steps for exiting the space and moving on. Humanity will remain vital even as technology evolves Like most technology, virtual reality is evolving exceptionally quickly. Today’s designers face the task of optimizing experiences around the constraints of current headsets while also preparing for the next evolution. The future will present even greater challenges. Artificial intelligence (AI), for example, will soon generate not just concept art but entire virtual worlds. Designing spaces with storytelling at their heart will continue to be a human differentiator. As we venture out into the metaverse , let’s not forget our humanity. 
Michael Ogden is chief creative officer at the VR company Mesmerise , where he runs their in-house creative lab, Atmospheric. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,744
2,022
"Who’s running this metaverse, anyway? | VentureBeat"
"https://venturebeat.com/virtual/whos-running-this-metaverse-anyway"
"Community Who’s running this metaverse, anyway? Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. According to Wikipedia , “metaverse” is defined as “a hypothetical iteration of the Internet as a single, universal and immersive virtual world that is facilitated by the use of virtual reality and augmented reality headsets.” Forgive me if I’m wrong, but hasn’t the term “cyberspace” defined the same concept since William Gibson wrote “Burning Chrome” in 1982? Check your Wikis again. Except for an esoteric art collective in the ’70s having naught to do with digital spaces, the popular concept of “cyberspace” has always described “a widespread interconnected digital technology…dating back from the first decade of the diffusion of the internet” and refers to the online world as “a world apart,” distinct from everyday reality. 
Not to seem obtuse, but isn’t the only difference then that instead of plugging a stereo jack into the back of our heads, we are—for now—using goggles and handsets? Perhaps the term “metaverse” would be better used to describe the reality of a digital multiverse where many smaller digital landscapes exist. Why the distinction? Well, primarily because what is needed isn’t a fight for brand supremacy, a virtual version of the fight for market dominance witnessed between Apple and Microsoft, which continues to this day. What is needed now, today, up front, is a way for these separate virtual reality (VR) landscapes — metaverses — to work together. These small digital ecosystems such as The Sandbox, Cryptovoxels, and many more on the way will need to be designed with protocols on board that allow them to communicate and operate with each other, in synchronicity, inside the greater virtual reality of a singular metaverse. Users need to open the door of their virtual office in one metaverse and move via their avatar seamlessly into another, where their favorite game or perhaps their bank exists. How will those protocols come into existence? How will they be written, and who will be writing them? With the advent of 5G, a technology that promises to deliver download speeds up to 10 gigabits per second, there will be multiple ways to access the various metaverses. These methods risk becoming as needlessly incompatible as the proprietary charging cables that prevent one brand of laptop from using another brand’s charger, making users reliant on another branded, technologically unnecessary service. Attention must be paid to cyberspace interconnectivity protocols now and, more importantly, to the development of the associated workforce to implement these protocols. 
Not five or 10 years down the road, when 50 focus group-approved branded metaverses compete to provide similar services with wildly varying results. Think of it this way. Right now, we live in a world where Web2 platforms require logins, passwords, security questions, and/or one-time text message codes to validate that users are authorized. This Web2 world is clearly login- and password-driven. But the future — Web3 and the metaverses — will ideally be driven by blockchain. Do we really want to stop repeatedly in the middle of transitioning from one metaverse to another to log in again? Or do we want to simply be recognized once and have those credentials travel with us wherever we go? If we don’t have the appropriate framework in place, choke points will result, causing at the very least inconvenience and, at worst, complete chaos. With the vast, diverse potential of the various metaverses in cyberspace, there are basic concepts and capabilities we should examine. As I contrast VR with the smart city work I have done, many of the same questions and problems need to be addressed: Who will manage these portals, and how will they be managed? What kind of programming knowledge will be needed in metaverses that will almost certainly have the look and feel of video games? Those skill sets are already in surprisingly limited supply. Will the metaverses require a completely new data transport architecture? Also to be considered are microservices, open-source platforms and artificial intelligence (AI) systems. How will these and other emerging technologies fit within the multiple metaverses? Do the current industry high-density compute platforms meet the requirements? Will we need serious innovation and advancement in chipsets and compute power? Where does quantum fit into this picture? How will crypto and NFTs integrate? 
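The "recognized once, credentials travel with us" idea above is essentially a portable identity token that any world can verify locally instead of prompting a fresh login. A minimal sketch, assuming a single shared verification key as a stand-in for what blockchain signatures or decentralized identifiers would provide in practice (all names here are hypothetical, not any real metaverse API):

```python
import base64
import hashlib
import hmac
import json

# Stand-in for the trust anchor every participating world shares.
TRUSTED_KEY = b"demo-federation-key"

def issue_credential(user_id, key=TRUSTED_KEY):
    """Sign a portable identity token once, at first login."""
    payload = base64.urlsafe_b64encode(json.dumps({"user": user_id}).encode())
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify_credential(token, key=TRUSTED_KEY):
    """Any world can check the token locally -- no second login round-trip."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered, or issued outside the trusted federation
    return json.loads(base64.urlsafe_b64decode(payload))["user"]

token = issue_credential("avatar-42")
assert verify_credential(token) == "avatar-42"
assert verify_credential(issue_credential("avatar-42", b"wrong-key")) is None
```

The design point is that verification needs no call back to the issuing metaverse, which is exactly what removes the choke point at each portal.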
You might share your new NFT on your cell phone like a cat picture, but that Bored Ape or magnificent in-game sword will only be amazing in your 3D VR man cave. These challenges may feel insurmountable. They’ll no doubt require an immense investment into the development of integrated metaverse architecture on a scale we have never seen before. That much is certainly true. All I have to do is think about the many computing epochs I’ve already seen come and go in my working life, and all of a sudden, the lift needed here doesn’t seem greater than other hurdles we’ve overcome in technology and systems. Consider the 9.6 kbps fax machine and floppy disks. Then came Windows for Workgroups providing the first large-scale ability to connect communities of users, and ultimately, the internet reached global scale. That was soon followed by the rise of smartphones, and the millions of apps they support. Now, after a rise in bot technology, conversational AI and other new computing paradigms, we see the emergence of a global blockchain technology movement, replete with mass adoption of its ideas, protocols, and products. Past lessons have already laid out the punch list needed to make the rise of the metaverses an incredible event in our brief history. We have to be smart enough to learn from these lessons while ensuring our freedoms in the virtual world will equal and enhance our freedoms in the real world. Mark Schonberg is the chief strategy officer and chief of staff at 2B3D, Inc. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. 
"
13,745
2,022
"How the metaverse connects to Web2 storefronts to capture new markets | VentureBeat"
"https://venturebeat.com/virtual/how-the-metaverse-connects-to-web2-storefronts-to-capture-new-markets"
"How the metaverse connects to Web2 storefronts to capture new markets While the metaverse is far from being fully here, it’s garnered traction across multiple industries and might be “the future of the internet.” In fact, trends show that rapid interaction and participation in the metaverse will lead to an immersive experience that brings more people flocking to it. Gartner predicts 10% of public events — such as sports and performing arts — will offer participation in the metaverse, fueling rapid buildout of commercial metaverse-shared experiences by 2028. This is a projection that LORR’s CEO, Nova Lorraine, agrees with. “The metaverse of the future will be a new interconnected web of physical and hyperrealistic virtual experiences — where digital asset ownership is the norm, not the rarity, avatars’ visual expression and sophistication is more important than social media reels, and meetups in virtual environments and events start eclipsing traditional Zoom,” Lorraine said in an interview with VentureBeat. 
McKinsey’s State of Fashion 2022 report shows that “as consumers spend more time online and the hype around the metaverse continues to cascade into virtual goods, fashion leaders will unlock new ways of engaging with high-value younger cohorts.” Furthermore, “to capture untapped value streams,” according to the report, “players should explore the potential of nonfungible tokens, gaming and virtual fashion — all of which offer fresh routes to creativity, community-building and commerce.” If there are going to be as many events in the metaverse as expected, people need to properly dress their avatars for the occasion. Avatars must be outfitted, at the behest of the people they represent, with luxury and fashionable items. As the metaverse stretches possibilities for users in virtual realms, digital fashion is being redefined for many in that space today. But what might the future of digital fashion be in the metaverse? The digital fashion future is bright, but it must be nurtured Fashion in the metaverse is intangible. There is no need for physical clothes, which makes it easier for users to experiment and create lavish wardrobes for themselves, far grander than what would be possible in the real world. Furthermore, since the clothes are in the form of digital collectibles or nonfungible tokens (NFTs), they can be freely traded across open-NFT marketplaces, adding to their long-term value, which many physical or second-hand clothing items do not possess. Garments minted as NFTs are digital assets registered with unique data stored on the blockchain. 
This means that even though an image of a virtual dress could be seen or even saved by anyone on the internet, the person who purchased it — whether as a unique one-off or part of a limited run — can prove their ownership, and subsequently sell or trade it, with the value increasing or decreasing just as with a physical garment. “The industry has realized that the virtual world, despite being based on imaginary creations, actually has profound utility when it comes to garments,” according to Lokesh Rao, CEO of Trace Network Labs. “The evolution of design technologies allows creative freedom for all designers, but some clothes they design can never be worn in the real world. The metaverse removes this hurdle — a digital avatar can wear any garment without any constraints of type, design, fabric and use.” While several brands are jumping head-first into the metaverse, LORR’s founder, Prasanna Hari, preaches growth that’s nurtured and systemic, seated on the foundation of education. “ For adoption to happen, we need to go from Web2 to 2.1 to 2.3 and not hyperleap to Web3. A lot of the current virtual spaces in other metaverse platforms are expensive and not everyone has seven, eight or nine figures in their budget to launch brand experiences in the metaverse. Access is about giving small- to mid-sized fashion retailers and brands an opportunity to enter Web3.” LORR helps connect the 3D environments of the metaverse to a client’s existing Web2 storefront. Additionally, stores on the LORR platform have the option to upgrade features. “Through the metaverse, the world has witnessed a technological revolution,” Hari added. “We are now in a space where we can reimagine new realities in a bespoke virtual-reality space. 
Users interact in a new way within a bespoke computer-generated environment.” Metaverse fashion will keep evolving While the top guns in the world of retail fashion are going into the metaverse, some require help showcasing their products to their clients and prospects. It’s in these supportive roles that companies like LORR and ByondXR shine the brightest. LORR, driven by the inherent rise of digital fashion, provides small- to mid-size luxury and bespoke brands an opportunity to easily add a metaverse environment to their omnichannel strategy. ByondXR, on the other hand, pitches its tent with bigger players. LORR is leveraging Unreal Engine to build what it calls “a one-of-a-kind hyperrealistic metaverse that will empower retailers to share their brand’s story in a more immersive way.” To design virtual stores, LORR uses digital asset management tools to create and render 3D assets that are personalized to each retailer. These technologies can be applied to showcase consumer goods, digital twins of physical real estate properties, health-tech services and many more. Conversely, ByondXR provides a platform for brands and enterprises to build 3D and metaverse stores for their customers and buyers. In addition, the platform enables users and partners to fully manage and configure their virtual stores by adding plugins, features, visuals and more. Controlling plugins lets users add their ecommerce, media, and fun elements inside. Users can also decide what, when and how they want to showcase in their store — making managing a virtual store comprehensive, easy and quick. Digital fashion in the metaverse will continue to evolve, riding on several immersive technologies that offer users flexibility and ownership. 
While it’s still uncertain how avatars and other fashionable assets might look in the future, McKinsey notes that “the emergence of the metaverse offers incredible potential for fashion and luxury players.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
13,746
2,022
"The metaverse: Land of opportunity for retailers | VentureBeat"
"https://venturebeat.com/virtual/the-metaverse-land-opportunity-retailers"
"Guest The metaverse: Land of opportunity for retailers Although everyone agrees that the metaverse is coming, there’s no broad consensus on what it really is, and how it’s going to work. Retailers see it as essential to serving customers, a natural progression from mobile, social media and the internet. But how will retailers shift the intricacies of selling physical goods over to a virtual world? That question and more are emerging as businesses begin exploring how they might navigate the uncharted waters of the metaverse. It’s important to learn “how we separate what we do in the virtual space from what we do in the physical world that’s further converging,” metaverse expert Cathy Hackl says. 
“Commerce is evolving as we head into these new virtual spaces and shared experiences, both in the virtual and physical world[s].” In this context, technology has a new mission: illuminating how users behave in virtual retail spaces. Detecting who is visiting your virtual store, where and when they’re visiting, how they’re interacting with your products and for how long, why they engage with specific content — all of this information is crucial to retailers. To know what works in the metaverse, retailers need to follow the customer journey closely, and use customer insights to help guide the design and product placement in their virtual stores. The metaverse by the numbers Researchers have been studying the metaverse marketplace recently, and their predictions are optimistic: In a metaverse market report, Verified Market Research projects that the market will reach $825 billion by 2030; in comparison, the metaverse market was valued at $27 billion in 2020. Gartner predicts that by 2026, 25 percent of people will spend at least one hour each day working, shopping and more in the metaverse, while 30 percent of organizations worldwide will offer products and services in the metaverse. A McKinsey report on value creation in the metaverse forecasts growth to $5 trillion in value by 2030, with ecommerce the greatest force at $2.6 trillion. Further, 95 percent of business leaders expect the metaverse to have a positive impact on their industry over the next decade, McKinsey says; 31 percent say the metaverse will change the way their industry operates. 
As for consumers, McKinsey senior partner Eric Hazan (lead co-author along with senior partner Lareina Yee) notes that among more than 3,400 surveyed worldwide, “two-thirds are excited about transitioning everyday activities to the metaverse, especially when it comes to connecting with people, exploring virtual worlds, and collaborating with remote colleagues.” Real customers in virtual spaces Consumers may be excited about shopping in the metaverse, but companies still have much to learn about designing appealing virtual retail spaces that consistently lead to sales conversions. Retailers need to know the demographics of their shoppers (age, gender, location, etc.), and how customers move through the virtual store. How long do they stay? What areas do they visit and interact with? What products do they engage with? And what do they buy? Metrics like these — commonly used to assess online shopping sites and, even more commonly, physical store traffic — don’t yet exist in the metaverse. It’s still unknown territory. But tools that track consumer behavior in virtual retail stores could help retailers see what areas customers are visiting and how they’re engaging with products. Ideally, metaverse retailers can use such customer insights to guide the design of their virtual spaces, from layouts to product placements, as their long-term ecommerce strategies shift from one-time campaigns to full-blown flagship stores, enabling them to collect and analyze data on the behavior of their metaverse consumers. Virtual-to-physical, physical-to-virtual Retailers have some fundamental concepts to explore as they figure out how to sell products in the metaverse: the familiar virtual-to-virtual model will be joined by new virtual-to-physical, physical-to-virtual and direct-to-consumer models. Virtual-to-physical commerce enables customers to purchase goods while shopping in a virtual environment and have those purchases delivered to their real-world homes. 
The reverse, physical-to-virtual commerce, offers actual products designed to unlock a virtual experience (via a scannable QR code, for instance). Direct-to-consumer commerce may have the greatest potential of all, considering that during 2021 alone, some $100 million was spent on virtual goods in gaming platforms. Scaling up each of these models will play a large part in determining whether those future projections of billions of dollars will be on target. And leveraging the data each will generate — all of it critical to retailers — will require further technological breakthroughs. A new land of opportunity According to Gartner, “Enterprises experimenting with the metaverse can connect, engage with and incentivize human and machine customers to create new value exchanges, revenue streams and markets.” Capturing these economic opportunities will require companies to adopt new digital business assets (DBAs) along with metaverse-friendly updates to product development, brand placement, and customer engagement strategies, and “financial flows in the virtual world.” This “new land of opportunity” will be open to entrepreneurs, too, says metaverse guru Dirk Lueth, who co-authored the new book “Navigating the Metaverse” with Cathy Hackl and Tommaso DiBartolo — and it’s a limitless land. Today, “entrepreneurship will get a completely new twist in the sense that anyone can become an entrepreneur in the metaverse,” Lueth says. “It doesn’t matter where they’re from … People anywhere in the world can make it.” The metaverse is very much a work in progress, calling for innovative technology, up-to-the-minute data management and analysis, fresh products and a pioneering approach to commerce. Yet even while “everything’s evolving,” Hackl believes that brands must start building their metaverses now. 
“If you wait a year and a half or two years to do something, to have a clear strategy,” she cautions, “it might be a little bit too late.” Olga Dogadkina is founder and CEO of Emperia. "
13,747
2,022
"Why customer experience is at the center of metaverse retail | VentureBeat"
"https://venturebeat.com/virtual/why-customer-experience-is-at-the-center-of-metaverse-retail"
"Why customer experience is at the center of metaverse retail In the last decade, advancements in technology have reshaped how retail companies do business. Following COVID-19, new and immersive technologies like virtual reality (VR), extended reality (XR) and mixed reality (MR) are changing consumer behavior, employee expectations and the shopping experience. Retail is migrating to the metaverse. Years ago, these immersive technologies were seen only in science fiction books and movies. Then they made inroads into the gaming industry. But today, we’re seeing immersive technologies — which power the metaverse — impacting business operations across big and small retail companies. According to a 2022 Raydiant consumer behavior study, 56.6% of survey respondents prefer to shop online rather than in person — almost a 10% increase from 2020. In another study by PwC, about 32% of VR users shopped on VR platforms in the first half of 2022. 
Betsy Morse Rohtbart, VP, global web and ecommerce at Vonage, said in the Social Media Trends 2023 report by Talkwalker and Khoros that the online shopping experience is poised to exceed the physical shopping experience in total market size. “By using communications technology through social and messaging platforms, online retailers are breaking through the bricks and mortar and directly connecting with customers (both reactively and proactively) and, in doing so, moving past the one-time transactions to two-way conversations and ongoing engagement for more meaningful relationships,” said Rohtbart. Lauren Mathews, a writer covering retail, noted in an article published by Shopify that “while physical retail has exceeded expectations, the pandemic has forever altered how we shop in person. Consumers expect stores to be digital-first, focused on speed, convenience, and community. As retailers navigate the future of physical retail, they will need to transform their store strategies to meet evolving customer needs.” Bigcommerce reports “ecommerce sales are expected to surpass $740 billion in the U.S. alone by 2023,” especially because of factors like “increased use of smartphones and mobile shopping, social media and social commerce, transformative technology” and others. Retail looks to the metaverse to craft engaging customer experiences The data show that this trend will continue, and businesses must level up their omnichannel strategies and find alternative marketing channels for delivering smooth shopping experiences. “In 2023, at least three more major providers of collaboration technologies — think Zoom, Slack, Webex, or Google apps — will add 3D metaverse-style features, reaching tens of millions of potential users,” according to Forrester’s new 2023 predictions. 
Forrester further predicts that “consumers’ tolerance for poor brand experiences will fall for the first time in three years,” adding that “businesses should expect to ramp up customer support and social care teams in 2023 to handle a harder-to-please customer.” The trajectory is clear: Businesses that prioritize customer experience are more likely to thrive than those that don’t — even in metaverse retail. Eran Galil, cofounder and CTO at Israel-based retail tech company ByondXR, told VentureBeat that companies must create engaging experiences for consumers if they are to improve sales. Galil said that’s why ByondXR developed a proprietary ecommerce platform that provides scalable immersive solutions to help retailers create and manage virtual stores and showrooms that increase customer engagement and sales.
In metaverse retail, customer experience is key
Customer experience is a key aspect of retail, and it won’t be any different in the metaverse, said Scott Keeney, chief metaverse officer (CMTO) at TSX Entertainment. Keeney told VentureBeat that enterprises should focus on creating great experiences for customers. “We need to stop talking about the technical terms and just make great experiences. Because consumers don’t care to know how the meal is cooked. They just want something that tastes good and is healthy,” Keeney said. Galil said ByondXR recognizes how crucial customer experience is to retail in the metaverse, adding that the company’s self-service platform enables brands to build and manage immersive shopping experiences while allowing other partners to provide their technologies in already-launched stores. ByondXR’s technology stack includes live shopping, virtual try-ons, NFTs, avatars, AI bots, 3D asset creation and optimizations, 3D rendering, AR/VR, embeddable mini-games, cloud rendering, behavioral analytics, authoring and publishing tools and more.
Another way ByondXR prioritizes customer experience, according to Galil, is by allowing retailers to add their own ecommerce, media and fun elements through control plugins. “Users can now decide what, when and how they want to showcase in their store, making managing a virtual store comprehensive, easy and quick,” he said.
Trending now
Galil noted some of the most active trends in the metaverse today:
Web 3D: Brands are increasingly looking to build 3D experiences with photorealistic environments that enable customers to engage meaningfully in the metaverse.
Gaming platforms: With a billion people today playing games on metaverse platforms like Roblox, Zepeto and Fortnite, brands are looking to tap into the audience that’s already there.
Avatars: Brands are now utilizing avatars as hosts, sales assistants, models and more.
Galil said ByondXR’s platform assists these trends by offering a technology that helps its clients build ultra-photorealistic environments, and a metaverse onboarding bundle that includes integrating into the most-used gaming platforms. He said the platform provides “a comprehensive cross-metaverse platform bundle that enables companies to build, manage and grow without the need of an external workforce, in a full self-service way.” He added that ByondXR partners with leading companies creating avatars and other plugins to help develop the most engaging customer experiences. To effectively tap into the benefits of the metaverse, Galil said enterprises must learn what options, capabilities and trends they want to follow. Brands, he said, should build a baby-step strategy, deciding which platform and experiences they would like to create for their customers, finding the right partners to execute it and testing their strategy to determine what works best for them. Only then can retailers reap the benefits of offering customer experience tailored to the metaverse, where more and more consumers will be doing their shopping in the years to come.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13748
2022
"Using cryptocurrency to attract and retain employees | VentureBeat"
"https://venturebeat.com/2022/05/12/using-cryptocurrency-to-attract-and-retain-employees"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Using cryptocurrency to attract and retain employees Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. It can be especially difficult for startups to compete for good people. Google, Amazon, Facebook and other tech giants have hiring war chests that startups simply can’t match. And it’s not just other tech companies that startups must compete with. In 2019, according to an analysis by Bain & Company , approximately 40% of software engineer and developer hires were made by companies outside of tech. So what can the “Davids” of the tech hiring battles to do? As a lawyer who serves as fractional general counsel to startups, I have an up-close perspective on how companies are hiring. One trend I’m seeing is companies offering cryptocurrency in a bid to lure workers. Some “Goliaths” are looking at crypto as an employee incentive, too. 
On CNBC, Twitter’s CFO said, “We’ve done a lot of the upfront thinking to consider how we might pay employees should they ask to be paid in bitcoin.” Even the City of Miami is getting in on the action. Mayor Francis Suarez announced in October that he is moving forward with a proposal to pay city workers in bitcoin. So why are employers opting to incentivize workers with cryptocurrency? Put simply, the calculus in most cases is that it’s a form of differentiation that may attract workers looking for a forward-thinking, innovative employer that offers strong benefits and compensation. For the right worker (and often it’s the type of worker that a tech startup is looking for), a $10,000 starting bonus in bitcoin — because of, not in spite of, its volatility — may be seen as more valuable than a $10,000 cash bonus. Cryptocurrency compensation can also be an attractive option when a startup operates remotely and its workforce is dispersed around the world, as paying with crypto involves less red tape, time and expense than transferring U.S. dollars across jurisdictions often requires.
Is it legal to pay workers in cryptocurrency?
As with most legal questions, the answer to whether it’s legal in the United States to pay workers in cryptocurrency is “it depends.” A number of factors must be examined, including whether the “pay” at issue is wages or other forms of compensation. The Fair Labor Standards Act (FLSA) requires “payments of the prescribed wages, including [minimum wage and] overtime compensation, in cash or negotiable instrument payable at par.” Since cryptocurrency is not cash, the question becomes whether a payment of wages to an employee in crypto would qualify as a payment “at par.” Again, there’s no clear answer.
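The valuation side of that “at par” question is straightforward to picture: a bonus promised in dollars has to be converted to bitcoin at some agreed spot price. Here is a minimal, hypothetical sketch of that conversion — the function name, the $40,000 spot price, and the rounding policy are illustrative assumptions, not anything an employer here actually uses:

```python
from decimal import Decimal, ROUND_DOWN

def bonus_in_btc(bonus_usd, btc_spot_usd):
    """Convert a USD-denominated bonus into BTC at an agreed spot price.

    Illustrative only: a real payroll system would also need an agreed
    price source, a conversion timestamp, and USD tax withholding.
    """
    btc = Decimal(str(bonus_usd)) / Decimal(str(btc_spot_usd))
    # Bitcoin is divisible to 8 decimal places (1 satoshi); round down
    # so the employer never promises a fraction of a satoshi.
    return btc.quantize(Decimal("0.00000001"), rounding=ROUND_DOWN)

# A $10,000 starting bonus at a hypothetical $40,000/BTC spot price
print(bonus_in_btc(10_000, 40_000))  # 0.25000000
```

Using Decimal rather than floats keeps the money math exact, which matters when a payroll record has to reconcile to the cent.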
Certainly, an argument can be made that bitcoin, for example, is akin to a currency (although the IRS classifies it as property) with a demonstrable value and liquid marketplace, but as of today neither the U.S. Department of Labor nor any court has provided clarity on the issue. It’s important to keep in mind that federal law is not the only hurdle businesses face when it comes to using cryptocurrency as a form of employee compensation. Different states have different rules as well, including many with laws on the books (including California, Texas, and Illinois) requiring wages to be paid in United States currency. Employers that pay wages in cryptocurrencies in such jurisdictions run the risk of violating these state laws. One way to avoid running afoul of the FLSA and other laws is to offer employees the option of having a designated amount of their cash wages from every paycheck automatically converted to cryptocurrency. Another option is to pay wages in cash and reserve any crypto payments for bonuses or other benefits.
Cryptocurrency token options as employee incentive
Beyond wages and benefits, another common means of attracting and retaining talent in the technology sector is the granting of stock awards and options. Companies are now using cryptocurrency in much the same way they use equity as an employee incentive. If a company raises funds using an “initial coin offering” (ICO), it can use its cryptocurrency tokens to incentivize its workforce without diluting its capitalization table. As with stock awards, token awards can be granted to employees outright or can be restricted and subject to a vesting period. Regardless of the manner in which a company decides to grant tokens, it’s important to understand the tax and other legal implications of doing so, and to work with experienced professionals (legal and tax, in particular) when implementing a token award program — or using cryptocurrency as an incentive in any manner.
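Since restricted token awards vest over time just as stock awards do, the mechanics are easy to sketch. The schedule below — four-year monthly vesting with a one-year cliff — is a common equity convention used here purely as an assumption; the function name and numbers are illustrative, not a schedule the article prescribes:

```python
from datetime import date

def vested_tokens(total_grant, grant_date, as_of,
                  cliff_months=12, vest_months=48):
    """Tokens vested under a four-year monthly schedule with a one-year cliff.

    Nothing vests before the cliff; afterwards the grant vests linearly
    by month, mirroring how restricted stock awards are often structured.
    """
    months = (as_of.year - grant_date.year) * 12 + (as_of.month - grant_date.month)
    if months < cliff_months:
        return 0
    return min(total_grant, total_grant * months // vest_months)

# A 48,000-token grant made January 1, 2022, checked 18 months later
print(vested_tokens(48_000, date(2022, 1, 1), date(2023, 7, 1)))  # 18000
```

At the cliff the employee jumps from zero to a quarter of the grant, then accrues 1/48 of it each month until fully vested.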
Proceed with care
A few years ago, many (perhaps most) were still questioning whether cryptocurrency was at best a fad or at worst a scam. What a difference a few years makes. Today, the understanding of the utility of cryptocurrency, including as an employee incentive, is virtually universal in the tech world and is steadily spreading through the broader economy. While using cryptocurrency as a means of attracting and retaining talent poses some legal and tax risks, there are ways to proceed and remain compliant. Companies need to get creative to win today’s war for talent. And crypto as a form of compensation is one way to gain a competitive advantage. Kristen Corpion is the founder of CORPlaw.
DataDecisionMakers
Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own!
"
13749
2022
"How to improve equity in mergers and acquisitions | VentureBeat"
"https://venturebeat.com/datadecisionmakers/how-to-improve-equity-in-mergers-and-acquisitions"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How to improve equity in mergers and acquisitions Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Despite current market conditions and the prospect of shrinking profits, some companies continue to make diversity, equity and inclusion a priority, even as a recession looms. While this is encouraging news, it’s something we have actually noticed for some time now, particularly when it comes to financial services and mergers and acquisitions (M&A). While there is much more progress to be made, new evidence shows a more equitable landscape is emerging. At Exponent’s annual Exchange event, I shared some of the following details that point to the changes across M&A activity. Diversity, equity and inclusion matter in M&A Twenty-two percent of the 600 global deal makers Datasite surveyed reported seeing a deal fall apart in the last year due to diversity, equity and inclusion (DEI) issues uncovered in the due diligence process. 
Several of those surveyed cited HR hiring, advancement and retention policies as the greatest DEI risk to a deal, followed by sexual harassment claims. However, DEI is still not viewed as large a threat as other risks to an M&A deal, and the new research reveals how a company’s culture can impact both its performance and value.
DEI matters in the workplace
DEI doesn’t just matter in the context of a deal, though. It also matters in the context of the workplace that we all inhabit day in and day out — whether virtually or in person. There has been significant progress in the representation of women in dealmaking. In our latest survey, 44% of respondents identified as women, including 49% from the Millennial generation. What’s more, according to our research, while both genders are equally asking for promotions, women are 5% more likely to be offered a promotion and experience faster career progression to the manager level than men. Additionally, more women than men reported getting a base pay raise of 16% or more last year, though overall raises for men and women last year remain unequally distributed.
Work that is still to be done
However, it is not all good news. We also found that more women than men in M&A — 30% compared to 26% respectively — are actively seeking other jobs. Between the competing factors of the current Great Resignation and M&A talent crunch, these percentages can add up quickly. Our research also found that men continue to dominate M&A at the senior manager and executive levels. Additionally, 40% or more of both genders are not seeking a promotion out of concerns about workload and travel. Finally, children and childcare are areas that deserve more attention. Most M&A professionals reported they have children under 18 years of age, including 10% more men than women.
What’s particularly interesting, though, is that more than 50% of both men and women consider themselves the primary caretakers of children 18 years old or younger. During the height of the pandemic, more women than men in M&A — and in many other companies and sectors — reported that they felt burned out as a result of performing more caretaking in their personal lives. Now, however, it seems both genders are managing multiple responsibilities, something dealmaking organizations will want to consider as they seek to retain and nurture talent. What else can dealmaking organizations do to create and support greater equality, both in the context of a deal and the workplace? Here are a few ideas:
Encourage the use of family-friendly benefits by men
Organizations need to encourage men to take advantage of family-friendly policies, including parental leave. Even if it’s offered, men are less likely to use parental leave because of financial costs, gender expectations, or the fear that it may hurt their careers. However, research shows that there are physical, emotional, and financial benefits for men who take parental leave, including the fact that they are more likely to be equal partners in raising their children.
Educate
Global deal makers have said they are unsure of how to show allyship with people from diverse backgrounds, with 20% citing fears about how to engage appropriately as the biggest factor holding them back. To find, foster and elevate M&A talent, managers need to support educational efforts on why inclusivity is important and how to be an ally. For example, we’ve created a learning-oriented culture that fosters openness, empathy, curiosity and adaptability, which improves diversity and inclusion at work. Our DEI council is an employee-led, cross-functional, global team driving DEI across the company.
By including employees in this effort, we hope to create a shared responsibility for furthering a culture where every employee can bring their best self to work each day and offer a space for employees to learn together and from each other, which drives greater collaboration, understanding and belonging.
Embrace flexibility
The pandemic showed us that many activities can be done remotely. We saw this from the perspective of an organization, and through our customers. There are, of course, parts of dealmaking that benefit from in-person meetings, especially when it comes to cultivating new relationships, but virtual dealmaking works.
Start a female genius club
Refer to female colleagues as ‘geniuses.’ The thought behind this is that calling a female colleague a genius in passing conversations and discussions helps build up their credibility and elevates them. Just consider how describing a female colleague as a genius can play out the next time she is being considered for a plum assignment, job, or promotion. It’s a small act that can have a powerful effect. Creating enduring and sustainable value will always be a sound investment strategy. And when it comes to M&A, organizations that prioritize DEI efforts and resources will help drive successful business outcomes. Deb LaMere is the chief human resources officer at Datasite.
"
13750
2022
"The future of work is flexible: Hybrid is here to stay | VentureBeat"
"https://venturebeat.com/datadecisionmakers/the-future-of-work-is-flexible-hybrid-is-here-to-stay"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest The future of work is flexible: Hybrid is here to stay Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A Cisco-commissioned study with MIT Sloan Management Review highlights progress toward the future of work. Now comes an equal challenge: building on its promise. Hybrid work is the most transformative workforce trend in a generation. It promises a more inclusive, flexible, and collaborative future. Yet I fear that we could regress if work cultures fail to evolve at the pace of technology change, and leaders don’t adapt to the evolving “mixed mode” with some people in the office and others at home (or anywhere else they choose). Despite all of its challenges and tragedies, the COVID-19 pandemic unveiled what’s possible with work. Much of the global workforce moved to remote work all but overnight. And despite fears to the contrary, we proved that we can be more productive than ever before. 
But now comes an equally difficult part — building on the knowledge gained these past two years to build a future of work that’s great for everyone, regardless of where they happen to be. To shed some light on current attitudes about work and how leaders can prepare for what’s coming, MIT Sloan Management Review and Webex by Cisco conducted a comprehensive survey. Its 1,561 respondents ranged from corporate directors and C-level executives to supervisors, managers, and individual contributors — all from a variety of industries and spread across 12 countries. The results reveal much about what’s going right — and what isn’t. For starters, 59 percent still consider the ability to work from a place of their own choosing to be a benefit, while only 36 percent believe it’s a given. Seventy-five percent said that working remotely gives them a sense of not being “in the know.” And another 72 percent feared a pay gap between hybrid workers and their in-office counterparts. For us to fully reap the rewards of hybrid work, there should be no difference in where you choose to work. That means seamless, secure technology — in the office and remotely — that dissolves the distance between people. And it means a culture that supports this new paradigm. The emerging mixed mode will be harder to manage than when everyone was in the office — and harder than when everyone was working remotely. So, organizations will need to be more aware of how their physical spaces are laid out as well as the quality of experience for those working from afar. And meetings will need to be better organized and facilitated — to avoid burnout and to ensure that everyone has a voice (not just the loudest person in the physical meeting room).
That means that organizations will need to ensure that no one feels left out because of their geography, language, personality, disposition, or any other differences. Because if anyone feels unable to fully contribute due to any of those differences, that will be a great loss — and a setback for the progress that we’ve made over the past couple of years on this dimension of inclusivity. The good news is that many companies have done a good job of preserving their cultures despite moving to a dispersed workforce. In Cisco’s survey with MIT, a majority said that camaraderie, closeness to the organization, and feelings of inclusion and diversity have improved, or at least stayed the same, since the pandemic began. They also applauded their leaders’ ability to model empathy, work-life balance, and candid discussions. But I think we need to do more. Moving forward, there may be team members who will never even meet face to face. But close human connections will still need to be cultivated, so that interactions and engagements aren’t simply transactional. Leaders need to build relationships and establish trust. And that emotional intelligence needs to be fundamental to company cultures. At the same time, the office still matters. In the study, respondents cited in-office benefits like face-to-face creativity, collaboration, and learning. But the office experience needs to be great to get people to go there. That comes down to more welcoming physical layouts, better technology, and more inclusive, empathetic in-office cultures. The days of rigid hierarchical structures and 9 am-5 pm, Monday through Friday working weeks are gone. People demand the freedom to meet one, two, three, or zero days in the office, if that’s what suits their lifestyle. (And the Great Resignation shows that they mean business.) These changes culminate in a profound evolution in work. But I believe that the full dimension of hybrid work has not yet been completely grasped.
If we get it right, the ultimate benefit will be in ensuring that anyone anywhere can participate in the global economy. Because it’s about leveling the playing field — whether someone is in Silicon Valley or three thousand miles away. We are beginning to see more and more companies evolve in these directions. And the ones that don’t evolve their technology and cultures won’t be able to attract the best talent — or customers. And leaders who can’t manage highly diverse and distributed teams with empathy and open communication will not advance. As culture change evolves, so does technology. I’m excited at how new innovations can help support equal collaboration in a meeting — regardless of who is or isn’t in the office. For example, innovations like background noise reduction and real-time language translations go a long way to foster inclusivity. In the office, intelligent video cameras can zero in on individuals and ensure they are being seen and given a chance to speak. And technology can ensure meetings are more interactive. Looking forward, we can expect a hyper-real 3D experience that will blur the lines between face-to-face and virtual experiences, to the point where the technology itself will be rendered all but invisible. In a few years, we’ll look back at how we once communicated with dozens of two-inch-by-two-inch video images on a square screen and laugh. The progress in augmented reality and holograms will move fast. And no doubt there’s plenty of exciting stuff we haven’t even imagined yet. In short, hybrid work is here to stay, and there’s no going back. So, be sure your technology and culture are ready. Jeetu Patel is EVP and General Manager of Security & Collaboration at Cisco.
"
13751
2022
"3 things working parents want from their employers | VentureBeat"
"https://venturebeat.com/programming-development/3-things-working-parents-want-from-their-employers"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Jobs 3 things working parents want from their employers Share on Facebook Share on X Share on LinkedIn Unicef says it best: “The business case is clear: Investing in family-friendly policies helps improve workforce productivity and a company’s ability to attract, motivate and retain employees.” The agency, part of the United Nations responsible for providing humanitarian and developmental aid to children worldwide, recommends that employers implement a number of strategies to support working parents. These include a minimum of six months paid parental leave, the guarantee that women are not discriminated against, the proper enablement of breastfeeding at work, and supporting access to affordable and quality childcare. Hybrid working Working parents in the U.S. have long been frustrated with what is on offer at their workplaces and while some companies are doing the work to support employees with families, a brighter light has been shone on the issue since the Covid-19 pandemic, which forced so many workers home. 
These days, offices in major American cities are under half as busy as before, according to data from security provider Kastle Systems. According to Gallup data, six in 10 employees with remote-capable jobs want a hybrid work arrangement. About a third prefer fully remote work, and less than 10% want to be in the office. That part of the picture is abundantly clear, but what else do parents really, really want from their employers?
Supportive management
A 2021 survey of 1,500 working parents from family benefits platform Cleo found that 40% of the workforce is made up of parents. With churn already a massive concern across the entire U.S. workforce (in 2021, according to the U.S. Bureau of Labor Statistics, over 47 million Americans voluntarily left their jobs), parents who feel included and supported in their workplace are 41% less likely to leave. Additionally, Cleo’s survey discovered that over a third of parents planning to leave their job are doing so due to a lack of flexibility. Childcare is the benefit parents request most, but less than a fifth of working families have access to it through their employer.
Tailored benefits
Companies that offer additional family and health benefits tailored toward family and childcare will be ahead in the race for the best talent. For example, Adobe supports LGBTQ+ employees with progressive family planning and personal support benefits including same-sex dependent partner healthcare coverage, adoption and surrogacy assistance, and non-birth parent leave of up to 16 weeks. Professional services firm Deloitte has a considered suite of benefits and compensation for its people, with the aim of creating a culture that promotes personal and professional development. It offers (territory dependent) a wide range of programs to support families, among other benefits. These include adoptive/surrogacy leave, parental leave and parents’ leave, as well as foster care and carers’ leave.
The company has also made provisions for how its teams want to work, with options for compressed working weeks in the summer and hybrid working arrangements, which are of such importance and value to parents.

Cisco, too, offers family-friendly benefits. The company's parental leave policy offers paid time off determined not by the gender of the parent, or by which parent gave birth, but by which parent will be the primary caregiver. Grandparents who work there get three days off to help out when a new baby joins the family, too, and the company also offers subsidized child care, paid time off, and insurance.

If you're in the market for a new role, check out the VentureBeat Job Board to see who is hiring; we're showcasing three open roles below.

Senior Software Engineer, Labs, Google, Mountain View

Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Labs is a group focused on incubating early-stage efforts in support of Google's mission to organize the world's information and make it universally accessible and useful. The Senior Software Engineer will use their technical expertise to manage project priorities, deadlines, and deliverables. You will design, develop, test, deploy, maintain, and enhance software solutions. You will need a Bachelor's degree or equivalent practical experience, plus five years of experience with software development in one or more programming languages and with data structures/algorithms. Get the full job description here.

Regional Critical Environment Program Manager, Microsoft, Redmond

Microsoft's Cloud Operations & Innovation (CO+I) is the engine that powers cloud services. You will perform a key role in delivering and managing the critical environment infrastructure and foundational technologies for Microsoft's online services, including Bing, Office 365, Xbox, OneDrive, and the Microsoft Azure platform.
As a successful Regional CE Program Manager (CEPM), your performance objectives will include providing direction, guidance, and oversight on projects and programs, and owning and driving resolution of identified gaps in CE programs to allow data centers to achieve more. Get all the application criteria here.

Software Engineer II — Ads Optimization Team, Indeed, Pittsburgh

This Software Engineer II role will join Indeed's Ads Optimization Team, which builds large-scale pipelines, back-end services, and advanced models/algorithms. You'll design, develop, and maintain pipelines and services, and implement efficient algorithms. You will also continually improve search quality and performance and code innovative tools to support rapid experimentation and learning. To apply, you will need a BS or MS in computer science, engineering, mathematics, physics, or a related field, as well as four-plus years of production-level software engineering experience and proficiency with high-level object-oriented programming languages such as Java, Python, Kotlin, Go, or similar. Find out more about the Software Engineer II role here.

Explore and bookmark the VentureBeat Job Board now to find your perfect tech role.

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. © 2023 VentureBeat. All rights reserved. "
13,752
2,022
"What is cybersecurity? Definition, importance, threats and best practices | VentureBeat"
"https://venturebeat.com/security/what-is-cybersecurity-definition-importance-threats-and-best-practices"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What is cybersecurity? Definition, importance, threats and best practices Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Table of contents What is cybersecurity? Why is cybersecurity important? Top 5 cybersecurity threats to manage Top 10 best practices for cybersecurity in 2022 Cybersecurity is essential for everyone Cybersecurity has become a central issue as digital technologies play a bigger role in everyone’s lives. Headlines about cybercrime populate the news, but what is cybersecurity and why is it important? Here’s everything you need to know: What is cybersecurity? Cybersecurity is the practice of protecting networks, devices and data from damage, loss or unauthorized access. Just as physical security protects buildings and the people in them from various physical threats, cybersecurity safeguards digital technologies and their users from digital dangers. 
Cybersecurity is a broad topic, covering many different disciplines, actions, threats and ideas. However, these parts come back to the same idea: protecting people's digital lives and assets. Things like digital currency, data and access to some computers are valuable targets for criminals, so protecting them is crucial.

Think of how many different things today use digital technologies and data. It's a massive category, so there are various types of cybersecurity, too. Here are a few examples:

- Network security: Protects computer networks, like home Wi-Fi or a business's network, from threats
- Application security: Ensures programs and apps repel hackers and keep users' data private
- Cloud security: Focuses on the cloud, where users and businesses store data and run apps online using remote data centers
- Information security: Focuses on keeping sensitive data safe and private
- Endpoint security: Secures devices like computers, phones or Internet of Things (IoT) gadgets to ensure they don't become a way to get into other devices or data on a network

These cybersecurity examples are far from the only types, but they're some of the biggest. As the field grows, many smaller, more specialized subcategories emerge. All these smaller considerations combine to create an organization's overall cybersecurity.

Why is cybersecurity important?

Cybersecurity is vital because digital assets are valuable and vulnerable. With so much of daily life online, from bank account access to names and addresses, cybercrime can make lots of money and cause untold damage. Cybersecurity is also important because of how common cybercrime is. In 2019, 32% of businesses identified cyberattacks or other security breaches, and that doesn't account for those who were infiltrated without realizing it. That figure has also only increased.
Big corporations with lots of valuable data aren't the only targets, either. Security breaches happen to small businesses, too, and even to random individuals. Cybersecurity is so important because everyone could be a victim.

Top 5 cybersecurity threats to manage

Just as there are many types of cybersecurity, there are multiple cybersecurity threats. Here's a look at some of the most common and dangerous ones facing businesses and individuals today.

1. Malware

Malware is one of the most common types of cybersecurity threats, despite a steady decline over the past few years. It's short for "malicious software" and is a broad category covering programs and lines of code that cause damage or provide unauthorized access. Viruses, trojans, spyware and ransomware are all types of malware. These can be as insignificant as placing unwanted pop-ups on a computer or as dangerous as stealing sensitive files and sending them somewhere else.

2. Phishing

While malware relies on technical factors to cause damage, phishing targets human vulnerabilities. These attacks involve tricking someone into giving away sensitive information or clicking on something that will install malware on their device. They're often the starting point for a larger, more damaging attack. Phishing often comes in the form of emails in which cybercriminals pose as authority figures or have enticing news. These messages often appeal to people's fears or desires to get them to act quickly without thinking. For example, many say the users are prize-winners or in trouble with the law.

3. Insider threats

While most cybersecurity threats come from outside an organization, some of the most dangerous come from within. Insider threats happen when someone with authorized access, like an employee, threatens a system, intentionally or not. Many insider threats are non-malicious.
This happens when an authorized user becomes a phishing victim or accidentally posts on the wrong account, unintentionally endangering a system. Others may act on purpose, like a disgruntled ex-employee taking revenge on their former employer by installing malware on their computers.

4. Man-in-the-middle attacks

Man-in-the-middle (MITM) attacks are a form of eavesdropping, where cybercriminals intercept data as it travels between points. Instead of stealing this information in the traditional sense, they copy it and let it continue to its intended destination. Consequently, it may look like nothing took place at all. MITM attacks can happen through malware, fake websites and even compromised Wi-Fi networks. While they may not be as common as other threats, they're dangerous because they're hard to detect. A user could enter personal information into a hijacked website form and not realize it until it's too late.

5. Botnets

Botnets are another common type of cybersecurity threat. These are networks of multiple infected computers, letting one threat actor attack using many devices at once. This often takes the form of distributed denial-of-service (DDoS) attacks, where attackers crash a system by overloading it with requests. Botnet attacks have seen a massive jump recently. In June 2021, 51% of organizations had detected botnet activity on their networks, up from 35% just six months earlier. Large-scale DDoS attacks can also cause massive damage, shutting down critical systems for hours or even days.

Top 10 best practices for cybersecurity in 2022

Cybercrime isn't just a broad category, but a growing one. These threats cost the world $6 trillion in 2021, and experts say that figure will rise by 15% annually for the next five years. Amid these rising threats, cybersecurity best practices become all the more important. Here are 10 of the best cybersecurity practices for businesses, employees and consumers.

1. Use anti-malware software

One of the most important cybersecurity best practices is to install anti-malware software. The market is full of antivirus programs and services that can help people with any budget. Best of all, these programs automate malware detection and prevention, so you don't have to be an expert to stay safe. Many cybersecurity threats start as malware, so this software can stop various attacks. Anti-malware programs also update regularly, which helps them stay on top of new attack methods. Considering how easy these are to use and how crucial they are, there's no reason to avoid them.

2. Use strong, varied passwords

Another crucial cybersecurity step is to use strong passwords. Most hacking-related data breaches stem from weak passwords, which are easy to avoid. Cracking a 12-character password takes 62 trillion times longer than a six-character one. Passwords should be long and contain numbers, symbols and varying letter cases. It's also important to avoid using the same one for multiple accounts, as that lets a hacker into more places with one breached password. Changing them every few months can also minimize risks.

3. Enable multifactor authentication

Sometimes, a strong password isn't enough. That's why enabling multifactor authentication (MFA) is another essential cybersecurity best practice for employees and general users. MFA is quick to set up, easy to use and can stop nearly all attacks, according to some experts. MFA adds another step to the login process, most often a one-time code sent to a user's phone. Some MFA options are more advanced, like facial recognition services or fingerprint scanners. While these features may not see as much use as they should, they're available on most internet services.

4. Verify before trusting

It's important to verify security, since cybersecurity threats often don't seem suspicious at first glance. Before clicking a link or responding to an email, inspect it more carefully.
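Circling back to practice 2: the password advice is ultimately arithmetic. A brute-force attacker must try up to charset_size ** length combinations, so every added character multiplies the work by the size of the character set. A minimal sketch (the 62-character set here is just letters plus digits; exact multipliers, such as the "62 trillion" figure cited above, depend on the character set assumed):

```python
import string

# 26 lowercase + 26 uppercase + 10 digits = 62 candidate characters
CHARSET = len(string.ascii_letters + string.digits)

def search_space(length: int, chars: int = CHARSET) -> int:
    """Worst-case number of guesses for a brute-force attack."""
    return chars ** length

# Going from 6 to 12 characters multiplies the attacker's work by 62**6,
# roughly 5.7e10, even for this modest character set.
ratio = search_space(12) // search_space(6)
```

Symbols and a larger character set push the exponent base higher still, which is why both length and variety matter.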
A message could be a trap if it contains spelling errors or unusual language, is strangely urgent, or just seems off. The same principle applies to internet networks, devices and applications. Never trust public Wi-Fi, because anyone could use it to perform MITM attacks. Similarly, always check to make sure a program's developer is trustworthy before downloading and installing it. Companies should apply this to business partners, too.

5. Update frequently

Cybersecurity is a dynamic field. Criminals are always coming up with new ways to attack targets, and cybersecurity tools adapt in response. That means it's crucial to update all software regularly. Otherwise, users could be vulnerable to a weak point that app developers have already patched. Some of the most infamous cybersecurity breach examples have happened because of outdated software. In 2019, the United Nations tried to hide a data breach that exploited a vulnerability a current software update would have patched. This is a critical cybersecurity best practice for businesses, which may be bigger targets.

6. Encrypt where possible

One more technical cybersecurity step is to encrypt sensitive data. Encryption makes information unreadable to anyone apart from its intended audience by scrambling it and giving authorized users a key to unscramble it. This doesn't stop data breaches, but it makes them less impactful. If a cybercriminal can't read or understand data, it's useless to them, making someone a less enticing target. It also ensures that any sensitive information that leaks will stay private. Using multiple encryption types, such as end-to-end and at-rest encryption, keeps information extra safe.

7. Segment networks

An important security best practice for businesses is to segment their networks. This involves running devices and storing data on different networks to ensure a breach in one area can't provide access to everything else. This step is especially critical for large IoT networks.
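As a brief aside on practice 3 (multifactor authentication): the rotating six-digit codes most authenticator apps display are typically HOTP/TOTP values (RFC 4226 and RFC 6238), which need nothing beyond a shared secret and an HMAC. A minimal HOTP sketch using only the standard library (the secret in the usage note is the RFC 4226 test key, not something to use in production):

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# TOTP simply feeds the current 30-second interval as the counter:
#   hotp(secret, int(time.time()) // 30)
```

With the RFC 4226 test secret b"12345678901234567890", counter 0 yields "755224", matching the specification's first test vector.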
This mostly applies to organizations, but individual users can take this step, too. Running smart home devices on a different network than work or home computers is a good idea. That way, a smart TV, which is easier to hack into, doesn't become a doorway to more sensitive data.

8. Create backups of sensitive files

It's also crucial to back up any sensitive data or programs. This won't prevent a cyberattack, but it will minimize the damage. Stolen data or downed systems aren't as pressing if you have extra copies you can use. With cybercrime as rampant as it is, it's unsafe to assume someone will never be the target of a successful breach. More than half of all consumers have been the victim of cybercrime. Since no defense is perfect, ensuring a hack won't be crippling is essential.

9. Stay informed and tell others

Despite how massive a problem cybercrime is, many people don't know cybersecurity best practices. Many simple steps can be effective. It's just a matter of knowing what risks are out there and what to do about them. Consequently, staying informed is half the battle. This is an especially important cybersecurity best practice for employees. Businesses should train all workers on things like strong password management and how to spot a phishing attempt. Holding these meetings regularly can help companies stay on top of emerging threats and remain safe despite a changing landscape.

10. Review security steps regularly

Every user and company should understand that today's best practices may not work tomorrow. Cybersecurity is a continually evolving field, so it's important to review defenses to ensure they're still reliable. Without regular reviews, people could be vulnerable and not realize it. Businesses can perform penetration testing, where a cybersecurity expert tries to break into their systems to reveal their weak points. Consumers can read up on the latest cybersecurity news to see what new steps they may need to take.
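Practice 8 (backups) is also easy to automate. A minimal sketch using only Python's standard library, which zips a directory under a timestamped name; the paths here are throwaway temp directories purely for illustration:

```python
import pathlib
import shutil
import tempfile
import time

def backup(src_dir: str, dest_dir: str) -> str:
    """Zip src_dir into dest_dir under a timestamped name; return the archive path."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    base = pathlib.Path(dest_dir) / f"backup-{stamp}"
    return shutil.make_archive(str(base), "zip", src_dir)

# Demo on a throwaway directory with one file in it.
src = tempfile.mkdtemp()
pathlib.Path(src, "notes.txt").write_text("sensitive data")
archive = backup(src, tempfile.mkdtemp())
```

Scheduling a script like this (cron, Task Scheduler) and copying the archives off-machine is what turns "extra copies" from a good intention into an actual defense.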
The worst thing you can be is complacent.

Cybersecurity is essential for everyone

After learning what cybersecurity is and why it's important, it's easy to see why it's in such high demand. This can be a complicated topic, but it's essential. Everyone, from the world's most powerful CEOs to casual Twitter users, should understand the importance of cybersecurity. These cybersecurity examples are just a sampling of the threats and defense steps out there today. Understanding these basics is the first step to staying safe in today's digital world. "
13,753
2,022
"Building and delivering software in a hybrid workplace  | VentureBeat"
"https://venturebeat.com/virtual/building-and-delivering-software-in-a-hybrid-workplace"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Building and delivering software in a hybrid workplace Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. When the pandemic hit, many companies were left to figure out how they could have productive teams with a distributed workforce. Software development teams were no exception. Now, over two years later, as more companies start solidifying their future work plans, it’s becoming more apparent that remote work is here to stay , in both fully remote and hybrid at-home and in-office forms. Examining the last two years, we have seen that the ability to build products using agile methodologies — a very collaborative feat — is possible even when teams are remote. 
So for founders and product and engineering leaders who are evaluating what building your company's product and apps will look like in the so-called New Normal, here's what I've learned in the past year and a half from consulting organizations that have built and brought to market new apps remotely.

How to build a better hybrid workplace

Choosing facilitators for meetings: Meetings via Zoom and other technologies require more work and preparation than in-person standups in a conference room. It's useful to pick a facilitator upfront. That person should prep for that meeting and not wing it. There should be a defined agenda, and it should be shared in advance.

Have feedback loops to assess value: At the end of meetings, there should be time dedicated to sharing feedback to examine whether the work you are doing is serving you, or whether you should be doing something else. The Plus/Delta evaluation process is a tool for impromptu evaluations, and because we all know time is money, it's helpful to use ROTI (return on time invested), too.

Set up a "virtual persistent office" for the team: Use virtual meeting platforms like Discord, Zoom Meetings, and other technologies that let your teams see and hear each other (rather than just read their communications). These apps allow for immediate collaboration for answering questions or working through a problem together, just as you can in person. Coworkers no longer have to read each others' calendars to determine if someone can collaborate on something that just came up.

Establish core working hours: Strive for core, overlapping hours when the entire team is available to work. This is especially important when your hybrid workforce is spread across multiple time zones. Schedule team meetings and plan collaboration during these core hours.
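The core-hours idea can be made concrete: the overlap is just the latest local start and the earliest local end, converted to a common clock. A sketch using Python's zoneinfo, with a hypothetical team in New York, London, and Berlin, each working 09:00-17:00 local time:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Hypothetical team spread; each member works 09:00-17:00 local time.
ZONES = ["America/New_York", "Europe/London", "Europe/Berlin"]

def workday_utc(zone: str, day: datetime) -> tuple[datetime, datetime]:
    """Return one member's working window for `day`, expressed in UTC."""
    tz = ZoneInfo(zone)
    start = day.replace(hour=9, minute=0, tzinfo=tz).astimezone(timezone.utc)
    end = day.replace(hour=17, minute=0, tzinfo=tz).astimezone(timezone.utc)
    return start, end

day = datetime(2022, 6, 1)
windows = [workday_utc(z, day) for z in ZONES]
core_start = max(s for s, _ in windows)  # latest local start
core_end = min(e for _, e in windows)    # earliest local end
```

On 2022-06-01 this leaves a two-hour core window, 13:00 to 15:00 UTC, a reminder of how thin the overlap gets once a team spans the Atlantic.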
Establishing core hours will optimize work time on products and give engineers room to be hands-on with the keyboard. Then aim to schedule solo activities, one-to-ones and other activities outside of those core hours.

Provide detailed documentation: Embrace the use of digital whiteboards like Miro that have sticky-notes features to facilitate remote collaboration and offer an easily referenceable record for newcomers. This helps teams replicate the value of an in-person workshop with a digital whiteboard that everyone can reference individually, keeping all your work in one place. Also, encourage team members to create Personal User Manuals. These online documents can be used to learn each others' personal preferences, values, and habits. The goal is to understand one another better so everyone can work together better on an ongoing basis. Knowing how each person ticks will help avoid possible obstacles and ensure stronger working relationships.

The hurdles of hybrid work

Pairing can be a challenge: Since many hybrid teams spend a great deal of time video-chatting and screen-sharing with teammates for extended, intense periods, it can be difficult to pull people away so they can stay connected with each other socially. That's where core hours and team activities can help, along with innovative techniques such as remote pair programming.

Having too many meetings: Finding time to do heads-down work now that there are more meetings can be difficult. With a hybrid workforce, companies need to be intentional about meetings, such as specifying agendas and goals and providing reference materials in invites. We've found it helpful to gather feedback about recurring meetings; if they are ineffective, consider canceling or changing future occurrences.

Understanding norms and culture: At the start of the pandemic, remote onboarding was tricky, even painful. But, of course, this wasn't the fault of any new team member.
It was just the natural outcome of not knowing each others' work styles and who reported to whom. Now it is imperative to develop in-depth onboarding guides, not only for your company but for specific projects as well.

The future of software development for a hybrid workforce

As we get back to work, many of us won't have regular access to that office water cooler anymore. Nonetheless, we can remain connected at the workplace as long as we are open to new techniques and technologies. We must be intentional about how we work. Once some (but not all) are in the office, we should ensure all coworkers are treated equally. Remote workers should be included in meetings and assignments — "out of sight" should not mean "out of mind," or worse yet, "assuming the worst." Above all, companies need to maintain a philosophy of "one remote, all remote." That means not privileging those colocated with information unavailable anywhere else. If a question is asked in a Slack channel, rather than answering that question in person, it should be answered in that Slack channel so everyone benefits from your information.

Be sure to take breaks together. Where we once might have played ping-pong, now we can use online tools such as card games, trivia games and murder mysteries. We try to do these things as a team to learn each others' work styles and create a better working environment. Of course, creating a New Normal workplace is an iterative process. After a while, hold a retrospective — have an honest discussion with your team about what worked and what didn't. During that process, perhaps you'll discover a way of working that suits your company and culture in ways you hadn't imagined.

Joe Moore is a Sr. Staff Engineer and Consultant at VMware Tanzu Labs.

DataDecisionMakers

Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.
If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! "
13,754
2,021
"Remote work is boosting productivity, study finds | VentureBeat"
"https://venturebeat.com/2021/09/15/working-away-from-traditional-offices-will-become-the-new-norm"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Remote work is boosting productivity, study finds Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The pandemic is changing tech priorities for many companies, according to a new report from independent research firm Omdia. Enterprises rated cybersecurity and hybrid work as the top initiatives at their organizations, with customer experience, business processes, and better empowering frontline workers following close behind. Omdia’s Future of Work survey, which compiled over 300 responses from executives at large companies, implies that working away from traditional offices will become the new norm. Fifty-eight percent of respondents said they’ll either be primarily home-based or adopt a hybrid work style, while 68% of enterprises believe employee productivity has improved since the move to remote work. 
The report agrees with the conclusions of a recent Stanford study that found working from home increased productivity among a group of 16,000 workers by 13% over the course of nine months. Attrition rates were also cut by 50%, with employees citing a quieter, more convenient working environment as a major advantage. "The world of work has undergone significant change due to the disruptions brought about by the pandemic," Omdia principal analyst Adam Holtby said in a press release. "Our research shows that people, process, place, and technology transformation are the foundation upon which successful digital workplace ecosystems are created."

Challenges

Not all employers believe remote staff are more productive than in-office workers, however. In an Upwork survey, 22.5% of managers said productivity had decreased at their company since staffers started working from home in 2020. And in a meta-analysis by researchers at Chicago Booth and the University of Essex, productivity at a large IT-services company decreased by 20% after employees began working from home. Beyond impacting productivity, remote and hybrid work shifts can introduce new, sometimes unforeseen security problems. In a report from HP Wolf Security, 83% of IT teams said the increase in home workers has created a "ticking time bomb" for a corporate network breach. Eighty-three percent, moreover, believe that enforcing corporate policies around cybersecurity is impossible now that the lines between personal and professional lives are so blurred. Regardless of where they're working, employees depend on organizations' willingness to invest in optimizing the value from processes and technologies — as well as reinventing their business models.
When it comes to the digital workplace, businesses need to work with partners to help select the most appropriate IT architecture and technologies, understand the impact of digitalization on their industry, and align business and digital strategies, according to Omdia. By the same token, 39% of executives in a report said their companies will get the most value from digital transformation initiatives — for example, embracing ecommerce — in the next three to five years. In an encouraging piece of news, a recent Deloitte survey found that 87% of companies think digital will disrupt their industry in a positive way, and 44% feel they're prepared for a potential disruption. "An 'anywhere workforce' is reliant on a diverse set of capabilities, and this new normal is presenting new challenges," Holtby added. "Businesses must now reinvent themselves to create productive, safe and empowering environments for their employees. It's time to focus less on work location and more on employee experience." "
13,755
2,021
"New tools unveiled for collaboration across Teams and Microsoft 365 | VentureBeat"
"https://venturebeat.com/2021/11/02/new-tools-unveiled-for-collaboration-across-teams-and-microsoft-365"
"New tools unveiled for collaboration across Teams and Microsoft 365 At Ignite 2021 , beyond spotlight features like Loop and Context IQ , Microsoft announced enhancements to services across its Microsoft 365 product families. A new JavaScript API in Microsoft Excel allows developers to create custom data types, and Microsoft Forms Collection — which allows customers to manage an archive of forms — is now generally available. There’s also an upgraded presentation recording experience in Microsoft PowerPoint and Smart Alerts, an Outlook capability that enables developers to validate content before a user sends an email or appointment. Millions of employees have transitioned to remote or hybrid work — either permanently or temporarily — during the pandemic. 
Against this backdrop, organizations have increased investments in project management software to support collaboration in the absence of physical workspaces. The worldwide market for social software and collaboration in the workplace is expected to grow from an estimated $2.7 billion in 2018 to $4.8 billion by 2023, nearly doubling in size, according to Gartner. Teams On the Teams side, Teams Connect — Microsoft’s answer to Slack Connect , which similarly allows users to chat with people outside their organizations in shared channels — will be updated in preview starting early 2022 to allow users to (1) schedule a shared channel meeting, (2) use Microsoft apps, and (3) share each channel with up to 50 teams and unlimited organizations. With cross-tenant access settings in Azure AD, admins will be able to configure specific privacy relationships for external collaboration with different enterprise organizations. Available by the end of 2021, Chat with Teams personal account users — a new capability — will “extend collaboration support by enabling Teams users to chat with team members outside their work network with a Teams personal account,” Microsoft says. With the enhanced Chat with Teams, customers will be able to invite any Teams user to a chat using an email address or phone number. With any luck, the upgraded Chat with Teams will avoid the fate that befell the expanded Slack Connect at its debut. In March, Slack rolled back a feature that let anyone in the world with a paid Slack account send a direct message request to other Slack users — even if they didn’t have a paid account. While Connect direct messages were opt-in, users making the invitations could include a message of up to 560 characters to recipients, which Slack emailed to them. 
Users who received abusive and threatening messages couldn’t easily block specific senders because Slack sent the notifications from a generalized inbox. For its part, Microsoft says that the Chat with Teams experience will “remain within the security and compliance policies [and] boundaries of [organizations.]” Teams Rooms In 2019, Skype Room Systems , Microsoft’s multivendor conference room control solution, was rebranded as Microsoft Teams Rooms with capabilities aimed at simplifying in-person meetings. New features include the expansion of direct guest join to BlueJeans and GoToMeeting (expected in the first half of 2022), which allows Teams users to join meetings hosted on other meeting platforms from a Teams Room. By 2022, Teams Rooms customers will be able to manage Surface Hubs from the Teams admin center alongside other Teams devices, as well as use compatible Teams panels to check into a room, see occupancy analytics, and set the room to release if no one’s checked in after a certain amount of time. Teams apps and chat In other Teams news, new apps from partners including Atlassian’s Jira Cloud and SAP Sales & Service Core will enable Teams users to engage “more collaboratively” across chat, channels, and meetings. Software-as-a-service (SaaS) apps using Teams components can embed functionality like chat connectivity in Dynamics 365 and Power Apps, while Azure Communications Services Teams interoperability — which can be used to build apps that interact with Teams — will soon be available. Several improvements in the Teams admin center make it easier to navigate and simplify IT management, according to Microsoft. Now, admins can search for any function and use the redesigned Teams App store — launching later this month — along with an app discovery tool to view apps by category, see additional app details, and give users a streamlined way to request apps. 
Other IT management features now in preview include a new dashboard with customizable views of device usage metrics with insights, troubleshooting tips, suggested actions, proactive alerts, and the ability to download and share reports. A new workspace view provides data for all devices in a specific physical location, such as all the Teams displays in a particular building. And priority account notification enables IT admins to specify priority users, so they can monitor experiences with device alerts and post-call quality metrics. For users, there are new features like “chat with self” (which enables them to send themselves a message) and a “chat density” feature that lets users customize the number of chat messages they see on the screen. The new compact mode fits 50% more messages on the screen than before. Elsewhere, Teams now features over 800 3D emojis and the ability to delay the delivery of messages until a specific time, as well as a new search results UI. The upgrades will roll out between now and early 2022. Webinars and events Alongside the other Teams updates are webinar- and events-focused features including virtual green room (available in preview in early 2022), which enables organizers and presenters to socialize, monitor the chat and Q&A, manage attendee settings, and share content before the event starts. Virtual green room arrives alongside enhanced controls for managing what attendees see during an event (available by the end of the year), and a Q&A set of functions (in preview this month) that let organizers and presenters mark best answers, filter responses, moderate, dismiss questions, and pin posts, such as a welcome message. Co-organizer (generally available by the end of the year) allows an event organizer to assign up to ten different co-organizers, who have the same capabilities and permissions as the organizer. 
As for isolated audio feed (in preview this month), it enables producers to create an audio mix using isolated feeds from each individual. In a related development, event and hospitality management platform Cvent is now integrated with Teams, enabling customers to use it to manage the event lifecycle — including registration and agenda management. API and more The latest JavaScript API for Microsoft Office, generally available in Microsoft Excel later this month, gives developers the ability to create their own custom data types including images, entities, and formatted number values. Users will be able to build their own add-ins and extend previously existing ones to capitalize on data types, resulting in what Microsoft calls “a more integrated experience within Excel.” The aforementioned Forms Collection, which is also making its debut today, allows customers to create and manage an online archive for their forms and quizzes in Microsoft Forms without leaving the site. As for Smart Alerts (in preview), it can be used in conjunction with event-based add-in extensions to perform logic while users accomplish tasks in Outlook, like creating or replying to emails. "
13,756
2,021
"Zero trust network access should be on every CISO's SASE roadmap | VentureBeat"
"https://venturebeat.com/2021/12/07/zero-trust-network-access-should-be-on-every-cisos-sase-roadmap"
"Zero trust network access should be on every CISO’s SASE roadmap Secure Access Service Edge (SASE) solutions close network cybersecurity gaps so enterprises can secure and simplify access to resources that users need at scale from any location. Closing the gaps between network infrastructures and supporting technologies helps streamline trusted real-time user authentication and access, which is essential for growing digital businesses. Zero Trust Network Access (ZTNA) is core to the SASE framework because it’s designed to flexibly define a personalized security perimeter for each individual. It’s also needed for getting real-time integration and more trusted, secure endpoints across an enterprise. Ninety-eight percent of chief information security officers (CISOs) see clear benefits in SASE and are committed to directing future spending towards it, according to Cisco Investments. 
In fact, 55% of CISOs interviewed by Cisco say they intend to prioritize 25% to 75% of their future IT security budget on SASE. Additionally, 42% of CISOs said that ZTNA is their top spending priority within SASE initiatives. The finding highlights how closing network infrastructure and cybersecurity gaps is essential for enabling digitally-driven revenue growth. Above: Cisco Investments’ recent survey of CISOs finds that ZTNA dominates the spending priorities of those enterprises investing in Secure Access Service Edge (SASE) technologies this year. What is SASE? Gartner defines SASE “as an emerging offering combining comprehensive WAN capabilities with comprehensive network security functions (such as SWG, CASB, FWaaS, and ZTNA) to support the dynamic, secure access needs of digital enterprises” that is delivered as a cloud-based service. Esmond Kane, CISO of Steward Health, says to “understand that – at its core – SASE is zero trust. We’re talking about things like identity, authentication, access control, and privilege. Start there and then build out.” Gartner’s clients want to define identities as the new security perimeter and need better integration between networks and cybersecurity to achieve that. The SASE framework was created based on the momentum Gartner is seeing in the growing number of client inquiries focused on adapting existing infrastructure to better support digitally-driven ventures. Since publishing the initial research, the percentage of end-user inquiries mentioning SASE grew from 3% to 15% when comparing the same period in 2019 to 2020. Integrating Network-as-a-Service and Network Security-as-a-Service to create a unified SASE platform delivers real-time data and insights and defines every identity as a new security perimeter. 
In short, unifying networks and security strengthens a ZTNA approach that has the potential to scale across every customer, employee, supplier, and service touchpoint. The goal is to provide every user and location with secure, low latency access to the web, cloud, and premises-based resources comparable to the corporate headquarters’ experience. Above: Enterprises realize customer and employee identities are the new security perimeter and prioritize ZTNA as a core part of their SASE architectures, with the simplified example shown here. What needs to be on CISO roadmaps in 2022 Enterprise networks and the identities that use them represent the greatest cybersecurity risk to any business. Sixty percent of CISOs believe their networks and the devices on them are the most difficult assets to manage and protect, according to Cisco Investments’ survey. In addition, many CISOs told Cisco that shadow IT isn’t going away, and apps, data, and endpoints are proliferating in response to greater reliance on digital business models. CISOs are going to need the following on their roadmaps in 2022 to succeed at integrating network infrastructure and cybersecurity, securing every customer identity while enabling real-time integration: Implement ZTNA as a core part of the SASE roadmap to replace VPNs first. Starting with replacing VPNs creates scale to secure all users regardless of location. The Cisco Investments survey implies that selecting a vendor with an integrated ZTNA component within its SASE platform is critical to getting the most from a SASE initiative. ZTNA enables organizations to implement a least-privileged access approach that provides real-time security and visibility to every user-device-application interaction, making identity effectively the new perimeter. Ericom’s ZTEdge cloud is the only provider that has done this with a platform designed specifically for mid-tier organizations, replacing VPNs globally. 
What’s noteworthy about the ZTEdge platform is how it’s been engineered as a single unified cloud-first platform for mid-tier organizations, yet also provides microsegmentation, Zero Trust Network Access (ZTNA), Secure Web Gateway (SWG) with remote browser isolation (RBI), Cloud Firewall, and ML-enabled identity and access management (IAM). Strengthening SASE platforms through acquisition is a dominant strategy industry leaders are pursuing to become competitive more quickly in enterprises. Examples include Cisco acquiring Portshift, Palo Alto Networks acquiring CloudGenix, Fortinet acquiring OPAQ, Ivanti acquiring MobileIron and PulseSecure, Check Point Software Technologies acquiring Odo Security, ZScaler acquiring Edgewise Networks, and Absolute Software acquiring NetMotion. “One of the key trends emerging from the pandemic has been the broad rethinking of how to provide network and security services to distributed workforces,” said Garrett Bekker, senior research analyst, security at 451 Research, in his recent note, Another day, another SASE fueled deal as Absolute picks up NetMotion. Bekker continues, writing that “this shift in thinking, in turn, has fueled interest in zero-trust network access (ZTNA) and secure access service edge.” Real-time network activity monitoring combined with Zero Trust Network Access (ZTNA) access privilege rights specified to the role level are essential for a SASE architecture to work. While Gartner lists ZTNA as one of many components in its Network Security-as-a-service, it is a key technology in delivering on the concept of treating every identity as the new security perimeter. ZTNA governs whether any given device, location, or session can access application and network resources, enabling a true zero trust-based approach of granting least-privileged access to work. Vendors claiming to have a true SASE architecture need to have this for the entire strategy to scale. 
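None of the vendors above publish their policy engines, but the least-privileged access model that ZTNA implements can be sketched in a few lines of Python. Everything here is hypothetical and invented for illustration (the roles, resources, and grant table are not from any product): the key property is deny-by-default, with access granted only when a role explicitly allows the resource and the device passes a trust check.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    role: str
    device_trusted: bool  # result of a device posture check
    resource: str

# Hypothetical role-to-resource grants. Least privilege means nothing
# is reachable unless a rule explicitly allows it.
GRANTS = {
    "engineer": {"git", "ci"},
    "finance": {"erp"},
}

def authorize(req: AccessRequest) -> bool:
    """Deny by default; evaluated per request, with no implicit trust."""
    if not req.device_trusted:
        return False
    return req.resource in GRANTS.get(req.role, set())
```

Because the check runs per request rather than per network session, a stolen credential on an untrusted device still fails the posture test, which is what makes identity, rather than network location, the effective perimeter.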
Leaders delivering a true SASE architecture today include Absolute Software, Check Point Software Technologies, Cisco, Ericom, Fortinet, Ivanti, Palo Alto Networks, ZScaler, and others. Ivanti Neurons for Secure Access’ approach is unique in how its cloud-based management technology is designed to provide enterprises with what they need to modernize VPN deployments and converge secure access for private and internet apps. What’s noteworthy about their innovations in cloud management technology is how Ivanti provides a cloud-based single view of all gateways, users, devices, and activities in real-time, helping to alleviate the risk of breaches from stolen identities and internal user actions. The following graphic illustrates the SASE Identity-Centric architecture as defined by Gartner: Above: Identities, access credentials, and roles are at the center of SASE, supported by a broad spectrum of technologies shown in the circular graphic above. Real-time Asset Management spanning all endpoints and datacenters. Discovering and identifying network equipment, endpoints, related assets, and associated contracts leads CISOs to rely more on IT asset management systems and platforms to know what’s on their network. Vendors combining bot-based asset discovery with AI and machine learning (ML) algorithms provide stepwise gains in IT asset management accuracy and monitoring. Ivanti’s Neurons for Discovery is an example of how bot-based asset discovery is combined with AI & ML to provide detailed, real-time service maps of network segments or an entire infrastructure. In addition, normalized hardware and software inventory data and software usage information are fed in real time into configuration management and asset management databases. Leaders in this area also include Absolute Software, Atlassian, BMC, Freshworks, ManageEngine, MicroFocus, ServiceNow, and others. APIs that enable legacy on-premise, cloud & web-based apps to integrate with SASE. 
Poorly designed APIs are becoming one of the leading causes of attacks and breaches today as cybercriminals become more sophisticated at identifying security gaps. APIs are the glue that keeps SASE frameworks scaling in many enterprises, however. Each newly implemented set of APIs risks becoming a new threat vector for an enterprise. API threat protection technologies, in some cases, can scale across entire enterprises. However, adding API security to a roadmap isn’t enough. CISOs need to define API management and web application firewalls to secure APIs while protecting privileged access credentials and identity infrastructure data. CISOs also need to consider how their teams can identify the threats in hidden APIs and document API use levels and trends. Finally, there needs to be a strong focus on API security testing and a distributed enforcement model to protect APIs across the entire infrastructure. SASE frameworks will bolster the future of enterprise security ZTNA is core to the future of enterprise cybersecurity and, given that it needs to interact with other components of the SASE framework to deliver on its promise, it should ideally share the same codebase across an entire SASE platform. Whether it’s Ericom’s ZTEdge platform designed to meet mid-tier organizations’ specific requirements, or the many mergers, acquisitions, and private equity investments into SASE players aimed at selling SASE into the enterprise, getting ZTNA right has to be the priority. For CISOs, the highest priority must be accelerating ZTNA adoption to reduce dependence on vulnerable VPNs that hackers are targeting. ZTNA immediately boosts protection by securing every identity and endpoint, treating them as a continuously changing security perimeter of any business. SASE is achieving the goal of closing the gaps between network-as-a-service and network security-as-a-service, improving network speed, security and scale. 
The bottom line is that getting SASE right significantly improves the chance that digital transformation strategies and initiatives will succeed, and getting SASE right starts with getting ZTNA right. "
13,757
2,022
"Everything you need to know about zero-trust architecture  | VentureBeat"
"https://venturebeat.com/2022/06/13/zero-trust-architecture"
"Everything you need to know about zero-trust architecture As more employees get used to hybrid working environments following the COVID-19 pandemic, enterprises have turned to zero-trust architecture to keep unauthorized users out. In fact, research shows that 80% of organizations have plans to embrace a zero-trust security strategy in 2022. However, the term zero trust has been used so much by product vendors to describe security solutions that it’s become a bit of a buzzword with an ambiguous definition. “Zero trust isn’t simply a product or service — it’s a mindset that, in its simplest form, is not about trusting any devices — or users — by default, even if they’re inside the corporate network,” said Sonya Duffin, analyst at Veritas Technologies. 
Duffin explained that much of the confusion around the definition comes as a result of vendors “productizing the term,” which makes companies “think their data is safe because they have implemented a ‘zero trust’ product, when, in fact, they are still extremely vulnerable.” Pinning down zero-trust as a concept The first use of the term zero trust traces back to 1994, when Stephen Paul Marsh used it in a doctoral thesis, but it only really started to pick up steam in 2010, when Forrester Research analyst John Kindervag challenged the concept of automatic trust within the perimeter network. Instead, Kindervag argued that enterprises shouldn’t automatically trust connections made by devices in the network, but should proactively verify all requests made from devices and users before granting them access to protected resources. The rationale behind this was to prevent malicious threat actors within the network from abusing automatic trust to gain access to sensitive information, by requiring additional verification steps. It’s worth noting that this concept evolved further in 2014, when Google released its own implementation of the zero-trust security model called BeyondCorp. It designed the BeyondCorp initiative to enable employees to work from untrusted networks without using a VPN, by using user- and device-based authentication to verify access. Today, the global zero trust security market remains in a state of continued growth, with researchers anticipating that the market will increase from a valuation of $19.6 billion in 2020 to reach a valuation of $51.6 billion by 2026. Why bother with zero-trust architecture? One of the main reasons that organizations should implement zero-trust architecture is to improve visibility over on-premise and hybrid cloud environments. 
Mature zero-trust organizations report they are nearly four times more likely to have comprehensive visibility of traffic across their environment, and five times more likely to have comprehensive visibility into traffic across all types of application architectures. This visibility is extremely valuable because it provides organizations with the transparency needed to identify and contain security incidents in the shortest time possible. The result is less prolonged downtime due to operational damage and fewer overall compliance liabilities. Zero-trust today: the ‘assume breach’ mindset Over the past few years, the concept of zero-trust architecture has also started to evolve as enterprises have shifted to an “assume breach” mindset, essentially expecting that a skilled criminal will find an entry point to the environment even with authentication measures in place. Under a traditional zero trust model, enterprises assume that every user or device is malicious until proven otherwise through an authentication process. Zero trust segmentation goes a step further by isolating workloads and devices so that if an individual successfully sidesteps this process, the impact of the breach is limited. “Zero Trust Segmentation (ZTS) is a modern security approach that stops the spread of breaches, ransomware and other attacks by isolating workloads and devices across the entire hybrid attack surface — from clouds to data centers to endpoints,” said Andrew Rubin, CEO and cofounder of Illumio. This means that organizations can easily understand which workloads and devices are communicating with each other and create policies that restrict communication to only what is necessary and wanted. Rubin notes that these policies can then be automatically enforced to isolate the environment if there’s a breach. 
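A default-deny allow-list between workloads is one minimal way to picture segmentation policies that restrict communication to only what is necessary and wanted. The segment names and flows in this Python sketch are hypothetical, invented purely for illustration, not taken from any vendor's product:

```python
# Hypothetical allow-list of permitted (source, destination) segment pairs.
# Default-deny: any flow not explicitly listed is blocked, so a compromised
# host cannot reach segments it was never allowed to talk to.
ALLOWED_FLOWS = {
    ("web", "app-a"),
    ("app-a", "db-x"),
}

def flow_permitted(src: str, dst: str) -> bool:
    # The development-to-production path is never allowed, even by mistake.
    if src == "dev" and dst == "prod":
        return False
    return (src, dst) in ALLOWED_FLOWS
```

If ransomware lands on the app-a segment it can still reach db-x, but lateral movement to db-y, web, or prod is blocked, which is the blast-radius limit segmentation is meant to provide.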
Implementing zero-trust segmentation Zero-trust segmentation builds on the concept of traditional network segmentation by creating micro perimeters within a network to isolate critical data assets. “With segmentation, workloads and endpoints that are explicitly allowed to communicate are grouped together in either a network segment or a logical grouping enforced by network or security controls,” said David Holmes, an analyst at Forrester. “At a high level, zero-trust segmentation isolates critical resources so that if a network is compromised, the attacker can’t gain access,” Holmes said. “For example, if an attacker manages to gain initial access to an organization’s network and deploys ransomware, zero-trust segmentation can stop the attack from spreading internally, reducing the amount of downtime and data loss while lowering the attacker’s leverage to collect a ransom.” Holmes explains that enterprises can start implementing segmentation with policies saying that the development network should never be able to access the production segment directly, or that application A can communicate with database X, but not Y. Segmentation policies will help ensure that if a host gets infected or compromised, the incident will remain contained within a small segment of the network. This is a key reason why organizations that have adopted zero trust segmentation as part of their zero-trust strategy save an average of $20.1 million in application downtime and deflect five cyber disasters per year. How to implement zero-trust architecture For organizations looking to implement a true zero-trust architecture, there are many frameworks to use, from Forrester’s ZTX ecosystem framework to NIST , and Google’s BeyondCorp. Regardless of what zero-trust implementation an enterprise deploys, there are two main options for implementation: manually or via automated solutions. Holmes recommends two sets of automated solutions for enterprises to implement zero trust. 
The first group of automated solutions relies on the underlying infrastructure, such as homogenous deployment of a single vendor’s network switches, like Cisco and Aruba. The second group relies on host software installed on each computer in the segmentation project; these solutions abstract segmentation away from network topology, with vendors including Illumio and Guardicore. Holmes notes, though, that implementing zero trust fully can be very difficult. For this reason, he urges enterprises to opt for an automated solution and to plan the zero-trust deployment meticulously, to the point of overplanning, to avoid any unforeseen disruption. Above all, the success or failure of zero-trust implementation depends on whether secure access is user-friendly for employees, or an obstacle to their productivity. "
13,758
2,010
"Bloom Energy: Is its 'power plant in a box' worth all the hype? | VentureBeat"
"https://venturebeat.com/2010/02/22/bloom-energy-is-its-power-plant-in-a-box-worth-all-the-hype"
"Bloom Energy: Is its ‘power plant in a box’ worth all the hype? Camille Ricketts Bloom Energy finally emerged from stealth mode, unveiling its “Bloom Box” fuel cell during a 60 Minutes segment with Lesley Stahl yesterday (click here for bonus videos). Capable of powering more than 100 homes while producing close to zero emissions, just one of these boxes could radically alter how people get their energy. But is it the godsend that some are saying it is? Wireless and neatly compartmentalized, the Bloom Box could one day be a fixture in your backyard or basement, transmitting clean energy to your home as needed, Bloom CEO K.R. Sridhar says. Right now, it’s available only on a large scale, with each box costing as much as $800,000. In the next five to 10 years, Bloom says it will release smaller boxes for individual households costing less than $3,000. 
If this happens, there is a chance that Bloom Boxes could supplant utilities and long-distance transmission lines — not to mention capital-intensive wind farms and solar arrays. Bloom investor John Doerr of Kleiner Perkins Caufield & Byers, who played a big role in last night’s 60 Minutes debut, says it is definitely Bloom Energy’s goal to disrupt, and even replace, the country’s electrical grid. This is a bold assertion, considering how much time, effort, and money is being sunk into the creation of the so-called Smart Grid. Incidentally, Bloom Energy was Kleiner Perkins’ first investment in the green sector, which has since become a huge area of focus for the firm. Former Secretary of State Colin Powell has also joined the board of directors. If this doesn’t inspire confidence in Bloom’s lofty claims, its roster of current customers probably will. Google was the first to install Bloom Boxes on its campus 18 months ago, followed soon after by eBay, FedEx, Wal-Mart, and 16 other big names. EBay CEO John Donahoe gave the Box a strong endorsement last night, reporting that the fuel cells his company installed nine months ago have already saved it $100,000 in energy costs — and are putting out five times more energy than its extensive rooftop solar system. Last night also marked the first glimpse anyone has gotten of Bloom’s actual technology. Each Bloom Box is filled with stacks of razor-thin discs made out of baked beach sand and coated with green and black proprietary inks (the composition of the inks remains secret). When the Box is fed a source of fuel, whether it be natural gas, biomass-produced gas, or even solar energy, each of these discs puts out enough electricity to power a light bulb. Together, they can light up whole city blocks. The design was adapted from a similar product that Sridhar worked on at NASA. 
As Greentech Media editor-in-chief Michael Kanellos pointed out during last night’s segment, previous attempts at similar fuel cells have been prohibitively expensive — especially when it comes to scaling the technology. But Bloom’s Sridhar says the company has dramatically reduced the costs associated with building fuel cells. Not only does it use a cheaper metal alloy between each of its discs instead of the typical platinum, but it has replaced the expensive pure hydrogen gas that used to be required with more plentiful gas-based fuels. The bigger problem might be that, even after raising upwards of $400 million, the company only has the capacity to build about one box per day. After letting Sridhar sing his Box’s praises at the beginning of the segment, 60 Minutes correspondent Lesley Stahl turned to potential problems and challenges. Notably, if the Bloom Box becomes available (and affordable) for average consumers, won’t threatened utilities start to push back? Sridhar and Doerr have foreseen this problem and reasonably argue that utilities could become major Bloom Box buyers themselves, selling the power the Boxes produce to their residential and commercial customers. After all, utilities already buy wind farms and nuclear reactors to do the same. Stahl also called attention to some of the technical difficulties existing Bloom customers have encountered. For instance, early on, one of the Bloom Boxes Google uses to power a data center abruptly shut down. Sridhar admits that not every Box has performed perfectly and acknowledges that several Boxes have had problems with air filter clogs. But he maintains that the technology is still being refined, and that the early adopters are playing an important role in providing feedback and making the product more commercially viable. Kanellos provided perhaps the most salient counterpoint: Companies like General Electric and Siemens have been working on their own fuel cell models for decades. 
If Bloom Energy succeeds as widely as Sridhar and Doerr say it will, what’s stopping these bigger players from investing their immense capital in developing their own branded solutions? Kanellos articulately framed this issue, agreeing that fuel cells may indeed become a staple in household basements in the next decade, but predicting that they’ll bear the GE logo, not Bloom’s. Even after shedding some mystery, Bloom still seems to hold an amazing amount of potential. It will be interesting to see which companies sign up to be in its second flock of big-name customers. At what point will it begin to approach major utilities as potential buyers? And what will happen to fledgling competitors like home fuel-cell maker ClearEdge Power? Will Bloom’s technology be adapted for automotive applications as well? Could it revolutionize the developing world with off-the-grid electricity? There are many more questions yet to be answered, but for now, it looks like Bloom deserves all the buzz. So that you can decide for yourself, the full 13-minute segment aired on last night’s episode of 60 Minutes with Lesley Stahl. "
13759
2021
"AI Weekly: How the power grid can benefit from intelligent software | VentureBeat"
"https://venturebeat.com/2021/04/30/ai-weekly-how-the-power-grid-can-benefit-from-intelligent-software"
"AI Weekly: How the power grid can benefit from intelligent software. Google parent Alphabet’s “moonshot” X lab announced last week at the White House Leaders Summit on Climate that it’s working on a project for the electric grid. Over the past three years, the lab says it has been investigating “new computational tools” designed to bring the grid “out of the industrial age and into the age of intelligence.” Among other areas, X says it’s experimenting with (1) a real-time virtualization that shows power moving onto and off the grid, (2) tools that simulate what might actually happen on the grid, and (3) a platform to make information about the grid useful to stakeholders. The work is being led by Audrey Zibelman, former managing director of the Australian Energy Market Operator, which runs Australia’s electricity and gas systems, and it remains in the planning stages. 
But experts believe the core of this effort — intelligent software — is likely to become increasingly important in the energy sector. “Hybrid plants and battery energy storage now mean power plants can be controlled and can simulate traditional power plants, and this will require sophisticated IT to integrate forecasting of reusable energy production, along with forecasting prices,” Ric O’Connell, executive director of clean energy consulting firm GridLab, told VentureBeat via email. The U.S. electrical grid has long been burdened by aging infrastructure. Sixty percent of distribution lines have surpassed their 50-year life expectancy, according to Black and Veatch, while the Brattle Group anticipates $1.5 trillion to $2 trillion in spending by 2030 to modernize the grid and maintain reliability. The latest report from the American Society for Civil Engineers found that current grid investment trends will lead to funding gaps of $42 billion for transmission and $94 billion for distribution by 2025. Neil Sahota, chief innovation officer at the University of California, Irvine, says intelligent software opens the door to the deployment of AI designed for power grid use cases. Utilities are already employing AI to address the windfalls and fluctuations in energy usage. Precise load forecasting ensures operations aren’t interrupted, thereby preventing blackouts and brownouts. And it can bolster the efficiency of utilities’ internal processes, leading to reduced prices and improved service. “There are a lot of subtle clues that in aggregate show where and when a natural disaster can occur. To ‘see’ the clues, we need to process a lot of data across a broad spectrum of variables and look for subtle differences,” Sahota told VentureBeat via email. “This is difficult for people to do effectively but is in the wheelhouse of AI. 
Consider wildfires, where we are using climate information (including wind forecasts), drone surveillance, and satellite images to predict hot spots and how a fire may start and spread. AI can monitor all these millions of data points in real time and constantly generate prediction models.” For example, startup Autogrid works with more than 50 utilities in 10 countries to deliver AI-informed power usage insights. Its platform makes 10 million predictions every 10 minutes and optimizes over 50 megawatts of power, which is enough to supply the average suburb. Flex, the company’s flagship product, predicts and controls tens of thousands of energy resources from millions of customers by ingesting, storing, and managing petabytes of data from trillions of endpoints. Using a combination of data science, machine learning, and network optimization algorithms, Flex models both physics and customer behavior, automatically anticipating and adjusting for supply and demand patterns. O’Connell believes that efforts like X’s will face challenges, particularly on the distributed energy resource (DER) side of the equation. DER systems — small-scale power generation or storage technologies that provide an alternative to traditional power systems or enhance those systems — can be difficult to orchestrate because they might span solar panels, electric vehicle charging setups, and even smart thermostats. But if a digital transformation of the power grid succeeds, its long-term benefits could be significant, O’Connell says. “Currently, when independent system operators want to add a new market participant type, it takes them a year to incorporate those changes. That’s legacy IT systems,” he said. “The IT systems that grid operators will need are going to have to get a serious upgrade from the ’90s technology that they use now.” For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine. 
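The load-forecasting idea described above can be illustrated with a minimal sketch. This is not Autogrid's or any utility's actual system; the seasonal-naive method, the 24-hour season, and all demand figures here are illustrative assumptions. The idea: tomorrow's 9 a.m. load usually looks like today's 9 a.m. load, plus whatever drift the recent data shows.

```python
import math

def forecast_next_hour(demand, season=24):
    """Seasonal-naive load forecast: estimate the next hour's load as the
    reading one season (24 h) ago, corrected by the average error that the
    same rule made over the most recent season."""
    if len(demand) < 2 * season:
        raise ValueError("need at least two full seasons of history")
    # Average error of the naive rule over the last `season` hours.
    recent = range(len(demand) - season, len(demand))
    bias = sum(demand[i] - demand[i - season] for i in recent) / season
    return demand[-season] + bias

# Synthetic hourly demand in megawatts: base load plus a daily sine cycle.
load = [50 + 10 * math.sin(2 * math.pi * h / 24) for h in range(24 * 14)]
prediction = forecast_next_hour(load)  # the cycle repeats exactly, so ~50 MW
```

Real systems layer weather, price, and behavioral features onto far richer models, but even this toy version captures why history plus a drift correction beats a flat average for cyclic loads.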
Thanks for reading, Kyle Wiggers, AI Staff Writer "
13760
2021
"Energy companies can lean on digital to realize new opportunities | VentureBeat"
"https://venturebeat.com/2021/08/27/energy-companies-can-lean-on-digital-to-realize-new-opportunities"
"Energy companies can lean on digital to realize new opportunities. Photo: power pylons at sunset. This post was written by Ashiss Kumar Dash, global head of services, utilities, resources, and energy industries, Infosys. Energy and utilities companies have made a somewhat delayed entry into the digital world relative to peers in other industries like retail or even banking. But now the dual trend of energy transition and decentralization is creating a new paradigm in energy management, generation, and consumption that is overwhelmingly digital. As part of the energy transition, the electric utility industry is making massive investments toward upgrading the grid and making it smarter to achieve three ends: to support a variety of fluctuating renewable sources, democratize generation, and manage bidirectional flow of energy. 
Energy companies are also having to switch to a decentralized system, producing energy closer to where it is consumed instead of at a central location. And this must be secure, predictable, and reliable. These challenges — whether decentralization, building microgrids, or predicting demand-supply dynamics — can be massively impacted by digital solutions. After decades of being in a commoditized business, utilities have a chance to differentiate themselves, not through their energy or electricity supply, but through digital expertise, which influences everything from how much customers pay for energy to how they experience these services. Data is integral to this opportunity; for instance, utility companies need to know who is producing decentralized energy and how much of it is available for distribution, and then match that to surges and dips in demand. Data trends can help point to the answers. But how can energy companies, which are not fully digitally transformed, be expected to compete on technology? To help them in this journey, digital services companies are taking on the role of digital energy orchestrators, providing AI and analytics-based solutions to stabilize grids, predict the supply of energy from fluctuating sources, and improve overall customer experience. Digital platforms are also accelerating innovation that can fundamentally transform how the industry functions. To the credit of the regulators in most countries, the need to make progressive efforts is being emphasized more than ramrodding through a slew of impractical regulations. Digital technology is suited for precisely this agile build-up of progress. News that made the rounds earlier this year was of the global energy company bp exploring avenues to deliver Energy as a Service. 
They brought together their strengths in energy and mobility and amplified them with digital technologies to create an AI-enabled EaaS offering to manage energy assets and provide low-carbon energy (electricity and heating/cooling) as well as low-carbon mobility to large campuses. If their pilot serves them well, plans are in place to offer the service to industrial and business parks, and then to towns and cities, opening up a new source of differentiation and revenue for the energy company. And, more importantly, leading the way in sustainable energy. Another interesting turn of events is that of a North American energy behemoth onboarding a new grid-scale wind or solar energy plant every month, month on month. It is relying on a digital technology partnership to help with onboarding data systems and also to run the analytics for these implementations. The company is exploring ways to take a set of very comprehensive offerings for electric vehicles and also to scale solar energy storage with digital advances. As recently as earlier this month, Southern California Edison announced the country’s largest electric vehicle charging infrastructure project by any utility company, to install nearly 40,000 charging points over the next five years. The company has capitalized on a new opportunity — it will install and maintain the supporting EV charging infrastructure — to differentiate itself. This will also give it access to new customer data from the charging stations, such as the type of car drivers own, how much they travel, and when they may need recharging. Digital technologies are at the heart of this initiative. In fact, we launched our Energy Innovation Centers in Houston and London to help more enterprises explore and navigate opportunities in this realm. Clearly, the path to zero is digitally paved. 
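The matching of decentralized supply to demand that the article describes can be reduced to a toy merit-order dispatch: fill demand from the cheapest available producers first. This is a sketch under stated assumptions, not any utility's real market-clearing algorithm; the producer names, capacities, and prices are invented for illustration.

```python
def dispatch(producers, demand_mw):
    """Greedy merit-order dispatch: satisfy demand from the cheapest
    decentralized producers first. `producers` maps a producer name to
    (available_mw, price_per_mwh); returns {name: dispatched_mw}."""
    plan = {}
    remaining = demand_mw
    # Sort producers by offer price, ascending (the "merit order").
    for name, (available, _price) in sorted(producers.items(),
                                            key=lambda kv: kv[1][1]):
        take = min(available, remaining)
        if take > 0:
            plan[name] = take
            remaining -= take
    if remaining > 0:
        raise RuntimeError(f"{remaining} MW of demand unmet")
    return plan

# Illustrative offers: two decentralized producers undercut a gas peaker.
offers = {"rooftop_solar": (5, 20), "wind_coop": (12, 25), "gas_peaker": (30, 80)}
plan = dispatch(offers, 15)  # {'rooftop_solar': 5, 'wind_coop': 10}
```

Real dispatch must also respect transmission constraints and ramp rates, which is exactly where the data and forecasting work discussed above comes in.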
The energy sector is being reshaped by two major trends — the transition to non-fossil-fuel resources and the decentralization of production and distribution. Digital technologies, such as AI/ML and analytics, are playing a big role in enabling the industry to leverage these shifts. There are new opportunities to be had as a result of these trends — both well-defined and yet to be imagined — for revenue generation and differentiation, and the game is technology-driven. It is impractical for energy companies to go it alone, but there are technology providers with robust platforms and solutions that can help them run their operations much more efficiently, as well as explore deeper for value. Focusing on creating and distributing greener energy while leaning on a partner for the tech makes more sense today than ever. Ashiss Kumar Dash is the global head of services, utilities, resources, and energy industries at Infosys. "
13761
2021
"Smart microgrids are replacing legacy electrical systems | VentureBeat"
"https://venturebeat.com/2021/10/19/smart-microgrids-are-replacing-legacy-electrical-systems"
"Smart microgrids are replacing legacy electrical systems. The technology world tends to take electricity for granted, a commodity that simply flows from the wall. But behind the sockets and wires, a quiet revolution has been transforming how the world generates and delivers electricity. The old model of a single large plant is being replaced by a complex network of smaller generators and consumers connected by smart microgrids powered by intelligent algorithms and free markets. And with increased reliability for enterprise datacenters on the mind of every executive whose bonus relies on uptime, smart microgrids are the key to that level of data resiliency. Schneider Electric is one of the oldest industrial giants in Europe. It began in 1836 as a metal forge and has shifted into the electricity business over the 185 years since. 
Now it is actively merging software intelligence with the grid to provide cleaner, safer, and more reliable power. To understand how this change is happening behind the wall sockets and circuit breakers of the server room, we sat down with Mark Feasel, president of smart grid at Schneider Electric, to get an understanding of how the company is using better software and cybersecurity expertise to redesign the electrical system for the future. VentureBeat: What does the term “smart microgrid” mean for your organization? Mark Feasel: I lead Smart Grid for Schneider Electric in North America, and what that means is that I am accountable for our electric utility business in North America. In addition, I lead a team that engages energy consumers who have been historically passive in how they react to the grid. Now our team engages in those discussions on how to interface with the grid in new ways and also set up business models to facilitate that process. VentureBeat: So when you say that they interact with the grid, does that mean that they may be pushing electricity into it? They may be buying and selling? Feasel: For most of the past 100 years, without access to data, most consumers had no real ability to understand what electricity meant to them or how it correlated with their business objectives. But as you begin to digitize energy and create data, you de-commoditize it. People can choose very different models. For some, that means distributed energy resources — like solar with combined heat and power. Others maybe want more storage, maybe want to implement load control. Sometimes they’re powering their own needs, and sometimes they’re interfacing with the grid to provide power back. 
Defining a smart microgrid VentureBeat: This completely changes the old model, where there was one big generating plant and thousands or maybe millions of customers. Feasel: Exactly. A smart microgrid is a local control system that is composed of generation and controllable load and the system that orchestrates it. Microgrids can operate in three different operational modes. One of them is a mode in which they’re capable of operating completely independently from the grid. Another mode is when they’re islandable, which means they run in parallel with the grid at all times but, upon loss of grid power, can continue serving the local load. And the third kind is something you might call “microgrid light,” or a virtual power plant. These are assets that operate in parallel with the grid but only operate when the grid itself is available. VentureBeat: A disconnected generator is pretty easy to understand. When they’re working in parallel, that must require quite a bit of coordination to make sure that just the right amount of power is flowing. Feasel: It’s a tricky situation because electricity is invisible. It’s dangerous. It travels at the speed of light. And so, it isn’t like you can sit back and monitor things for a while and detect a problem before the impact is felt. So it really is a story of “How do you build this secure environment? How do you ensure the signaling between the grid and microgrids such that authenticity can be ascribed? And how do you detect when something’s wrong and isolate it quickly?” So that’s a pretty advanced story. VentureBeat: This coordination must require quite a bit of care to make sure it’s done correctly. Feasel: One aspect of security is simply … when you are connected to the smart microgrid and you know a consumer is interfacing with the smart microgrid. What are the commands and signals that your local equipment is given, and are those commands and signals authentic? Are they encrypted or not? 
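The three operating modes Feasel lists (fully independent, islandable, and the grid-parallel-only "virtual power plant") can be sketched as a tiny decision table. This is an illustrative model of the taxonomy from the interview, not Schneider Electric code; the mode names and return strings are assumptions made for the example.

```python
from enum import Enum

class Mode(Enum):
    INDEPENDENT = "independent"      # never relies on the grid
    ISLANDABLE = "islandable"        # grid-parallel, rides through outages
    VIRTUAL_PLANT = "virtual_plant"  # grid-parallel only ("microgrid light")

def power_source(mode, grid_up):
    """Which source serves the local load in each of the three modes."""
    if mode is Mode.INDEPENDENT:
        return "local"
    if mode is Mode.ISLANDABLE:
        # Normally grid-parallel; islands onto local generation on outage.
        return "grid+local" if grid_up else "local"
    # A virtual power plant only operates while the grid is available.
    return "grid+local" if grid_up else "offline"
```

The interesting distinction is the last branch: during an outage, an islandable microgrid keeps the load alive, while a virtual power plant goes dark with the grid.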
What about the energy that you have on your microgrid and when and how you dispatch it back to the grid? Are you doing so in a controlled way, or perhaps, has — through some mechanism — someone been able to compromise your control systems? That is, providing power back to the grid. That is what someone else wants using your utility. So it’s not just a story of a power outage; it’s definitely the story of the connected assets and how they’re interfacing with the grid-scale assets. VentureBeat: So how do you go about changing this industry that’s already more than 100 years old? Feasel: First of all, it’s a quickly moving and emerging environment. But one of the things that we really focus on are systems based on openness. So open communication, open protocols. We’re in a world where much of what operates a power grid is filled with equipment that has lineages that go back decades and decades. Most of these controllers of a generator or a protective relay are not like your iPhone. They don’t get changed out every year. You can’t just adopt the newest and most advanced standard. Instead, you have to deal with the legacy of what’s built, and those legacies are built upon proprietary protocols. Smart microgrids need openness VentureBeat: What kind of computing systems do you need to accomplish this? Feasel: It’s very typical to go into a utility substation and have to install devices called RTUs (remote terminal units) that can speak about 20 to 50 of these old languages because of the ability to go in and change out all this equipment and masses is way too expensive. And so when I think about how we approach this from a Schneider Electric point of view, we think about a story of openness, with open protocols and peer-reviewed approaches. VentureBeat: What comes after openness? What do you want them to be? Feasel: There are four key things here. 
Number one: When we think about resilience, one of the things we talk about is proximity — that we’re generating energy as close as possible to where it’s being used. That’s something you can secure physically better at that point, instead of having miles-long areas in which a physical intrusion could occur. The second thing is the idea of redundancy. It’s pretty obvious: If an asset is compromised or a group of assets is compromised, the services can be provided by other assets. The third kind of premise is diversity. This is not the story of just having different types of generators, but it’s actually having different fuel supply chains. If you simply have, say, liquid fuels sent to your generators, then that could be disrupted. The fourth key thing is digitization, which brings better situational awareness. It’s very important that [systems] are digitized more fully now. When you’re creating more digital points, you might say, “Well, Mark, doesn’t that make it less secure?” But the truth is digital data gives you the ability to understand. The truth is extremely important when you have a commodity like electricity that’s moving so fast. VentureBeat: I’ve noticed that you and some of the others in the grid community are pushing the U.S. Congress to enact some legislation to support this. Can you tell me how you imagine this could work? Feasel: It’s really important to read [the proposed bill]. If you look at most of the stimuli or incentives, they’re aimed at either efficiency or sustainability, which is great. We’re huge supporters of that. But resilience is not something that is really encouraged or incentivized financially today. What the Microgrid Act does is really allow resilience to be the centerpiece of the solution. Right now, when most distributed energy solutions are put in, the incentives come from putting in as much solar as you can. That’s great. And we all get some benefits from that. But the hardening process is done with the spare change. 
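Two of Feasel's four principles, redundancy and fuel diversity, boil down to a feasibility question: if one asset is compromised and one fuel supply chain is disrupted, can the rest still carry the critical load? A minimal sketch, with invented asset names, capacities, and fuel types:

```python
def covers_critical_load(assets, critical_load_mw, compromised=(), disrupted_fuel=None):
    """Redundancy/diversity check: sum the capacity of assets that survive
    both a compromise and a fuel-supply disruption, and compare it to the
    critical load. `assets` maps an asset name to (capacity_mw, fuel_type)."""
    surviving = sum(
        capacity for name, (capacity, fuel) in assets.items()
        if name not in compromised and fuel != disrupted_fuel
    )
    return surviving >= critical_load_mw

# Illustrative site: three assets on three different fuel supply chains.
site = {"diesel_gen": (10, "diesel"), "battery": (4, "stored"), "solar": (6, "solar")}
```

With this site, losing the diesel generator (or its entire fuel chain) still leaves 10 MW for an 8 MW critical load, which is the point of diversifying supply chains rather than just duplicating one type of generator.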
The Microgrid Act will put resilience front and center, and that will mean that more of these security solutions are put in. It will prioritize that instead of resilience being just an afterthought. Resilience important in new microgrids VentureBeat: Does everyone need to worry about resilience? Or is it something that we can dial up or down as required? Feasel: Absolutely. It doesn’t necessarily make sense for everyone to pay for that high degree of resilience, because a datacenter or hospital needs much more resilience than maybe a home. The idea of a microgrid is that you can really dial that in. If resilience is uber important to you in your process, then you can put a solution in that solves for that. If it’s only moderately important, maybe pick up better sustainability options. Or take better cost savings. For me, it’s more about delivering a solution that is aligned [with] the business objectives of a facility or an entity. VentureBeat: All of this flexibility can help the power users, right? Didn’t Microsoft put a couple of datacenters right next to the Columbia River Gorge? Feasel: Data doesn’t quite travel at the speed of light because of latency and switches and things, but it’s still pretty fast. So you can move it. We’re massively in the datacenter business. It’s our biggest single segment here in North America. What’s interesting now is the decentralization of data. Everything’s moving to the edge. You start having a real need for local computing. This idea of a dramatically smaller datacenter at the edge is really proliferating. As you know, users do care about efficiency, they do care about sustainability, they care about resilience. And so, we’re often seeing microgrids as real viable ways to solve those problems. You can deliver that more resilient, more sustainable power than the grid is going to give. VentureBeat: I know one guy who runs the computing infrastructure for a college near the Canadian border. 
He wants to put server farms in the basements of the buildings and heat them with the waste heat while selling the compute power at the same time. Feasel: Absolutely. I’ll give you another example of a project that we worked on in the Netherlands. Wind farms take a lot of land, and then at the base go greenhouses that take electricity to run. It’s a nice little ecosystem. They can also put the edge compute together with the wind farm together with the greenhouse. They use the output of electricity from the windmill and the output of heat from the computers to grow plants. VentureBeat: It’s a great example that ties this all together. A small microgrid setup that’s paired with two industrial needs: computing and food. This shows how they can be put together in imaginative ways. "
13,762
2,022
"Russian hackers exploited MFA and 'PrintNightmare' vulnerability in NGO breach, U.S. says | VentureBeat"
"https://venturebeat.com/2022/03/15/russian-hackers-exploited-mfa-and-printnightmare-vulnerability-in-ngo-breach-u-s-says"
"Russian hackers exploited MFA and ‘PrintNightmare’ vulnerability in NGO breach, U.S. says The FBI and CISA released a warning today highlighting that state-sponsored threat actors in Russia were able to breach a non-governmental organization (NGO) using exploits of multifactor authentication (MFA) defaults and the critical vulnerability known as “PrintNightmare.” The cyberattack “is a good example of why user account hygiene is so important, and why security patches need to go in as soon as is practical,” said Mike Parkin, senior technical engineer at cyber risk remediation firm Vulcan Cyber, in an email to VentureBeat. “This breach relied on both a vulnerable account that should have been disabled entirely, and an exploitable vulnerability in the target environment,” Parkin said. 
Security nightmare “PrintNightmare” is a remote code execution vulnerability that has affected Microsoft’s Windows print spooler service. It was publicly disclosed last summer, and prompted a series of patches by Microsoft. According to today’s joint advisory from the FBI and CISA (the federal Cybersecurity and Infrastructure Security Agency), Russia-backed threat actors have been observed exploiting default MFA protocols with the “PrintNightmare” vulnerability. The threat actors were able to gain access to an NGO’s cloud and email accounts, move laterally in the organization’s network and exfiltrate documents, according to the FBI and CISA. The advisory says the cyberattack targeting the NGO began as far back as May 2021. The location of the NGO and the full timespan over which the attack occurred were not specified. CISA referred questions to the FBI, which did not immediately respond to a request for those details. The warning comes as Russia continues its unprovoked assault on Ukraine, including with frequent cyberattacks. CISA has previously warned of the potential for cyberattacks originating in Russia to impact targets in the U.S. in connection with the war in Ukraine. On CISA’s separate “Shields Up” page, the agency continues to hold that “there are no specific or credible cyber threats to the U.S. homeland at this time” in connection with Russia’s actions in Ukraine. Weak password, MFA defaults In the cyberattack against an NGO disclosed today by the FBI and CISA, the Russian threat actor used brute-force password guessing to compromise the account’s credentials. The password was simple and predictable, according to the advisory. The account at the NGO had also been misconfigured, with default MFA protocols left in place, the FBI and CISA advisory says. 
This enabled the attacker to enroll a new device into Cisco’s Duo MFA solution — thus providing access to the NGO’s network, according to the advisory. While requiring multiple forms of authentication at log-in is widely seen as an effective cybersecurity measure, in this case, the misconfiguration actually allowed MFA to be used as a key part of the attack. “The victim account had been unenrolled from Duo due to a long period of inactivity but was not disabled in the Active Directory,” the FBI and CISA said. “As Duo’s default configuration settings allow for the re-enrollment of a new device for dormant accounts, the actors were able to enroll a new device for this account, complete the authentication requirements and obtain access to the victim network.” The Russia-backed attacker then exploited “PrintNightmare” to escalate their privileges to administrator; modified a domain controller file, disabling MFA; authenticated to the organization’s VPN; and made Remote Desktop Protocol (RDP) connections to Windows domain controllers. “Using these compromised accounts without MFA enforced, Russian state-sponsored cyber actors were able to move laterally to the victim’s cloud storage and email accounts and access desired content,” the FBI and CISA advisory says. The FBI-CISA advisory includes a number of recommended best practices and indicators of compromise for security teams to utilize. 
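The root cause described above, an account unenrolled from Duo but never disabled in Active Directory, is exactly the kind of gap a routine audit can catch: any account that is still enabled despite long inactivity is re-enrollable under permissive MFA defaults. A minimal sketch of such a check (the account records and field names here are hypothetical, not a real directory API):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Account:
    name: str
    enabled: bool
    last_logon: datetime
    mfa_enrolled: bool

def dormant_but_enabled(accounts, now, max_idle_days=90):
    """Flag accounts that remain enabled (and thus re-enrollable in an
    MFA system with permissive defaults) despite long inactivity."""
    cutoff = now - timedelta(days=max_idle_days)
    return [a.name for a in accounts if a.enabled and a.last_logon < cutoff]

now = datetime(2022, 3, 15)
accounts = [
    Account("svc-backup", True, datetime(2021, 5, 1), False),   # dormant but enabled: risky
    Account("j.smith", True, datetime(2022, 3, 14), True),      # recently active: fine
    Account("old-intern", False, datetime(2020, 1, 1), False),  # dormant and disabled: fine
]
print(dormant_but_enabled(accounts, now))  # ['svc-backup']
```

In a real environment the same logic would run against directory exports on a schedule, with flagged accounts disabled rather than merely reported.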
In a blog post, Cisco noted that “this scenario did not leverage or reveal a vulnerability in Duo software or infrastructure, but made use of a combination of configurations in both Duo and Windows that can be mitigated in policy.” Growing threat Ultimately, the FBI-CISA advisory recommends that “organizations remain cognizant of the threat of state-sponsored cyber actors exploiting default MFA protocols and exfiltrating sensitive information.” In recent years, Russian threat actors have shown that they’ve developed “significant capabilities to bypass MFA when it is poorly implemented, or operated in a way that allows attackers to compromise material pieces of cloud identity supply chains,” said Aaron Turner, a vice president at AI-driven cybersecurity firm Vectra. “This latest advisory shows that organizations who implemented MFA as a ‘check the box’ compliance solution are seeing the MFA vulnerability exploitation at scale,” Turner said in an email. Going forward, you can “expect to see more of this type of attack vector,” said Bud Broomhead, CEO at IoT security vendor Viakoo. “Kudos to CISA and FBI for keeping organizations informed and focused on what the most urgent cyber priorities are for organizations,” Broomhead said in an email. “All security teams are stretched thin, making the focus they provide extremely valuable.” In light of this cyberattack by Russian threat actors, CISA director Jen Easterly today reiterated the call to businesses and government agencies to put “shields up” in the U.S. This effort should include “enforcing MFA for all users without exception, patching known exploited vulnerabilities and ensuring MFA is implemented securely,” Easterly said in a news release. "
13,763
2,022
"How the data infrastructure stack will change this year | VentureBeat"
"https://venturebeat.com/2022/01/29/how-the-data-infrastructure-stack-will-change-this-year"
"Guest How the data infrastructure stack will change this year In 2022 the digital advertising industry will be faced with further deprecation of third-party identifiers. Brands, agencies, publishers and technology companies will start to scramble to implement infrastructure to support connectivity of consented data. As the need for data connectivity starts to accelerate, a new technology stack will start to emerge. This “data connectivity infrastructure” stack will become a key investment category and we can expect to see major consolidation (read: M&A and key partnerships) in 2022 as a result. 
Components of the data connectivity infrastructure stack There are currently three major types of infrastructure for data connectivity: CDPs and DMPs — the software to manage audience data Data clean rooms and data sanctuaries — the software to safely port the data Identity technology — tools that marry and enrich piecemeal people data across silos and partners so that the data used is complete and correct. In 2021, we first started to see these solutions being packaged together, which makes sense because they are all part of the same “data connectivity” value chain. We’re also already seeing a consolidation, or at least a partnership trend. LiveRamp already offers all three solutions: identity, Safe Haven (clean room) and Data Marketplace (DMP/CDP). It’s not a perfect trifecta by any means, but the three-in-one packaging makes it highly competitive with the point solutions and is bound to trigger consolidation from the other infrastructure competitors. Meanwhile, clean room InfoSum has launched InfoSum Bridge to link with identity providers. In 2022 the larger players like Adobe, Oracle, and Salesforce will start to stack the deck in their favor by eating up companies to support their ability to power end-to-end data connectivity for first-party data. Eventually, these partnerships will lead to M&A. While the timeline is contingent on when Google and Apple finally pull the plug on third-party identifiers, accelerating the need for first-party data, the change is happening already and is bound to come to a head in 2022. M&A may dictate changes to the stack For marketers, this means that the point solution data connectivity products they’re using today (currently offered by various providers) are likely to become features of a consolidated product tomorrow. Marketers will need to make a decision of whether to stick with many partners or consolidate. 
Some brands will continue to work with multiple point solutions because they want to create their own customized stack, and consolidation will take some of that flexibility away. Other brands may ultimately want a full suite, but will want to have more say in which suite they adopt and when. There certainly are pros to accepting whatever consolidated offering emerges. A larger suite solution may offer better customer data sharing — for example an identifier like a loyalty ID that can be used across the components of the solution. It may offer higher data sharing frequency, as a full suite is ultimately going to be the favored system to build out real-time updates vs. batch updates. This can be a major game changer for many brands who need real-time data for dynamic customer experiences, for example. And the larger solution is likely going to offer a higher level of service and handholding just based on the complexity of their offering and the likely larger size of their typical customer. The cons to accepting a full suite offering are mostly for brands with their own complex data needs that require a high level of flexibility. Some big tech companies build for their average customer, providing off-the-shelf options with little customization available. Other companies like Adobe and Salesforce have great resources in place for custom builds and services, but those builds would of course still come out of their own suites. Brands need to determine what level of customization, independence, and flexibility they need and proceed with the understanding that in the future they may need to find a new partner if M&A changes things. Customers building their own stacks and using pieces of the consolidated offerings as point solutions will probably get the “standard package.” This equates to standard data integration with other point solutions, batch updates, and a lot less hand-holding. At that point, data interoperability becomes a problem — an issue we need to solve as an industry. 
For example, if a brand is using LiveRamp for identity and Salesforce’s CDP, everything plays well together today. But if Salesforce gains its own identity layer, will it have the same incentive to play well with LiveRamp? It will likely build out better functionality for its own suite first. When an independent player is bought by a larger tech company, it often adopts the product and service approach of that larger company. This might mean that customization becomes impossible, or that there is little support for a specific vertical, or little experience with a certain element of the business such as loyalty data or non-programmatic marketing needs. Brands need to know what matters most While the ultimate endgame is impossible to forecast, it’s clear that major change is on the horizon. Brands of all sizes and levels of data maturity need to do some scenario planning and create a prioritized list of what matters most to them when it comes to identity. These “must haves” will make it easier to make a decision in case one of their partners does get acquired, or to integrate deeply with a specific partner. The CIO and CMO should be aligned on their priorities, of course, but they also should work together to discuss the implications of different moves — for example, estimating the cost of switching a major component of their product, or estimating the reduction in ROI if dynamic advertising is no longer possible. All of these different what-if scenarios can help the entire team prepare to make smart decisions more quickly, before they are too entrenched with an approach that may or may not work in the future. Nancy Marzouk is CEO and Founder of MediaWallah. "
13,764
2,021
"MLOps vs. DevOps: Why data makes it different | VentureBeat"
"https://venturebeat.com/business/mlops-vs-devops-why-data-makes-it-different"
"Guest MLOps vs. DevOps: Why data makes it different Much has been written about the struggles of deploying machine learning projects to production. As with many burgeoning fields and disciplines, we don’t yet have a shared canonical infrastructure stack or best practices for developing and deploying data-intensive applications. This is both frustrating for companies that would prefer making ML an ordinary, fuss-free value-generating function like software engineering, as well as exciting for vendors who see the opportunity to create buzz around a new category of enterprise software. The new category is often called MLOps. 
While there isn’t an authoritative definition for the term, it shares its ethos with its predecessor, the DevOps movement in software engineering: By adopting well-defined processes, modern tooling, and automated workflows, we can streamline the process of moving from development to robust production deployments. This approach has worked well for software development, so it is reasonable to assume that it could address struggles related to deploying machine learning in production too. However, the concept is quite abstract. Just introducing a new term like MLOps doesn’t solve anything by itself; rather, it adds to the confusion. In this article, we want to dig deeper into the fundamentals of machine learning as an engineering discipline and outline answers to key questions: Why does ML need special treatment in the first place? Can’t we just fold it into existing DevOps best practices? What does a modern technology stack for streamlined ML processes look like? How can you start applying the stack in practice today? Why: Data makes it different All ML projects are software projects. If you peek under the hood of an ML-powered application, these days you will often find a repository of Python code. If you ask an engineer to show how they operate the application in production, they will likely show containers and operational dashboards — not unlike any other software service. Since software engineers manage to build ordinary software without experiencing as much pain as their counterparts in the ML department, it raises the question: Should we just start treating ML projects as software engineering projects as usual, maybe educating ML practitioners about the existing best practices? 
Let’s start by considering the job of a non-ML software engineer: writing traditional software deals with well-defined, narrowly-scoped inputs, which the engineer can exhaustively and cleanly model in the code. In effect, the engineer designs and builds the world wherein the software operates. In contrast, a defining feature of ML-powered applications is that they are directly exposed to a large amount of messy, real-world data that is too complex to be understood and modeled by hand. This characteristic makes ML applications fundamentally different from traditional software. It has far-reaching implications as to how such applications should be developed and by whom: ML applications are directly exposed to the constantly changing real world through data, whereas traditional software operates in a simplified, static, abstract world that is directly constructed by the developer. ML apps need to be developed through cycles of experimentation. Due to the constant exposure to data, we don’t learn the behavior of ML apps through logical reasoning but through empirical observation. The skillset and the background of people building the applications gets realigned. While it is still effective to express applications in code, the emphasis shifts to data and experimentation — more akin to empirical science — rather than traditional software engineering. This approach is not novel. There is a decades-long tradition of data-centric programming. Developers who have been using data-centric IDEs, such as RStudio, Matlab, Jupyter Notebooks, or even Excel to model complex real-world phenomena, should find this paradigm familiar. However, these tools have been rather insular environments; they are great for prototyping but lacking when it comes to production use. To make ML applications production-ready from the beginning, developers must adhere to the same set of standards as all other production-grade software. 
This introduces further requirements: The scale of operations is often two orders of magnitude larger than in the earlier data-centric environments. Not only is data larger, but models — deep learning models in particular — are much larger than before. Modern ML applications need to be carefully orchestrated. With the dramatic increase in the complexity of apps, which can require dozens of interconnected steps, developers need better software paradigms, such as first-class DAGs. We need robust versioning for data, models, code, and preferably even the internal state of applications — think Git on steroids to answer inevitable questions: What changed? Why did something break? Who did what and when? How do two iterations compare? The applications must be integrated with the surrounding business systems so ideas can be tested and validated in the real world in a controlled manner. Two important trends collide in these lists. On the one hand we have the long tradition of data-centric programming; on the other hand, we face the needs of modern, large-scale business applications. Either paradigm is insufficient by itself — it would be ill-advised to suggest building a modern ML application in Excel. Similarly, it would be pointless to pretend that a data-intensive application resembles a run-of-the-mill microservice that can be built with the usual software toolchain consisting of, say, GitHub, Docker, and Kubernetes. We need a new path that allows the results of data-centric programming, models, and data science applications in general, to be deployed to modern production infrastructure, similar to how DevOps practices allow traditional software artifacts to be deployed to production continuously and reliably. Crucially, the new path is analogous but not equal to the existing DevOps path. What: The modern stack of ML infrastructure What kind of foundation would the modern ML application require? 
It should combine the best parts of modern production infrastructure to ensure robust deployments, as well as draw inspiration from data-centric programming to maximize productivity. While implementation details vary, the major infrastructural layers we’ve seen emerge are relatively uniform across a large number of projects. Let’s now take a tour of the various layers, to begin to map the territory. Along the way, we’ll provide illustrative examples. The intention behind the examples is not to be comprehensive (perhaps a fool’s errand, anyway!), but to reference concrete tooling used today in order to ground what could otherwise be a somewhat abstract exercise. Foundational infrastructure layers Data Data is at the core of any ML project, so data infrastructure is a foundational concern. ML use cases rarely dictate the master data management solution, so the ML stack needs to integrate with existing data warehouses. Cloud-based data warehouses, such as Snowflake , AWS’s portfolio of databases like RDS, Redshift, or Aurora , or an S3-based data lake , are a great match to ML use cases since they tend to be much more scalable than traditional databases, both in terms of the data set sizes and in terms of query patterns. Compute To make data useful, we must be able to conduct large-scale compute easily. Since the needs of data-intensive applications are diverse, it is useful to have a general-purpose compute layer that can handle different types of tasks, from IO-heavy data processing to training large models on GPUs. Besides variety, the number of tasks can be high too. Imagine a single workflow that trains a separate model for 200 countries in the world, running a hyperparameter search over 100 parameters for each model — the workflow yields 20,000 parallel tasks. Prior to the cloud, setting up and operating a cluster that can handle workloads like this would have been a major technical challenge. 
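The shape of that workload, many small independent training tasks fanned out in parallel, can be sketched locally with Python’s standard library (the training step here is a toy stand-in, not a real cluster job):

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def train(task):
    # Stand-in for real model training of one (country, hyperparameter) pair.
    country, lr = task
    return (country, lr)

# 3 countries x 2 hyperparameter settings -> 6 independent tasks; the same
# fan-out pattern scales to 200 x 100 = 20,000 tasks on a cluster backend.
tasks = list(product(["US", "FI", "JP"], [0.01, 0.1]))

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(train, tasks))

print(len(results))  # 6
```

On a real compute layer the executor would be swapped for a batch service or container cluster, but the program structure, a flat list of independent tasks mapped over workers, stays the same.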
Today, a number of cloud-based, auto-scaling systems are easily available, such as AWS Batch. Kubernetes, a popular choice for general-purpose container orchestration, can be configured to work as a scalable batch compute layer, although the downside of its flexibility is increased complexity. Note that container orchestration for the compute layer is not to be confused with the workflow orchestration layer, which we will cover next. Orchestration The nature of computation is structured: We must be able to manage the complexity of applications by structuring them, for example, as a graph or a workflow that is orchestrated. The workflow orchestrator needs to perform a seemingly simple task: Given a workflow or DAG definition, execute the tasks defined by the graph in order using the compute layer. There are countless systems that can perform this task for small DAGs on a single server. However, as the workflow orchestrator plays a key role in ensuring that production workflows execute reliably, it makes sense to use a system that is both scalable and highly available, which leaves us with a few battle-hardened options — for instance Airflow , a popular open-source workflow orchestrator, Argo , a newer orchestrator that runs natively on Kubernetes, and managed solutions such as Google Cloud Composer and AWS Step Functions. Software development layers While these three foundational layers, data, compute, and orchestration, are technically all we need to execute ML applications at arbitrary scale, building and operating ML applications directly on top of these components would be like hacking software in assembly language — technically possible but inconvenient and unproductive. To make people productive, we need higher levels of abstraction. Enter the software development layers. Versioning ML app and software artifacts exist and evolve in a dynamic environment. 
To manage the dynamism, we can resort to taking snapshots that represent immutable points in time — of models, of data, of code, and of internal state. For this reason, we require a strong versioning layer. While Git , GitHub, and other similar tools for software version control work well for code and the usual workflows of software development, they are a bit clunky for tracking all experiments, models, and data. To plug this gap, frameworks like Metaflow or MLFlow provide a custom solution for versioning. Software architecture Next, we need to consider who builds these applications and how. They are often built by data scientists who are not software engineers or computer science majors by training. Arguably, high-level programming languages like Python are the most expressive and efficient ways that humankind has conceived to formally define complex processes. It is hard to imagine a better way to express non-trivial business logic and convert mathematical concepts into an executable form. However, not all Python code is equal. Python written in Jupyter notebooks following the tradition of data-centric programming is very different from Python used to implement a scalable web server. To make the data scientists maximally productive, we want to provide supporting software architecture in terms of APIs and libraries that allow them to focus on data, not on the machines. Data science layers With these five layers, we can present a highly productive, data-centric software interface that enables iterative development of large-scale data-intensive applications. However, none of these layers help with modeling and optimization. We cannot expect data scientists to write modeling frameworks like PyTorch or optimizers like Adam from scratch! Furthermore, there are steps that are needed to go from raw data to features required by models. 
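The snapshot-based versioning described above can be illustrated with content addressing: hash together the code, data, and parameters that produced a run, and the digest becomes an immutable version identifier. This is a simplified sketch of the idea, not the scheme any particular framework uses:

```python
import hashlib
import json

def version_id(code: str, data: bytes, params: dict) -> str:
    """Derive an immutable version identifier from everything that
    defines a run: code, input data, and hyperparameters."""
    h = hashlib.sha256()
    h.update(code.encode())
    h.update(data)
    h.update(json.dumps(params, sort_keys=True).encode())
    return h.hexdigest()[:12]

v1 = version_id("def train(): ...", b"rows-2021-12", {"lr": 0.1})
v2 = version_id("def train(): ...", b"rows-2021-12", {"lr": 0.01})
# Any change to code, data, or parameters yields a new version identifier.
print(v1 != v2)  # True
```

Because the identifier is derived from the inputs themselves, the inevitable questions (what changed, how do two iterations compare) reduce to comparing digests and the artifacts stored under them.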
Model operations When it comes to data science and modeling, we separate three concerns, starting from the most practical progressing towards the most theoretical. Assuming you have a model, how can you use it effectively? Perhaps you want to produce predictions in real-time or as a batch process. No matter what you do, you should monitor the quality of the results. Altogether, we can group these practical concerns in the model operations layer. There are many new tools in this space helping with various aspects of operations, including Seldon for model deployments, Weights and Biases for model monitoring, and TruEra for model explainability. Feature engineering Before you have a model, you have to decide how to feed it with labelled data. Managing the process of converting raw facts to features is a deep topic of its own, potentially involving feature encoders, feature stores, and so on. Producing labels is another, equally deep topic. You want to carefully manage consistency of data between training and predictions, as well as make sure that there’s no leakage of information when models are being trained and tested with historical data. We bucket these questions in the feature engineering layer. There’s an emerging space of ML-focused feature stores such as Tecton or labeling solutions like Scale and Snorkel. Feature stores aim to solve the challenge that many data scientists in an organization require similar data transformations and features for their work and labeling solutions deal with the very real challenges associated with hand labeling datasets. Model development Finally, at the very top of the stack we get to the question of mathematical modeling: What kind of modeling technique to use? What model architecture is most suitable for the task? How to parameterize the model? Fortunately, excellent off-the-shelf libraries like scikit-learn and PyTorch are available to help with model development. 
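The training/serving consistency concern raised above is concrete: an encoder fitted on training data must be reused verbatim at prediction time, or the same category will map to different feature values in the two settings. A minimal sketch of the fit-once, transform-everywhere pattern (a hypothetical helper, not any particular feature store’s API):

```python
class CategoryEncoder:
    """Fit once on training data, then reuse the same mapping at
    prediction time to avoid training/serving skew."""

    def fit(self, values):
        self.mapping = {v: i for i, v in enumerate(sorted(set(values)))}
        return self

    def transform(self, values):
        # Unseen categories fall back to a reserved "unknown" index.
        unknown = len(self.mapping)
        return [self.mapping.get(v, unknown) for v in values]

enc = CategoryEncoder().fit(["ios", "android", "web"])
train_features = enc.transform(["ios", "web"])
serve_features = enc.transform(["ios", "tv"])  # "tv" was never seen in training
print(train_features, serve_features)  # [1, 2] [1, 3]
```

A feature store generalizes exactly this: the fitted transformation is stored once and shared, so training pipelines and prediction services cannot silently drift apart.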
An overarching concern: Correctness and testing Regardless of the systems we use at each layer of the stack, we want to guarantee the correctness of results. In traditional software engineering we can do this by writing tests. For instance, a unit test can be used to check the behavior of a function with predetermined inputs. Since we know exactly how the function is implemented, we can convince ourselves through inductive reasoning that the function should work correctly, based on the correctness of a unit test. This process doesn’t work when the function, such as a model, is opaque to us. We must resort to black box testing — testing the behavior of the function with a wide range of inputs. Even worse, sophisticated ML applications can take a huge number of contextual data points into account as inputs, like the time of day, the user’s past behavior, or device type, so an accurate test setup may need to become a full-fledged simulator. Since building an accurate simulator is a highly non-trivial challenge in itself, often it is easier to use a slice of the real world as a simulator and A/B test the application in production against a known baseline. To make A/B testing possible, all layers of the stack should be able to run many versions of the application concurrently, so an arbitrary number of production-like deployments can be run simultaneously. This poses a challenge to many infrastructure tools of today, which have been designed with more rigid traditional software in mind. Besides infrastructure, effective A/B testing requires a control plane, a modern experimentation platform, such as StatSig. How: Wrapping the stack for maximum usability Imagine choosing a production-grade solution for each layer of the stack — for instance, Snowflake for data, Kubernetes for compute (container orchestration), and Argo for workflow orchestration. 
While each system does a good job in its own domain, it is not trivial to build a data-intensive application with cross-cutting concerns that touch all the foundational layers. In addition, you have to layer the higher-level concerns, from versioning to model development, on top of the already complex stack. It is not realistic to ask a data scientist to prototype quickly and deploy to production with confidence using such a contraption. Adding more YAML to cover cracks in the stack is not an adequate solution.

Many data-centric environments of the previous generation, such as Excel and RStudio, really shine at maximizing usability and developer productivity. Optimally, we could wrap the production-grade infrastructure stack inside a developer-oriented user interface. Such an interface should allow the data scientist to focus on the concerns that are most relevant to them, namely the topmost layers of the stack, while abstracting away the foundational layers. The combination of a production-grade core and a user-friendly shell ensures that ML applications can be prototyped rapidly, deployed to production, and brought back to the prototyping environment for continuous improvement. The iteration cycles should be measured in hours or days, not in months.

Over the past five years, a number of such frameworks have started to emerge, both as commercial offerings and in open source. Metaflow is an open-source framework, originally developed at Netflix, specifically designed to address this concern (disclosure: one of the authors works on Metaflow). Google’s open-source Kubeflow addresses similar concerns, although with a more engineer-oriented approach. As a commercial product, Databricks provides a managed environment that combines data-centric notebooks with a proprietary production infrastructure. All major cloud providers offer commercial solutions as well, such as AWS SageMaker or Azure ML Studio.
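The developer-facing shape of such frameworks can be sketched in a few lines. The following is a toy linear workflow runner, loosely in the spirit of these tools but not the actual API of Metaflow or Kubeflow; real frameworks add versioning, scheduling, data persistence and remote execution behind the same simple surface.

```python
class Flow:
    """Toy linear workflow: each step receives and returns a state dict."""

    def __init__(self):
        self._steps = []

    def step(self, fn):
        # Register the step in definition order and return it unchanged.
        self._steps.append(fn)
        return fn

    def run(self, state=None):
        state = dict(state or {})
        for fn in self._steps:
            state = fn(state)
        return state

flow = Flow()

@flow.step
def load(state):
    state["data"] = [1, 2, 3, 4]
    return state

@flow.step
def train(state):
    state["mean"] = sum(state["data"]) / len(state["data"])
    return state

result = flow.run()
```

The point of the wrapper is exactly this contrast: the data scientist writes plain decorated functions, while the framework decides where and how each step executes.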
It is safe to say that all existing solutions still have room for improvement. Yet it seems inevitable that over the next five years the whole stack will mature, and the user experience will converge toward, and eventually move beyond, the best data-centric IDEs. Businesses will learn how to create value with ML much as they did with traditional software engineering, and empirical, data-driven development will take its place among other ubiquitous software development paradigms.

Ville Tuulos is CEO and cofounder of Outerbounds. He has worked as an ML researcher in academia and as a leader at a number of companies, including Netflix, where he led the ML infrastructure team that created Metaflow, an open-source framework for data science infrastructure. He is also the author of an upcoming book, Effective Data Science Infrastructure.

Hugo Bowne-Anderson is a data scientist, educator, evangelist, content marketer, and data strategy consultant. He has worked at data science scaling company Coiled and has taught data science topics for data education platform DataCamp, at Yale University and Cold Spring Harbor Laboratory, at conferences such as SciPy, PyCon, and ODSC, and at Data Carpentry. He previously hosted and produced the weekly data industry podcast DataFramed.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,765
2,022
"What you need to know about managing the modern supply chain | VentureBeat"
"https://venturebeat.com/2022/05/25/what-you-need-to-know-about-managing-the-modern-supply-chain"
"Community What you need to know about managing the modern supply chain

You can’t plan for what you can’t predict. A huge container ship blocks the Suez Canal, disrupting global trade for several days. A global pandemic wreaks havoc on global supply chains, causing shortages and pushing prices sky-high. Now, Russia’s invasion of Ukraine has injected chaos into an already fragile supply chain ecosystem. The availability of everything from oil and natural gas to wheat is now in doubt as car manufacturers stop production in Russian factories. Automakers, in fact, are facing their third supply chain crisis in as many years. Old-fashioned supply chain planning involved charting the journey of a material or product from the raw material stage to the consumer.
It also encompassed supply planning, demand planning, production planning, operations, inventory optimization, routing, transportation, logistics, warehouses and more. But what happens when there’s a weak link — or a complete break — in the chain? Something as minor as a truck breaking down or as major as a global pandemic introduces uncertainty into our planning algorithms. These types of supply chain disruptions are inevitable. While we can’t control for all the variables or predict the unpredictable, we can be better prepared to respond.

Supply chain snags and the bullwhip effect

Let’s take a closer look at how the delicate balance of a supply chain’s complex system can be disrupted at one point, causing chaos further down the line. Today’s mass production environment favors just-in-time manufacturing — an environment that encourages receiving goods only as needed for production. This, ideally, reduces inventory costs and waste. Meanwhile, factories are tuned to work at full capacity. Considering how expensive it is to build and operate a factory (employees, robots, electricity), the last thing you want to do is stop the production line. In a stable world, with no surprise variables, you can calculate the inventory you need to keep the factory running smoothly. Factor in a few extra “safety stock” inventory units (just in case) and 99% of the time, all is well. What happens when there’s a supplier snag upstream? Or a consumer change of heart downstream? Exercise bike manufacturer Peloton faced this exact scenario recently. After an uptick in consumer bike purchases at the beginning of the pandemic, consumer demand began to cool over time. The result: thousands of cycles and treadmills sitting jam-packed in warehouses or on cargo ships.
Peloton temporarily halted production of its connected fitness products earlier this year, while the company laid off staff and overhauled its management team. Peloton fell victim to the “bullwhip effect,” a supply chain situation in which small fluctuations in demand at the retail level cause progressively larger fluctuations in demand at the wholesale, distributor and supplier levels. The phenomenon is named after the physics involved in cracking a whip: a slight flick of the wrist results in increasingly larger motions toward the end of the whip.

A small change upstream, a major change downstream

The bullwhip effect describes changing consumer demand patterns, but what happens when there are disruptions upstream? People tend to overreact when there is a small fluctuation at one end of the supply chain, perhaps triggered by a major event (such as Russia’s invasion of Ukraine). You may have 1,000 different parts feeding into your factory. What happens when a part you need is out of stock from the supplier? All the steps you thought you could carefully manage with a small inventory suddenly seize up. Consider the global chip shortage. In the early stages of the pandemic, early signs of changing demand patterns led to stockpiling and advance ordering of chips by some, which left other companies struggling to obtain needed components. Auto manufacturers cut their orders for semiconductor chips as they predicted demand for new cars would take a nosedive. Those chips were then snatched up by other industries for phones, computers and video games. Meanwhile, worldwide auto production was halted when a single missing part stalled production of the entire vehicle.

It’s not about planning, it’s about responding: Graphing “what-if” scenarios

Many claim that poor forecasting and ineffective planning result in these supply chain disruptions. The problem isn’t a failure to plan, it’s a failure to effectively respond.
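The bullwhip amplification described above can be sketched with a toy simulation in which each upstream tier over-orders in response to observed demand changes. The over-ordering rule and the `safety_factor` value are illustrative assumptions, not a calibrated supply chain model.

```python
def simulate_bullwhip(consumer_demand, tiers=3, safety_factor=0.5):
    """Toy bullwhip simulation.

    Each upstream tier sees the orders of the tier below it and reacts
    to the change: when orders rise, it orders extra 'just in case'
    stock; when orders fall, it cuts back hard. Repeating this over
    several tiers amplifies small retail fluctuations.
    """
    orders = list(consumer_demand)
    for _ in range(tiers):
        upstream = [orders[0]]
        for prev, cur in zip(orders, orders[1:]):
            delta = cur - prev
            # Over-react to the observed change, never ordering below zero.
            upstream.append(max(0, cur + delta * (1 + safety_factor)))
        orders = upstream
    return orders

retail = [100, 100, 110, 120, 110, 100]   # mild swing at the retail level
factory = simulate_bullwhip(retail)       # much wilder swing three tiers up
```

Even with a retail swing of only 20 units, the simulated factory-level orders whipsaw between zero and well above the retail peak, which is the crack of the whip the article describes.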
How can you forecast numbers for six months from now when you have no idea what will be happening six months from now? It’s not as if you can build forecasting engines for the next pandemic. Instead, you should set a baseline number of units for inventory and focus on looking for demand signals in the market — and responding to these signals in a sensible way. Adapting to change is key. All CEOs should ask themselves, “What is our ability to adapt to unforeseeable big changes?” The key to adapting to change is having data systems in place that show your options, as well as quantify the implications of any given option. And you need to do this as far upstream in the supply chain as possible: once a signal goes downstream, it gets more difficult to recalibrate throughout the supply chain (and the result may be overstocked items or an inventory shortage). Traditional supply chain software is linear, passive and limited by relational databases. Relational databases, which store customer, order and product data in separate tables, were designed for steady data retention rather than dynamic, data-intensive use cases. A graph database, however, can model disparate relationships and dependencies in a way that closely mirrors the real world. A graph that tracks every individual part from supplier through manufacturer to finished product can load massive amounts of data and uncover real-time relationship patterns. The graph provides a “what-if” engine, allowing companies to create a digital representation of a complex system (such as an automotive supply chain). The graph represents a “digital twin” of your real-life supply chain, allowing you to evaluate alternative plans in response to global changes in supply and demand. Graph algorithms, which include shortest path and geographical proximity, can help you manage and mitigate complex dependencies — in real time.
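The “end-to-end impact” question maps naturally onto graph traversal. A minimal sketch with an in-memory adjacency list follows; the parts and edges are toy data, and a real deployment would run this query inside a graph database such as TigerGraph rather than a Python dict.

```python
from collections import deque

# Toy supply graph: an edge points from a component to whatever consumes it.
edges = {
    "chip-a": ["ecu"],
    "chip-b": ["infotainment"],
    "ecu": ["model-x", "model-y"],
    "infotainment": ["model-y"],
}

def impacted(part):
    """Breadth-first search downstream of `part`.

    Returns every assembly and finished product that would stall if
    this part became unavailable — the end-to-end impact of an outage.
    """
    seen, queue = set(), deque([part])
    while queue:
        node = queue.popleft()
        for downstream in edges.get(node, []):
            if downstream not in seen:
                seen.add(downstream)
                queue.append(downstream)
    return seen
```

Reversing the edge direction turns the same traversal into the surplus question: if demand for a model drops, which upstream parts pile up?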
As several internal and external factors (involving parts, people and things) cannot be forecast, businesses must be ready to respond. What is the end-to-end impact of a change in supply? If a part is unavailable, what product can you build now with what you do have? Graph empowers you to take an active role in managing your response to demand changes, meaning you move away from a passive view of risk. If demand for a particular car model is suddenly dropping in the U.S. market, what parts will we now have in surplus? How can we best use these parts? What other options do I have? Graph analytics helps you answer the difficult “what-if” questions — and it even helps you ask and answer questions you had never imagined. Since we can’t predict the unpredictable, our next-best option is to be ready to act at any given moment. If you have a real-time, what-if mastery of the data, relationships and dependencies within your supply chain, you’ll be ready for any snag, shortage or surplus — minus the sting of the bullwhip.

Harry Powell is the head of industry solutions at TigerGraph.

DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! "
13,766
2,022
"How data and automation can help with sustainability | VentureBeat"
"https://venturebeat.com/automation/how-data-and-automation-can-help-with-sustainability"
"Community How data and automation can help with sustainability

The entire world is in the midst of a digital transformation, which has changed daily operations for countless businesses across industries. Technological advancements such as artificial intelligence (AI) and automation are helping company leaders operate at greater efficiency than ever before, generate revenue, and perhaps even make the world a better place in the process. But how?

Why sustainability makes sense

For years, companies of all sizes have recognized the inherent value of environmental, social and governance (ESG) initiatives when it comes to customer retention and smooth overall operations. Sustainability strategies are a smart business move that may foster company longevity and keep customers coming back.
However, while plenty of company leaders recognize the critical importance of sustainable initiatives, only about one-fourth of companies include sustainability as part of their business model, according to the International Institute for Management Development (IMD). For the greatest chance of long-term business success, the Switzerland-based organization encourages executives and company policymakers to first comply with local laws and regulations and then take a more proactive approach to sustainability. To that end, data and automation can help, by giving established companies and startups alike the necessary tools to meet their sustainability goals.

Breaking barriers and implementing green initiatives

Ideally, a company’s sustainable initiatives should be authentic and environmentally focused, rather than rooted in the hope of increased profits. Today’s tech-savvy consumers increasingly use their spending power to support environmentally conscious companies and are even willing to shell out a few extra dollars on sustainable products and brands. Forward-thinking companies can maintain transparency by disclosing their sustainability goals and initiatives publicly and by encouraging customer feedback. Yet that feedback won’t amount to much without the ability to make sense of it all, and automation can be a game-changer in this regard. Automation software can help alleviate some of the burden of data interpretation, enabling companies to speed up their green initiatives while saving time and money. For example, with automation software on hand, companies can quickly and easily track energy usage, the amount of waste produced daily, consumer habits, carbon footprint and more, in an effort to streamline operations.
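At its core, automated carbon-footprint tracking reduces to multiplying activity data by emission factors and summing. A minimal sketch follows; the factor values and record shapes are illustrative placeholders, not authoritative emission figures.

```python
# Toy carbon accounting: activity data times emission factors.
# These factors are made-up placeholders for illustration only.
EMISSION_FACTORS_KG_CO2E = {
    "electricity_kwh": 0.4,
    "diesel_litre": 2.7,
    "freight_tonne_km": 0.1,
}

def carbon_footprint(activities):
    """Sum estimated emissions (kg CO2e) for a batch of activity records.

    Each record is a dict: {"kind": <factor key>, "amount": <float>}.
    """
    total = 0.0
    for record in activities:
        total += EMISSION_FACTORS_KG_CO2E[record["kind"]] * record["amount"]
    return total

week = [
    {"kind": "electricity_kwh", "amount": 1200},
    {"kind": "diesel_litre", "amount": 300},
    {"kind": "freight_tonne_km", "amount": 5000},
]
footprint = carbon_footprint(week)
```

The value of automation is less in this arithmetic than in collecting the activity records continuously, so the totals can be monitored daily instead of assembled by hand once a quarter.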
Depending on the amount of data collected, it could take months for a human worker to properly organize and analyze the relevant information. Technology gets us there much faster, and with greater accuracy.

Data-powered insights to inform optimization

When it comes to a company’s sustainability goals, waste reduction tends to be at the forefront of the conversation, and for good reason: While it’s difficult to determine exact numbers for industrial waste production, waste generation is a massive global problem that’s only expected to grow. What’s more, solid waste management is an inherently wasteful process in its own right, contributing some 1.6 billion tons of greenhouse gas emissions to the atmosphere in 2016 alone, according to The World Bank. A massively wasteful industry, manufacturing may benefit greatly from the interplay of data, automation and sustainability, starting with mindful inventory management. Excess inventory can clog up the supply chain and landfills alike. Yet through data-based insights and intelligent automation, businesses may be able to strike a balance between too much stock and not enough, significantly reducing waste, emissions and overall environmental impact.

Increased efficiency of processes

Waste comes in many forms, and countless businesses are guilty of wasting time. The adage “time is money” comes into play here — inefficient processes and redundancies can significantly hinder day-to-day operations while wasting company time and money. The good news is that automation can help bridge some gaps, improving the efficiency of processes in every corner of the supply chain. Human error contributes significantly to the problem of inefficiency and wasted company time, and company leaders across industries are taking note. Companies can decrease workplace stress and redundancies via workflow automation, allowing employees to focus on meaningful work and potentially make fewer mistakes.
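The inventory balance mentioned above, between too much stock and not enough, is often framed as a reorder-point calculation. A minimal sketch under a normal-demand assumption follows; the service factor and the demand history are illustrative, and real systems fit demand models to their own data.

```python
import statistics

def reorder_point(daily_demand, lead_time_days, service_factor=1.65):
    """Classic reorder-point heuristic.

    Reorder when stock falls to the expected demand over the supplier
    lead time, plus a safety-stock buffer sized from demand variability.
    A service_factor of ~1.65 corresponds to roughly a 95% service
    level under a normal-demand assumption (illustrative only).
    """
    mean = statistics.mean(daily_demand)
    sigma = statistics.pstdev(daily_demand)
    safety_stock = service_factor * sigma * lead_time_days ** 0.5
    return mean * lead_time_days + safety_stock

history = [42, 38, 45, 40, 50, 39, 44]        # units sold per day
point = reorder_point(history, lead_time_days=4)
```

Feeding such a rule with continuously collected demand data, rather than a static guess, is precisely where data and automation keep stock low without risking shortages.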
Companies looking to implement workflow automation into their sustainability plan should start small and identify the operations where automation will reap the biggest payoff. That payoff could encompass financial goals, environmental goals, or another plan altogether.

Weighing cost vs. benefit

For small business owners, implementing sustainability initiatives may seem more like a pipe dream than a tangible goal, as the technology can be costly to implement. What’s more, businesses that are using technology to drive sustainability must employ talented workers who can tap into those resources and streamline operations for the greatest economic and environmental benefit. However, as companies can leverage automation and data analytics to increase efficiency, adjust energy usage, reduce waste and otherwise help with sustainability, the cost of investing in automation is worth it. By giving company leaders the ability to see the big picture in terms of carbon footprint, data and automation can help optimize operations and improve a company’s bottom line.

Charlie Fletcher is a freelance writer passionate about workplace equity, whose published works cover sociology, technology, business, education, health, and more. "