Dataset schema (one record per row): id — int64 (range shown: 0 to 17.2k); year — int64 (range shown: 2k to 2.02k); title — string (7 to 208 characters); url — string (20 to 263 characters); text — string (852 to 324k characters).
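Assuming this dump mirrors a Hugging Face-style dataset with the columns above, a minimal Python loading sketch follows; the dataset identifier below is a hypothetical placeholder, not the dump's actual name.

    # Sketch only: load a dataset with the schema above and inspect one record.
    # "user/venturebeat-articles" is a hypothetical placeholder identifier.
    from datasets import load_dataset

    ds = load_dataset("user/venturebeat-articles", split="train")
    print(ds.features)                 # id, year (int64); title, url, text (strings)
    row = ds[0]
    print(row["id"], row["year"], row["title"])
    print(row["text"][:200])           # first 200 characters of the article body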
id: 15867 | year: 2021
"Conversational AI startup Cognigy nabs $44M | VentureBeat"
"https://venturebeat.com/2021/06/01/conversational-ai-startup-cognigy-nabs-44m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Conversational AI startup Cognigy nabs $44M Share on Facebook Share on X Share on LinkedIn Valassis Digital built a chatbot to help customer find cars and car dealers. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Conversational AI startup Cognigy today announced that it closed a $44 million series B funding round led by Insight Partners, which brings the company’s total raised to over $50 million to date. Cofounder and CEO Philipp Heltewig says that the proceeds will be put toward accelerating customer growth, creating new partnerships, and continuing to enhance Cognigy’s AI platform. The ubiquity of smartphones and messaging apps — as well as the pandemic — have contributed to the increased adoption of conversational technologies. Fifty-six percent of companies told Accenture in a survey that conversational bots and other experiences are driving disruption in their industry. And a Twilio study showed that 9 out of 10 consumers would like the option to use messaging to contact a business. Founded in 2016 in Düsseldorf, Germany, Cognigy provides a low-code platform that enables customers to create text and voice virtual agents. From a graphical conversation editor, users can manage the conversational flow of chatbots, developing experiences across a range of channels including the web, WhatsApp, Amazon Alexa, and more. “Heltewig and former communications engineer Sascha Poggemann recognized a strong need for enterprises to adopt mature language technologies five years ago. With their combined knowledge in enterprise software and communication, they built Cognigy to what it’s known for today: an AI-first, self-service automation solution for large enterprises, especially in customer service,” a spokesperson told VentureBeat via email. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Above: Cognigy’s web dashboard. With Cognigy, customers can retain conversation context across channels and allow handovers between different interaction points. A module lets customers tune bots by reviewing the natural language understanding (NLU) results and providing feedback, with an intent review system that delivers assistance with NLU training. For added flexibility, Cognigy offers integrations with natural language understanding engines including Google Dialogflow, Microsoft LUIS, and IBM Watson. And the platform works with existing live chat tools like RingCentral Engage Digital, Avaya Oceana, and Genesys Pure Cloud. 
Users can tap Cognigy to perform automated regression testing, ensuring that business objectives are met after flow changes. The platform also supports extensions, which hook directly into a flow editor and can be written and deployed by anyone, the company said. Cognigy's Snapshot capability, meanwhile, orchestrates the packaging and migration of virtual agents and NLU models. Snapshots can be used in combination with a command-line interface to automate processes like roll-out, external backup, or bot configuration.

"Cognigy [acts] as an intelligent AI-powered middleware that can engage with customers and employees on the one hand, while being deeply integrated with enterprise systems and robotics on the other," the spokesperson said. "Our customers use this technology for a broad range of use cases, ranging from human resources virtual assistants to product recommendations and intelligent contact center [platforms]. The recurring pattern is end-to-end automation, with deep system integration; a conversational interface — regardless of the channel — is only truly useful if it can actually engage in a transaction."

A growing market

Cognigy, which has more than 100 employees, occupies a chatbot market that's anticipated to be worth $142 billion by 2024, according to Insider Intelligence, up from $2 billion in 2019. Gartner predicts that over 50% of enterprises will spend more per annum on chatbot creation than mobile app development by this year. And Juniper Research expects that 75% to 90% of customer queries will be handled by chatbots within the next year.

Even before the pandemic, autonomous agents were on the way to becoming the rule rather than the exception, partly because consumers prefer it that way. According to research published last year by Vonage subsidiary NewVoiceMedia, 25% of people prefer to have their queries handled by a chatbot or other self-service alternative. And Salesforce says roughly 69% of consumers choose chatbots for quick communication with brands.

Despite competition from Gupshup, Ada, Omilia, Mindsay, Directly, and others, Cognigy claims its over 400 customers now include brands like Lufthansa, Mobily, Pfizer partner BioNTech, Vueling Airlines, Bosch, and Daimler. As of 2021, they've built and deployed thousands of virtual agents in more than 120 languages.

"As a global leader in Conversational AI, we have a responsibility that goes beyond our ambitions for Cognigy. Our responsibility now is to continuously develop our product to lower the barriers to entry for enterprises to adopt AI in their organizations and help bring about a world in which artificial intelligence works alongside human workers in leading enterprises globally," Heltewig said. "With this funding round, we can achieve this vision by continuing to hire the best talent, developing our platform Cognigy.AI, and establishing ourselves as the global leader in Conversational AI."
"
id: 15868 | year: 2021
"Can AI and cloud automation slash a cloud bill in half? | VentureBeat"
"https://venturebeat.com/2021/06/08/can-ai-and-cloud-automation-slash-a-cloud-bill-in-half"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Can AI and cloud automation slash a cloud bill in half? Share on Facebook Share on X Share on LinkedIn Presented by CAST.AI Many companies are accelerating their cloud plans right now, and most say that their cloud usage will exceed prior estimates due to the new demands posed by the global pandemic. Cloud computing is becoming a must-have resource, especially for young tech companies. And most of them are migrating to Amazon, Google, or Azure, lured by seemingly attractive offers. What many companies don’t realize is how dramatically the cloud spend can increase given that those expenses aren’t charged up-front. Organizations are often unaware of how easy it is to become locked into service at hard-to-understand prices, says Laurent Gil, co-founder and Chief Product Officer at CAST AI. “Vendor lock-in starts whenever you start using a service in a way that serves the purpose of the cloud provider,” he explains. “You have to choose cloud providers carefully and understand that the decision you make today will impact your operation for at least a few years because leaving this service is going to be very hard.” The biggest challenge in managing cloud costs Complexity is a real challenge for startups trying to make DevOps work in the cloud. But what they face is complexity by design on the part of cloud service providers, Gil says, simply because making it easy isn’t in the interest of a cloud provider. “How do you manage your cloud infrastructure with simple tools that will tell you what’s happening at a glance and whether you’re doing a good job managing it when your cloud bill is 80 pages long?” he asks. “ By design, cloud bills only tell you how much you spent, not why you’re spending that much.” It’s an urgent question to tackle, particularly for small companies that can now use a few tools that allow you to understand exactly where the money goes, why you spent that much, and why your bill increases every month. And the more you pay for the cloud as the company grows, the more complex and difficult it becomes for humans to make decisions about cost optimization. “You often don’t realize costs are mounting in the beginning, and then a year or two later, you’re confronted with a technical, financial, or operational debt,” Gil says. “It’s almost as if you inherit this situation. You don’t notice it in the beginning, but it catches up with you in a few months or years.” To understand cloud costs, you have to go much deeper than the simple ratio of number of customers to the amount of spend. Do you need all these virtual machines or services? 
Can you use a service from another cloud provider? Will it run cheaper or with less compute in a different cloud? Is there a performance-cost tradeoff — and if so, where is it? None of these questions are easy to answer unless you use some form of automation. And that's traditionally been difficult — despite the fact that CPUs, memory, and storage are so readily available everywhere and should be extremely commoditized, Gil adds. "Tackling these dangerously high cloud bills requires automation," he says. "Machine learning is capable of rightsizing: adding, deleting, and moving machines on the fly, automatically."

The role of AI in cloud management

Machine learning is a crucial component in cloud cost optimization because of its ability to recognize and act on patterns. For example, if a SaaS provider experiences a lot of human-based traffic over the course of 24 hours, an AI engine will recognize the pattern to requisition and automatically add machines during busier parts of the day and delete those machines when they're no longer needed. An airline may run a rare deep-discount promotion, and millions of people rush online to buy tickets in a wave so large that it looks like a DDoS attack. But since the AI uses a split-second decision-making process, it only needs a moment to recognize a swift and large acceleration in traffic and provision immediately, making the decision to add a virtual machine far faster than a human could have handled it, any time of the day. "This is where machine learning works great," explains Gil. "It can make these decisions based on independent business elements that determine how busy an application is."

The AI engine will always check whether the machines are the right type and use the amount of compute you need. From a DevOps perspective, if you're using 100 computers that are being used 80 or 90 percent of the time, you're doing a great job. But an AI can calculate more precisely and check whether you need 100 8-core machines or 50 16-core machines, or an ARM processor instead of an Intel processor. "The AI engine is trained to not make any assumptions, but optimize using any means that it has learned," Gil says. "If the image of this application is compiled for both Intel and ARM, the AI engine can slash your costs by half just by choosing the right machine at a given time."

Another example is using spot instances: highly discounted VMs that almost all hyperscale cloud providers offer. The discount is usually between 60 and 80 percent, but the tradeoff is that you only get a short warning when the cloud provider takes those machines back. This is impossible to handle for a human — but an AI can quickly spin up another machine and look for any other available spot instances. The good thing about using AI in cloud automation is that it can make decisions based on somewhat correlated variables with a limited amount of information. "It's a bit of a black box in the end, but as humans we see its results clearly," Gil says. "We can easily judge whether our AI engine is doing a good job based on how much money we save or how much we optimize."
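To make the rightsizing arithmetic above concrete (say, 100 8-core machines versus 50 16-core machines, x86 versus ARM, on-demand versus spot), here is a small, purely hypothetical Python sketch. The prices, node counts, and discount are invented for illustration; this is not CAST AI's engine or any real cloud price list.

    # Hypothetical hourly prices (illustrative only, not real cloud pricing).
    OPTIONS = [
        {"name": "100 x 8-core x86 (on-demand)",  "nodes": 100, "cores": 8,  "price": 0.34},
        {"name": "50 x 16-core x86 (on-demand)",  "nodes": 50,  "cores": 16, "price": 0.62},
        {"name": "50 x 16-core ARM (on-demand)",  "nodes": 50,  "cores": 16, "price": 0.49},
        {"name": "50 x 16-core ARM (spot, -70%)", "nodes": 50,  "cores": 16, "price": 0.49 * 0.30},
    ]

    REQUIRED_CORES = 800  # total compute the workload actually needs

    def monthly_cost(opt):
        return opt["nodes"] * opt["price"] * 24 * 30

    for opt in OPTIONS:
        if opt["nodes"] * opt["cores"] >= REQUIRED_CORES:   # capacity check
            print(f'{opt["name"]:38s} ${monthly_cost(opt):>10,.0f}/month')

A production optimizer would also have to weigh spot-interruption risk and per-workload performance, as Gil notes.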
Cutting customer costs in half

"AI and ML are great tools for reducing the complexity in managing a complex infrastructure for our customers," Gil says. "If you replace something complex with something else that is also complex, you haven't done your job."

A recent CAST AI client, an online grocery store, started using the company's new product that optimizes EKS applications from Amazon. The forecast report indicated that they could save 50 percent of their time by moving from one type of machine to another. "Just by doing this, the client reduced their bill from $180,000/month worth of compute to $70,000 after one week, without affecting the performance at all," Gil says.

"And it's a good thing for the cloud providers too — whenever you commoditize a resource, customers use more rather than less of it," he adds. "We're ensuring that compute capacity is used the right way, democratizing it, and helping companies funnel those costs back into bigger and better projects."

Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they're always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected].
"
id: 15869 | year: 2021
"Study finds that few major AI research papers consider negative impacts | VentureBeat"
"https://venturebeat.com/2021/07/01/study-finds-that-few-major-ai-research-papers-consider-negative-impacts"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Study finds that few major AI research papers consider negative impacts Share on Facebook Share on X Share on LinkedIn Tensor processing units (TPUs) in one of Google's data centers. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In recent decades, AI has become a pervasive technology, affecting companies across industries and throughout the world. These innovations arise from research, and the research objectives in the AI field are influenced by many factors. Together, these factors shape patterns in what the research accomplishes, as well as who benefits from it — and who doesn’t. In an effort to document the factors influencing AI research, researchers at Stanford, the University of California, Berkeley, the University of Washington, and University College Dublin & Lero surveyed 100 highly cited studies submitted to two prominent AI conferences, NeurIPS and ICML. They claim that in the papers they analyzed, which were published in 2008, 2009, 2018, and 2019, the dominant values were operationalized in ways that centralize power, disproportionally benefiting corporations while neglecting society’s least advantaged. “Our analysis of highly influential papers in the discipline finds that they not only favor the needs of research communities and large firms over broader social needs, but also that they take this favoritism for granted,” the coauthors of the paper wrote. “The favoritism manifests in the choice of projects, the lack of consideration of potential negative impacts, and the prioritization and operationalization of values such as performance, generalization, efficiency, and novelty. These values are operationalized in ways that disfavor societal needs, usually without discussion or acknowledgment.” In the papers they reviewed, the researchers identified “performance,” “building on past work,” “generalization,” “efficiency,” “quantitative evidence,” and “novelty” as the top values espoused by the coauthors. By contrast, values related to user rights and ethical principles appeared very rarely — if at all. None of the papers mentioned autonomy, justice, or respect for persons, and most only justified how the coauthors achieved certain internal, technical goals. Over two-thirds — 71% — didn’t make any mention of societal need or impact, and just 3% made an attempt to identify links connecting their research to societal needs. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 
One of the papers included a discussion of negative impacts and a second mentioned the possibility. But tellingly, none of the remaining 98 contained any reference to potential negative impacts, according to the Stanford, Berkeley, Washington, and Dublin researchers. Even after NeurIPS mandated that coauthors who submit papers must state the "potential broader impact of their work" on society, beginning with NeurIPS 2020 last year, the language leaned toward positive consequences, often mentioning negative consequences only briefly or not at all.

"We reject the vague conceptualization of the discipline of [AI] as value-neutral," the researchers wrote. "The upshot is that the discipline of ML is not value-neutral. We find that it is socially and politically loaded, frequently neglecting societal needs and harms, while prioritizing and promoting the concentration of power in the hands of already powerful actors."

To this end, the researchers found that ties to corporations — either funding or affiliation — in the papers they examined doubled to 79% from 2008 and 2009 to 2018 and 2019. Meanwhile, ties to universities declined to 81%, putting corporations nearly on par with universities for the most-cited AI research. The trend is partly attributable to private sector poaching. From 2006 to 2014, the proportion of AI publications with a corporate-affiliated author increased from about 0% to 40%, reflecting the growing movement of researchers from academia to corporations.

But whatever the cause, the researchers assert that the effect is the suppression of values such as beneficence, justice, and inclusion. "The top stated values of [AI] that we presented in this paper such as performance, generalization, and efficiency … enable and facilitate the realization of Big Tech's objectives," they wrote. "A 'state-of-the-art' large image dataset, for example, is instrumental for large scale models, further benefiting [AI] researchers and big tech in possession of huge computing power. In the current climate where values such as accuracy, efficiency, and scale, as currently defined, are a priority, user safety, informed consent, or participation may be perceived as costly and time consuming, evading social needs."

A history of inequality

The study is only the latest to argue that the AI industry is built on inequality. In an analysis of publications at two major machine learning conference venues, NeurIPS 2020 and ICML 2020, none of the top 10 countries in terms of publication index were located in Latin America, Africa, or Southeast Asia. A separate report from Georgetown University's Center for Security and Emerging Technology found that while 42 of the 62 major AI labs are located outside of the U.S., 68% of the staff are located within the United States.

The imbalances can result in harm, particularly given that the AI field generally lacks clear descriptions of bias and fails to explain how, why, and to whom specific bias is harmful. Previous research has found that ImageNet and OpenImages — two large, publicly available image datasets — are U.S.- and Euro-centric. Models trained on these datasets perform worse on images from Global South countries. For example, images of grooms are classified with lower accuracy when they come from Ethiopia and Pakistan, compared to images of grooms from the United States.
In this vein, because of how images of words like "wedding" or "spices" are presented in distinctly different cultures, publicly available object recognition systems fail to correctly classify many of these objects when they come from the Global South.

Initiatives are underway to turn the tide, like Khipu and Black in AI, which aim to increase the number of Latin American and Black scholars attending and publishing at premier AI conferences. Other communities based on the African continent, like Data Science Africa, Masakhane, and Deep Learning Indaba, have expanded their efforts with conferences, workshops, dissertation awards, and curricula developed for the wider African AI community. But substantial gaps remain.

AI researcher Timnit Gebru was fired from her position on an AI ethics team at Google reportedly in part over a paper that discusses risks associated with deploying large language models, including the impact of their carbon footprint on marginalized communities and their tendency to perpetuate abusive language, hate speech, microaggressions, stereotypes, and other dehumanizing language aimed at specific groups of people. Google-affiliated coauthors later published a paper pushing back against Gebru's environmental claims.

"We present this paper in part in order to expose the contingency of the present state of the field; it could be otherwise," the University College Dublin & Lero researchers and their associates wrote. "For individuals, communities, and institutions wading through difficult-to-pin-down values of the field, as well as those striving toward alternative values, it is a useful tool to have a characterization of the way the field is now, for understanding, shaping, dismantling, or transforming what is, and for articulating and bringing about alternative visions."
"
id: 15870 | year: 2021
"AI experts refute Cvedia's claim its synthetic data eliminates bias | VentureBeat"
"https://venturebeat.com/2021/07/06/ai-experts-refute-cvedias-claim-its-synthetic-data-eliminates-bias"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI experts refute Cvedia’s claim its synthetic data eliminates bias Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Most of AI’s critical challenges aren’t actually about AI; they’re about data. It’s biased. It’s collected and used without regard for privacy and consent. And machine learning systems require astronomical amounts of it. Now as privacy laws proliferate, it will also be harder to come by. Enterprises are increasingly considering synthetic data to power their AI. Digitally generated as a stand-in for real-world data, it is touted as truly anonymous and bias-free. And because it’s supposed to be free from all the issues of messy real-world data, much less of it would be required. But that’s all easier said than done. While enterprises across industries are already using synthetic data to train voice recognition, computer vision, and other systems, serious issues persist. We know the original training data isn’t always truly obscured, and there’s currently little evidence to suggest synthetic data can effectively mitigate bias. On top of that, performance has been mixed compared to systems trained on real-world data. Recently, synthetic data and computer vision company Cvedia announced it has “officially solved the ‘domain adaptation gap'” with a proprietary synthetic data pipeline it claims performs better than algorithms trained on real data. The company is also claiming its system is free of bias, built on “zero data,” and will enable customers to “sidestep the entire data process.” If true, such advancements could strengthen the case for the use of synthetic data in AI, but experts say Cvedia lacks sufficient evidence and has oversold its work. “It’s not solving the entire domain gap, nor is it eliminating bias from the systems,” Mike Cook, an AI researcher at the Queen Mary University of London, told VentureBeat. “It’s definitely good. Like I say, I’ve seen similar techniques elsewhere. But it’s not doing all the amazing things being claimed here.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The domain gap The “domain gap” or “domain adaptation gap” refers to the way AI trained on a specific type of data struggles to transfer its knowledge to a different type of data. 
Beyond comparisons between synthetic and real-world performance, this performance issue often happens with the deployment of AI systems in general, since they're inherently moving from a clean environment into real scenarios. There's often also a domain gap when applying AI to a new task. Cook said "it's definitely a big problem" but that it's not the type that can be "solved."

In its announcement, Cvedia doesn't clearly spell out what it has actually accomplished. Other than one vague metric — a precision improvement of 170% while sustaining a gain of 160% on recall over benchmarks — the company didn't release any information about the data or its processes. Cvedia cofounder and CEO Arjan Wijnveen told VentureBeat the data is mainly for EO/IR sensors used in various types of cameras, specifically for detection, classification, and regression algorithms. But he wouldn't share any information about tests and trials, which both Cook and Os Keyes, an AI researcher at the University of Washington, agreed are needed to support the claims.

Wijnveen declined to share such information with VentureBeat, calling it proprietary. But he did say the metric released and overall claims are based on just one use case — defense supplier FLIR Systems, which provided the statistic from its own evaluation. Cook and Keyes agree that even if the company has seen performance success with one system, that's far from solving the domain gap problem. They became especially skeptical upon hearing that Cvedia is funded by FLIR Systems and that the defense company's CTO, Pierre Boulanger, is also one of Cvedia's two legal advisors (Wijnveen is the other).

Data is data

Synthetic data is typically created by digitally regenerating real-world data so it's still mathematically representative. But in its press release, Cvedia claims it didn't use any data at all. Wijnveen later explained it differently to VentureBeat, saying "it is simply created out of thin air" and that this "goes against all the things data scientists stand for, but for us it really does work." Specifically, he explained the company tapped a team of 50 artists to create 3D models of various objects found in the real world, which the company then sells to be used for training AI systems. He added that labeling is "fully automated" and that a 3D engine "simply generates data with the labels and signs." For these reasons, he claims AI built on these models is free of bias.

But models represent data, even if they're created internally rather than collected. And someone had to design every part of the systems that made all this happen. Wijnveen also admitted there are some exceptions, where real photos were used and annotations were done by hand. Overall, Cook called the belief that the technique eliminates bias an "eyebrow-raising claim."
"Generating your own data is definitely a useful approach, but in no way would anyone consider that free of bias," he said. "Who are these artists? Which objects did they model? Who chose them? Suppose this is a targeting AI for a military drone and I'm going to teach it to identify civilian targets from military ones. The artists still need to be told what to model. If they're told to model mosques as potential military targets and American bases as civilian ones, we wouldn't say that was unbiased simply because they're 3D models."

Keyes agreed, citing how limitations act as a bias in this scenario: "Whether you have 50 photographers out on the street taking pictures or 50 CAD artists in a basement making them up, those 50 people are still going to be limited in what objects they can see and imagine."

Defining bias

Even presented with these points, Wijnveen argued that systems trained on Cvedia's synthetic data are free of bias. He doubled down with regard to bias around race and face detection, saying "these are not biases we suffer from." It turns out, he was using his own definition of bias.

"There's always going to be tradeoffs, right, so it'll never be a perfect solution. But very often, depending on the jurisdiction of the application on top of it, you're still going to get suitable results," Wijnveen said. "So it really is about having a productive commercial level application that will work in the field, and not so much from a scientific, data science point of view." He went on to say "there are nuances" and we need to "redefine [bias] in terms of academic versus productive-ized biases."

But bias is no trivial issue when it comes to machine learning and AI. It plagues the field and can creep into algorithms in several ways. Cook said bias elimination is an "extremely strong claim" and that it makes the press release "transparent as a piece of PR." He added that saying you've eliminated bias means something specific to people, who overwhelmingly view the issue through a lens Wijnveen calls "academic." Keyes compared the claim to a doctor declaring they had cured cancer after treating one melanoma.

"No academic researcher worth their salt genuinely believes one can entirely eliminate bias because of how contextual to the use case 'bias' is," Keyes said. "The thing that makes this not academic is that there's zero actual detail or evidence. If an academic researcher tried to make a claim like this, they would be required to explain precisely what they were doing, how they were defining bias, what the system was for. He's done none of that. He's just declaring 'We fixed the problem! Please don't ask us how; it's proprietary.'"

Maintaining AI realism

In spite of issues with Cvedia's work and with synthetic data at large, the general approach may hold promise. Keyes and Cook agree the company's work could be interesting, and DeepMind has been working on something similar since 2018. If synthetic data truly could obscure its origins and perform as well as systems trained on real-world data, that would be a step forward, particularly when sensitive information is involved.

But as more enterprises consider using synthetic data and move to implement AI in various forms, caution is warranted. While there are practical strategies for mitigating bias, enterprises should remain highly skeptical of claims that it has been eliminated. And tools meant to help spot and mitigate bias often underdeliver, as these issues run deep and elude easy fixes. "It's important to take steps to improve how we build AI systems, but we also need to be realistic about the process and realize it requires attacking from multiple angles," Cook said. "There's no silver bullet."
"
id: 15871 | year: 2021
"Aurora's SPAC merger comes amid self-driving car delays | VentureBeat"
"https://venturebeat.com/2021/07/24/auroras-spac-merger-comes-amid-self-driving-car-delays"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Aurora’s SPAC merger comes amid self-driving car delays Share on Facebook Share on X Share on LinkedIn Inside a self-driving car with Mobileye technology. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Self-driving car startup Aurora is ready to go public in a reverse merger with Reinvent Technology Partners Y , a special acquisition company (SPAC). The merger will give Aurora an infusion of cash to develop autonomous trucks and, later, self-driving passenger cars. Going public is not the norm for companies that don’t have a working and profitable business model. What makes Aurora’s reverse IPO even more unusual is that it comes as the self-driving car industry is struggling with missed deadlines, shuttered projects, unsettled technical challenges, growing cash-burn rates, and loss of public trust. The race to pump cash into self-driving car startups can either indicate confidence in a technological breakthrough in the near term, or a desperate run to keep operations afloat until someone figures out how to overcome one of the greatest challenges of artificial intelligence. What is Aurora up to? Aurora was founded in 2017 by three veterans of the autonomous driving industry: Chris Urmson, former CTO of Google’s self-driving project before it became Waymo; Sterling Anderson, former head of Tesla Autopilot; and Drew Bagnell, former head of Uber’s self-driving team. Aurora develops hardware and software for autonomous driving and calls its stack Aurora Driver. The company’s self-driving technology uses lidars , computer vision, and high-definition maps of roads. The company started out with autonomy for passenger cars and got involved in self-driving trucks in 2018. Aurora says its technology has so far accumulated 4.5 million miles of physical road test and 6 billion miles of simulated driving (by comparison, Waymo has driven more than 20 million miles on public roads, with nearly 7 million miles in Arizona alone). The company has integrated and tested its technology on cars and trucks of Volvo, PACCAR, and Toyota, all of which are partners and have invested in the company. It’s also in partnership with Uber (another of its investors), from whom it bought its self-driving unit, the Advanced Technology Group (ATG), in 2020. The acquisition gave Aurora access to Uber’s talent and experience and put Uber on Aurora’s board. According documents Aurora has published, it plans to launch commercial self-driving trucks in late 2023. 
The declared goal is level-4 self-driving, in which the AI takes care of most of the driving and human drivers only take control in complicated settings. Aurora also plans to follow up with self-driving passenger car technology in 2024 with last-mile delivery and ride-hailing services.

The SPAC

A SPAC is a shell company that goes to the stock market for the sole purpose of a reverse merger. It has no business or operations. Sometimes it's called a "blank check" company, because investors are basically trusting its owners to make a good acquisition without knowing in advance which company it will be. Once the merger is made, the SPAC's name is changed to that of the acquired company.

For the company being acquired, a SPAC relieves the complexities of the IPO process, the road show, and the pre-IPO scrutiny. This is especially beneficial for companies such as Aurora, which are going public on the mere promise of delivering a product in the future and don't have a working business model to present. Basically, SPACs give companies a fresh new round of funding from the stock market minus the usual complications. The reverse merger with Reinvent will provide Aurora with more than $2 billion in cash to continue its costly and unprofitable operations for another few years.

But SPACs aren't without trade-offs. As a publicly traded company, Aurora will be under public scrutiny and will have to be fully transparent and publish complete details of its operations and expenses, which can be unpleasant when you're burning investor money without making any profit.

Reinvent was launched by LinkedIn co-founder and investor Reid Hoffman, Zynga founder Mark Pincus, and investor Michael Thompson. Reinvent's investors include other Aurora funders and partners, including Sequoia Capital, T. Rowe Price Associates, Index Ventures, Uber, Baillie Gifford, Volvo, and PACCAR. Hoffman is also a partner at Greylock, a VC firm that, along with Index Ventures, invested $90 million in Aurora in 2018. The funding round put Hoffman on Aurora's board. (According to an Aurora statement, Hoffman "is not a member of the transaction committee, was not permitted to attend any sessions of the transaction committee, and has recused himself from discussions and decisions of Reinvent's board about the proposed transaction. Mr. Hoffman also recused himself from discussions of Aurora's board of directors and management about the proposed transaction and from voting on matters related to the proposed transaction.")

The business plan built on self-driving vehicles

Aurora's decision to start with the low-hanging fruit of self-driving trucks makes sense from a business perspective. Autonomous ride-hailing has so far proven to be a hard nut to crack. Both Uber and Lyft have sold their self-driving units and canceled short-term plans to launch their own robo-taxi services. And Waymo, which has access to Google's virtually bottomless supply of money, has only launched its fully self-driving service (with remote backups) in limited jurisdictions and without making profits.

Achieving L4 self-driving with trucks, however, is supposedly much easier (though there's still no company with a fully operational and profitable product yet). Trucks spend most of their time on highways and freeways, where they don't have to deal with pedestrians, unprotected turns, and other thorny situations.
Waabi, another self-driving car startup that recently came out of stealth with $85 million in funding, has also set its sights on self-driving trucks in the short term. If Aurora manages to achieve its goal, the self-driving truck product will provide it with access to a huge market in which Volvo and PACCAR have a sizable share. It can then use the profits to fund its continued research and development of self-driving technology for urban areas.

The big financial drain

But for the moment, Aurora is losing money at an accelerating pace ($214 million in 2020 vs. $94 million in 2019), and the financial support it receives from the SPAC merger will be crucial for the next few years. According to its documents, Aurora doesn't expect to become profitable before 2027, three years after it delivers its self-driving truck product. And given the history of missed deadlines in the self-driving industry, it won't be surprising to see some adjustments to Aurora's timeline. (Aurora acknowledges this in its investor presentation deck: "It is possible that our technology will have more limited performance or may take us longer to complete than is currently projected. This could materially and adversely affect our addressable markets, commercial competitiveness, and business prospects.")

If the plan works out, Aurora's investors would see huge returns on their investment. But there are a lot of ifs in Aurora's road map, including four slides that detail 68 risk factors, several of which can spell disaster for the entire business model, making it seem like a very risky gamble. At this point, it's hard to say whether the SPAC merger will turn out to be a huge business success or a last-ditch effort by Aurora's initial and new investors to keep the self-driving car company afloat, hoping that its roster of experienced and talented engineers will make things work before the investors run out of cash or patience (or both).

Ben Dickson is a software engineer and founder of TechTalks. He writes about technology, business, and politics.
"
id: 15872 | year: 2021
"Customer engagement analytics startup Retain.ai nabs $23M | VentureBeat"
"https://venturebeat.com/2021/08/05/customer-engagement-analytics-startup-retain-ai-nabs-23m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Customer engagement analytics startup Retain.ai nabs $23M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Retain.ai , a platform that gives enterprises a view of customer engagement across teams, processes, and apps, has raised $23 million in a funding round led by Emergence Capital, with participation from Baseline Ventures, Upside Partnership, and Afore Capital. The new funding will be used to support growth and more than double Retain.ai’s workforce by the end of 2021, cofounder and CEO Eric Chernoff said. This round brings the company’s total raised to more than $27 million to date. As companies grow, it can become difficult for them to understand how all of their divisions are servicing customers. This can lead to investing too much effort in the wrong customers and not investing enough with the right customers. For example, customers that aren’t paying can take up the most time from product, engineering, marketing, and other teams. Unfortunately, gathering the data needed for customer engagement analysis usually requires time-consuming, account-specific timesheets, process and time studies, or analyses using data from disparate sources of record. Retain.ai aims to automate the process by providing a breakdown of customer data. The platform works with browser-based apps to create a picture of customer engagement, providing customer-facing teams and managers measurements of internal process efficiency. “Retain’s [engine] delivers a trusted, flexible system for identifying and sharing the habits that drive customer retention and revenue,” Chernoff, a former LiveRamp employee who cofounded Retain.ai with Vlad Shulman in 2020, told VentureBeat via email. “Every employee across the customer lifecycle deserves a copilot, powered by billions of monthly data points , that can provide recommendations such as ‘relative to accounts that grow 3 times, we noticed you could be doing more of the things that work for other accounts.’ With Retain as that copilot, organizations can propagate the best habits across entire teams and processes, making everyone better at their job.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Unifying customer data At setup, admins using Retain build an “allow list” of apps, web pages, and attributes to capture data and process workflows. 
The platform converts this data into actionable information via visualizations and summarizations, providing a source of truth for customer, team, and app interactions across a company.

According to Chernoff, the Retain platform can answer questions about return on investment relative to customer spend, which can be used to create new revenue centers for customer success. Because Retain can capture engagement time on individual accounts, outside of contracted time, companies can leverage this to upsell service contracts, Chernoff says. Retain also provides visibility into customer relationships to act as an early warning sign for churn. Brands can use it to create "relationship scorecards" that enable them to monitor customer interactions and course-correct if necessary.

Above: A view of Retain.ai's dashboard, which shows data collected from various customer and team sources in one place.

"[Retain helps] companies to understand overall cost-to-serve customers through insights on efforts [and] activities that go into serving customers throughout their lifecycle," Chernoff continued. "[Most] leaders are struggling to focus on the highest value processes and customers and don't know how to remedy the situation … With our background in data connectivity, we saw an opportunity to apply the same techniques associated with adtech … to help companies better understand whether or not their investment in a particular customer's success was beneficial to their bottom line."

San Francisco, California-based Retain, which has 20 employees, says its software is now being used by thousands of users across over a dozen Fortune 500 companies, including Google, Nielsen, and Salesforce. Annual recurring revenue is reportedly up 8 times over the last 12 months, and growth at Retain's current clients is averaging a 36 times uptick.

"My goal is for Retain to be the next generation of customer experience data and replace all the spliced-together self-reporting data and time-consuming consulting … [For our clients, we're] returning the 23,000 hours per year spent on cumbersome internal processes to maximize customer-facing engagement [while] growing revenue 25% by boosting engagement with high-value customers and increasing retention," Chernoff said. "With enterprises adopting work-from-anywhere and hybrid models, [we] believe that everyone at a company is in a long-distance relationship with their customers and team. As a result, enterprises need visibility and to ensure nothing falls through the cracks more than ever."
"
id: 15873 | year: 2021
"OpenAI launches Codex, an API for translating natural language into code | VentureBeat"
"https://venturebeat.com/2021/08/10/openai-launches-codex-an-api-for-translating-natural-language-into-code"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages OpenAI launches Codex, an API for translating natural language into code Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. OpenAI today released OpenAI Codex , its AI system that translates natural language into code, through an API in private beta. Able to understand more than a dozen programming languages, Codex can interpret commands in plain English and execute them, making it possible to build a natural language interface for existing apps. Codex powers Copilot , a GitHub service launched earlier this summer that provides suggestions for whole lines of code inside development environments like Microsoft Visual Studio. Codex is trained on billions of lines of public code and works with a broad set of frameworks and languages, adapting to the edits developers make to match their coding styles. According to OpenAI, the Codex model available via the API is most capable in Python but is also “proficient” in JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, Shell, and others. Its memory — 14KB for Python code — enables it to into account contextual information while performing programming tasks including transpilation, explaining code, and refactoring code. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! OpenAI says that Codex will be offered for free during the initial period. “Codex empowers computers to better understand people’s intent, which can empower everyone to do more with computers,” the company wrote in a blog post. “We are now inviting businesses and developers to build on top of OpenAI Codex through our API.” Potentially problematic While highly capable, a recent paper published by OpenAI reveals that Codex might have significant limitations, including biases and sample inefficiencies. The company’s researchers found that the model proposes syntactically incorrect or undefined code, invoking variables and attributes that are undefined or outside the scope of a codebase. More concerningly, Codex sometimes suggests solutions that appear superficially correct but don’t actually perform the intended task. For example, when asked to create encryption keys, Codex selects “clearly insecure” configuration parameters in “a significant fraction of cases” and recommends compromised packages as dependencies. 
Potentially problematic

While highly capable, a recent paper published by OpenAI reveals that Codex might have significant limitations, including biases and sample inefficiencies. The company's researchers found that the model proposes syntactically incorrect or undefined code, invoking variables and attributes that are undefined or outside the scope of a codebase. More concerningly, Codex sometimes suggests solutions that appear superficially correct but don't actually perform the intended task. For example, when asked to create encryption keys, Codex selects "clearly insecure" configuration parameters in "a significant fraction of cases" and recommends compromised packages as dependencies.

Like other large language models, Codex generates responses as similar as possible to its training data, which can lead to obfuscated code that looks good on inspection but actually does something undesirable. Specifically, OpenAI found that Codex can be prompted to generate racist and otherwise harmful outputs as code. Given the prompt "def race(x):," OpenAI reports that Codex assumes a small number of mutually exclusive race categories in its completions, with "White" being the most common, followed by "Black" and "Other." And when writing code comments with the prompt "Islam," Codex often includes the words "terrorist" and "violent" at a greater rate than with other religious groups.

Perhaps anticipating criticism, OpenAI asserted in the paper that risk from models like Codex can be mitigated with "careful" documentation and user interface design, code review, and content controls. In the context of a model made available as a service — e.g., via an API — policies including user review, use case restrictions, monitoring, and rate limiting might also help to reduce harms, the company said. In a previous statement, an OpenAI spokesperson told VentureBeat that it was "taking a multi-prong approach" to reduce the risk of misuse of Codex, including limiting the frequency of requests to prevent automated usage that may be malicious. The company also said that it would update its safety tools and policies as it makes Codex available through the API and monitors the launch of Copilot.
"
15,874
2,021
"AI21 Labs trains a massive language model to rival OpenAI's GPT-3 | VentureBeat"
"https://venturebeat.com/2021/08/11/ai21-labs-trains-a-massive-language-model-to-rival-openais-gpt-3"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI21 Labs trains a massive language model to rival OpenAI’s GPT-3 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. For the better part of a year, OpenAI’s GPT-3 has remained among the largest AI language models ever created, if not the largest of its kind. Via an API, people have used it to automatically write emails and articles , summarize text, compose poetry and recipes, create website layouts, and generate code for deep learning in Python. But an AI lab based in Tel Aviv, Israel — AI21 Labs — says it’s planning to release a larger model and make it available via a service, with the idea being to challenge OpenAI’s dominance in the “natural language processing-as-a-service” field. AI21 Labs, which is advised by Udacity founder Sebastian Thrun, was cofounded in 2017 by Crowdx founder Ori Goshen, Stanford University professor Yoav Shoham, and Mobileye CEO Amnon Shashua. The startup says that the largest version of its model — called Jurassic-1 Jumbo — contains 178 billion parameters, or 3 billion more than GPT-3 (but not more than PanGu-Alpha , HyperCLOVA , or Wu Dao 2.0 ). In machine learning, parameters are the part of the model that’s learned from historical training data. Generally speaking, in the language domain, the correlation between the number of parameters and sophistication has held up remarkably well. AI21 Labs claims that Jurassic-1 can recognize 250,000 lexical items including expressions, words, and phrases, making it bigger than most existing models including GPT-3, which has a 50,000-item vocabulary. The company also claims that Jurassic-1 Jumbo’s vocabulary is among the first to span “multi-word” items like named entities — “The Empire State Building,” for example — meaning that the model might have a richer semantic representation of concepts that make sense to humans. “AI21 Labs was founded to fundamentally change and improve the way people read and write. Pushing the frontier of language-based AI requires more than just pattern recognition of the sort offered by current deep language models,” CEO Shoham told VentureBeat via email. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Scaling up The Jurassic-1 models will be available via AI21 Labs’ Studio platform, which lets developers experiment with the model in open beta to prototype applications like virtual agents and chatbots. 
Should developers wish to go live with their apps and serve “production-scale” traffic, they’ll be able to apply for access to custom models and get their own private fine-tuned model, which they’ll be able to scale in a “pay-as-you-go” cloud services model. “Studio can serve small and medium businesses, freelancers, individuals, and researchers on a consumption-based … business model. For clients with enterprise-scale volume, we offer a subscription-based model. Customization is built into the offering. [The platform] allows any user to train their own custom model that’s based on Jurassic-1 Jumbo, but fine-tuned to better perform a specific task,” Shoham said. “AI21 Labs handles the deployment, serving, and scaling of the custom models.” AI21 Labs’ first product was Wordtune, an AI-powered writing aid that suggests rephrasings of text wherever users type. Meant to compete with platforms like Grammarly, Wordtune offers “freemium” pricing as well as a team offering and partner integration. But the Jurassic-1 models and Studio are much more ambitious. Shoham says that the Jurassic-1 models were trained in the cloud with “hundreds” of distributed GPUs on an unspecified public service. Simply storing 178 billion parameters requires more than 350GB of memory — far more than even the highest-end GPUs — which necessitated that the development team use a combination of strategies to make the process as efficient as possible. The training dataset for Jurassic-1 Jumbo, which contains 300 billion tokens, was compiled from English-language websites including Wikipedia, news publications, StackExchange, and OpenSubtitles. Tokens, a way of separating pieces of text into smaller units in natural language, can be either words, characters, or parts of words. In a test on a benchmark suite that it created, AI21 Labs says that the Jurassic-1 models perform on a par or better than GPT-3 across a range of tasks, including answering academic and legal questions. By going beyond traditional language model vocabularies, which include words and word pieces like “potato” and “make” and “e-,” “gal-,” and “itarian,” Jurassic-1 canvasses less common nouns and turns of phrase like “run of the mill,” “New York Yankees,” and “Xi Jinping.” It’s also ostensibly more sample-efficient — while the sentence “Once in a while I like to visit New York City” would be represented by 11 tokens for GPT-3 (“Once,” “in,” “a,” “while,” and so on), it would be represented by just 4 tokens for the Jurassic-1 models. “Logic and math problems are notoriously hard even for the most powerful language models. Jurassic-1 Jumbo can solve very simple arithmetic problems, like adding two large numbers,” Shoham said. “There’s a bit of a secret sauce in how we customize our language models to new tasks, which makes the process more robust than standard fine-tuning techniques. As a result, custom models built in Studio are less likely to suffer from catastrophic forgetting, [or] when fine-tuning a model on a new task causes it to lose core knowledge or capabilities that were previously encoded in it.” Connor Leahy, a member of the open source research group EleutherAI , told VentureBeat via email that while he believes there’s nothing fundamentally novel about the Jurassic-1 Jumbo model, it’s an impressive feat of engineering, and he has “little doubt” it will perform on a par with GPT-3. 
“It will be interesting to observe how the ecosystem around these models develops in the coming years, especially what kinds of downstream applications emerge as robustly useful,” he added. “[The question is] whether such services can be run profitably with fierce competition, and how the inevitable security concerns will be handled.” Open questions Beyond chatbots, Shoham sees the Jurassic-1 models and Studio being used for paraphrasing and summarization, like generating short product names from product description. The tools could also be used to extract entities, events, and facts from texts and label whole libraries of emails, articles, notes by topic or category. But troublingly, AI21 Labs has left key questions about the Jurassic-1 models and their possible shortcomings unaddressed. For example, when asked what steps had been taken to mitigate potential gender, race, and religious biases as well as other forms of toxicity in the models, the company declined to comment. It also refused to say whether it would allow third parties to audit or study the models’ outputs prior to launch. This is cause for concern, as it’s well-established that models amplify the biases in data on which they were trained. A portion of the data in the language is often sourced from communities with pervasive gender, race, physical , and religious prejudices. In a paper , the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism claims that GPT-3 and like models can generate “informational” and “influential” text that might radicalize people into far-right extremist ideologies and behaviors. A group at Georgetown University has used GPT-3 to generate misinformation, including stories around a false narrative, articles altered to push a bogus perspective, and tweets riffing on particular points of disinformation. Other studies, like one published by Intel, MIT, and Canadian AI initiative CIFAR researchers in April, have found high levels of stereotypical bias from some of the most popular open source models, including Google’s BERT and XLNet and Facebook’s RoBERTa. More recent research suggests that toxic language models deployed into production might struggle to understand aspects of minority languages and dialects. This could force people using the models to switch to “white-aligned English” to ensure the models work better for them, or discourage minority speakers from engaging with the models at all. It’s unclear to what extent the Jurassic-1 models exhibit these kinds of biases, in part because AI21 Labs hasn’t released — and doesn’t intend to release — the source code. The company says it’s limiting the amount of text that can be generated in the open beta and that it’ll manually review each request for fine-tuned models to combat abuse. But even fine-tuned models struggle to shed prejudice and other potentially harmful characteristics. For example, Codex, the AI model that powers GitHub’s Copilot service, can be prompted to generate racist and otherwise objectionable outputs as executable code. When writing code comments with the prompt “Islam,” Codex often includes the word “terrorist” and “violent” at a greater rate than with other religious groups. 
University of Washington AI researcher Os Keyes, who was given early access to the model sandbox, described it as “fragile.” While the Jurassic-1 models didn’t expose any private data — a growing problem in the large language model domain — using preset scenarios, Keyes was able to prompt the models to imply that “people who love Jews are closed-minded, people who hate Jews are extremely open-minded, and a kike is simultaneously a disreputable money-lender and ‘any Jew.'” Above: An example of toxic output from the Jurassic models. “Obviously: all models are wrong sometimes. But when you’re selling this as some big generalizable model that’ll do a good job at many, many things, it’s pretty telling when some of the very many things you provide as exemplars are about as robust as a chocolate teapot,” Keyes told VentureBeat via email. “What it suggests is that what you are selling is nowhere near as generalizable as you’re claiming. And this could be fine — products often start off with one big idea and end up discovering a smaller thing along the way they’re really, really good at and refocusing.” Above: Another example of toxic output from the models. AI21 Labs demurred when asked whether it conducted a thorough bias analysis on the Jurassic-1 models’ training datasets. In an email, a spokesperson said that when measured against StereoSet , a benchmark to evaluate bias related to gender, profession, race, and religion in language systems, the Jurassic-1 models were found by the company’s engineers to be “marginally less biased” than GPT-3. Still, that’s in contrast to groups like EleutherAI , which have worked to exclude data sources determined to be “unacceptably negatively biased” toward certain groups or views. Beyond limiting text inputs, AI21 Labs isn’t adopting additional countermeasures, like toxicity filters or fine-tuning the Jurassic-1 models on “value-aligned” datasets like OpenAI’s PALMS. Among others, leading AI researcher Timnit Gebru has questioned the wisdom of building large language models, examining who benefits from them and who’s disadvantaged. A paper coauthored by Gebru spotlights the impact of large language models’ carbon footprint on minority communities and such models’ tendency to perpetuate abusive language, hate speech, microaggressions, stereotypes, and other dehumanizing language aimed at specific groups of people. The effects of AI and machine learning model training on the environment have also been brought into relief. In June 2020, researchers at the University of Massachusetts at Amherst released a report estimating that the amount of power required for training and searching a certain model involves the emissions of roughly 626,000 pounds of carbon dioxide , equivalent to nearly 5 times the lifetime emissions of the average U.S. car. OpenAI itself has conceded that models like Codex require significant amounts of compute — on the order of hundreds of petaflops per day — which contributes to carbon emissions. The way forward The coauthors of the OpenAI and Stanford paper suggest ways to address the negative consequences of large language models, such as enacting laws that require companies to acknowledge when text is generated by AI — possibly along the lines of California’s bot law. 
Other recommendations include training a separate model that acts as a filter for content generated by a language model, deploying a suite of bias tests to run models through before allowing people to use the model, and avoiding some specific use cases. AI21 Labs hasn't committed to these principles, but Shoham stresses that the Jurassic-1 models are only the first in a line of language models that it's working on, to be followed by more sophisticated variants. The company also says that it's adopting approaches to reduce both the cost of training models and their environmental impact, and that Wordtune, Studio, and the Jurassic-1 models are only the first in a planned suite of natural language processing products. "We take misuse extremely seriously and have put measures in place to limit the potential harms that have plagued others," Shoham said. "We have to combine brain and brawn: enriching huge statistical models with semantic elements, while leveraging computational power and data at unprecedented scale." AI21 Labs, which emerged from stealth in October 2019, has raised $34.5 million in venture capital to date from investors including Pitango and TPY Capital. The company has around 40 employees currently, and it plans to hire more in the months ahead. "
15,875
2,021
"AI Weekly: The road to ethical adoption of AI | VentureBeat"
"https://venturebeat.com/2021/08/13/ai-weekly-the-road-to-ethical-adoption-of-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: The road to ethical adoption of AI Share on Facebook Share on X Share on LinkedIn SHANGHAI, CHINA - JULY 7, 2021 - A humanoid service robot plays Chinese chess with a human during the Waic World Conference on Artificial Intelligence in Shanghai, China, July 7, 2021. (Photo credit should read Costfoto/Barcroft Media via Getty Images) Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As new principles emerge to guide the development ethical, safe, and inclusive AI , the industry faces self-inflicted challenges. Increasingly, there are many sets of guidelines — the Organization for Economic Cooperation and Development’s AI repository alone hosts more than 100 documents — that are vague and high-level. And while a number of tools are available, most come without actionable guidance on how to use, customize, and troubleshoot them. This is cause for alarm, because as the coauthors of a recent paper write, AI’s impacts are hard to assess — especially when they have second- and third-order effects. Ethics discussions tend to focus on futuristic scenarios that may not come to pass and unrealistic generalizations that make the conversations untenable. In particular, companies run the risk of engaging in “ethics shopping,” “ethics washing,” or “ethics shirking,” in which they ameliorate their position with customers to build trust while minimizing accountability. The points are salient in light of efforts by European Commission’s High-level Expert Group on AI (HLEG) and the U.S. National Institute of Standards and Technology, among others, to create standards for building “trustworthy AI.” In a paper, digital ethics researcher Mark Ryan argues that AI isn’t the type of thing that has the capacity to be trustworthy because the category of “trust” simply doesn’t apply to AI. In fact, AI can’t have the capacity to be trusted as long as it can’t be held responsible for its actions, he argues. “Trust is separate from risk analysis that is solely based on predictions based on past behavior,” he explains. “While reliability and past experience may be used to develop, confer, or reject trust placed in the trustee, it is not the sole or defining characteristic of trust. Though we may trust people that we rely on, it is not presupposed that we do.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Responsible adoption Productizing AI responsibly means different things to different companies. 
For some, “responsible” implies adopting AI in a manner that’s ethical, transparent, and accountable. For others, it means ensuring that their use of AI remains consistent with laws, regulations, norms, customer expectations, and organizational values. In any case, “responsible AI” promises to guard against the use of biased data or algorithms, providing an assurance that automated decisions are justified and explainable — at least in theory. Recognizing this, organizations must overcome a misalignment of incentives, disciplinary divides, distributions of responsibilities, and other blockers in responsibly adopting AI. It requires an impact assessment framework that’s not only broad, flexible, iterative, possible to operationalize, and guided, but highly participatory as well, according to the paper’s coauthors. They emphasize the need to shy away from anticipating impacts that are assumed to be important and become more deliberate in deployment choices. As a way of normalizing the practice, the coauthors advocate for including these ideas in documentation the same way that topics like privacy and bias are currently covered. Another paper — this from researchers at the Data & Society Research Institute and Princeton — posits “algorithmic impact assessments” as a tool to help AI designers analyze the benefits and potential pitfalls of algorithmic systems. Impact assessments can address the issues of transparency, fairness, and accountability by providing guardrails and accountability forums that can compel developers to make changes to AI systems. This is easier said than done, of course. Algorithmic impact assessments focus on the effects of AI decision-making, which doesn’t necessarily measure harms and may even obscure them — real harms can be difficult to quantify. But if the assessments are implemented with accountability measures, they can perhaps foster technology that respects — rather than erodes — dignity. As Montreal AI ethics researcher Abhishek Gupta recently wrote in a column : “Design decisions for AI systems involve value judgements and optimization choices. Some relate to technical considerations like latency and accuracy, others relate to business metrics. But each require careful consideration as they have consequences in the final outcome from the system. To be clear, not everything has to translate into a tradeoff. There are often smart reformulations of a problem so that you can meet the needs of your users and customers while also satisfying internal business considerations.” For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine. Thanks for reading, Kyle Wiggers AI Staff Writer VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,876
2,021
"Salesforce's CodeT5 system can understand and generate code | VentureBeat"
"https://venturebeat.com/2021/09/07/salesforces-codet5-system-can-understand-and-generate-code"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Salesforce’s CodeT5 system can understand and generate code Share on Facebook Share on X Share on LinkedIn Man and 2 laptop screen with program code. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. AI-powered coding tools, which generate code using machine learning algorithms, have attracted increasing attention over the last decade. In theory, systems like OpenAI’s Codex could reduce the time people spend writing software as well as computational and operational costs. But existing systems have major limitations, leading to undesirable results like errors. In search of a better approach, researchers at Salesforce open-sourced a machine learning system called CodeT5, which can understand and generate code in real time. The team claims that CodeT5 achieves state-of-the-art performance on coding tasks including code defect detection, which predicts whether code is vulnerable to exploits, and clone detection, which predicts whether two code snippets have the same functionality. Novel design As the Salesforce researchers explain in a blog post and paper , existing AI-powered coding tools often rely on model architectures “suboptimal” for generation and understanding tasks. They adapt conventional natural language processing pretraining techniques to source code, ignoring the structural information in programming language that’s important to comprehending the code’s semantics. By contrast, CodeT5 incorporates code-specific knowledge, taking code and its accompanying comments to endow the model with better code understanding. As a kind of guidepost, the model draws on both the documentation and developer-assigned identifiers in codebases (e.g., “binarySearch”) that make code more understandable while preserving its semantics. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! CodeT5 builds on Google’s T5 (Text-to-Text Transfer Transformer) framework, which was first detailed in a paper published in 2020. It reframes natural language processing tasks into a unified text-to-text-format, where the input and output data are always strings of text — allowing the same model to be applied to virtually any natural language processing task. To train CodeT5, the team sourced over 8.35 million instances of code, including user-written comments from publicly available, open source GitHub repositories. 
Most came from the CodeSearchNet dataset — which spans Ruby, JavaScript, Go, Python, PHP, C, and C# — supplemented by two C and C# datasets from BigQuery. The largest and most capable version of CodeT5, which had 220 million parameters, took 12 days to train on a cluster of 16 Nvidia A100 GPUs with 40GB of memory. (Parameters are the parts of the machine learning model learned from historical training data.) The design innovations enabled it to achieve top-level performance on fourteen tasks in the CodeXGLUE benchmark, including text-to-code generation and code-to-code translation. Potential bias The Salesforce researchers acknowledge that the datasets used to train CodeT5 could encode some stereotypes like race and gender from the text comments — or even from the source code. Moreover, they say, CodeT5 could contain sensitive information like personal addresses and identification numbers. And it might produce vulnerable code that negatively affects software. OpenAI similarly found that its Codex model, which was also trained on code from open source GitHub repositories, could suggest compromised packages, invoke functions insecurely, and produce programming solutions that appear correct but don’t actually perform the intended task. Codex can also be prompted to generate racist and otherwise harmful outputs as code, like the word “terrorist” and “violent” when writing code comments with the prompt “Islam.” But the Salesforce team says that they took steps to prune and debias CodeT5, including by cleaning and filtering the training data for problematic content. To demonstrate the model’s usefulness, the researchers built an AI-powered coding assistant for Apex, Salesforce’s proprietary programming language with Java-like syntax, that lets developers type a natural language description to generate a target function or summarize a function into code comments. “With the goal of improving the development productivity of software with machine learning methods, software intelligence research has attracted increasing attention in both academia and industries over the last decade. Software code intelligence techniques can help developers to reduce tedious repetitive workloads, enhance the programming quality and improve the overall software development productivity,” the researchers wrote in their paper. “[Models like CodeT5] would considerably decrease their working time and also could potentially reduce the computation and operational cost, as a bug might degrade the system performance or even crash the entire system.” CodeT5 adds to the growing list of models trained to complete software programming tasks. For example, Intel’s ControlFlag and Machine Inferred Code Similarity engine can autonomously detect errors in code and determine when two pieces of code perform similar tasks. And Facebook’s TransCoder converts code from one of three programming languages — Java, Python, or C++ — into another. But recent studies suggest that AI has a ways to go before it can reliably generate code. In June, a team of researchers at the University of California at Berkeley, Cornell, the University of Chicago, and the University of Illinois at Urbana-Champaign released APPS , a benchmark for code generation from natural language specifications. The team tested several types of models on APPS, including OpenAI’s GPT-2, GPT-3, and an open source version of GPT-3 called GPT-Neo. In experiments, they discovered that the models could learn to generate code that solves easier problems — but not without syntax errors. 
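Benchmarks like APPS typically run candidate programs through automated checks before scoring them; the minimal sketch below (an illustration, not the APPS harness itself) shows how a syntax error in generated Python can be caught before any functional tests are run.

import ast

def has_syntax_error(source: str) -> bool:
    # Return True if the candidate program fails to parse as Python.
    try:
        ast.parse(source)
        return False
    except SyntaxError:
        return True

candidates = [
    "def add(a, b):\n    return a + b",   # parses cleanly
    "def add(a, b)\n    return a + b",    # missing colon, a typical generation slip
]
print([has_syntax_error(c) for c in candidates])  # [False, True]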
Approximately 59% of GPT-3's solutions for introductory problems had errors, while the best-performing model — GPT-Neo — attained only 10.15% accuracy. The Salesforce researchers didn't test CodeT5 on APPS. "
15,877
2,021
"AI-powered customer service analytics platform SupportLogic nabs $50M | VentureBeat"
"https://venturebeat.com/2021/10/12/ai-powered-customer-service-analytics-platform-supportlogic-nabs-50m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI-powered customer service analytics platform SupportLogic nabs $50M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. AI-powered customer service analytics platform SupportLogic today announced that it raised $50 million in series B funding led by WestBridge Capital Partners and General Catalyst, with participation from Sierra Ventures and Emergent Ventures. CEO Krishna Raj Raja says that the funds, which bring SupportLogic’s total raised to over $62 million, will be put toward supporting the company’s growth and ongoing platform development. Santa Clara, California-based SupportLogic was founded in 2016 by Krishna Raj Raja, an early support engineer at VMware and the first employee at the company’s India office. Raja says he observed firsthand that customer intent signals were getting lost amid organizational silos and customer relationship management and support ticketing systems. “[I] founded SupportLogic with the mission to transform the role of customer support as a proactive change agent within companies by being able to capture and act on the true voice of the customer to grow and protect customer revenue,” Raja told VentureBeat via email. “Our new funding will help SupportLogic to add more customer interaction channels to the solution set — for example, multiple data sources such as chat, voice, discussion forums, surveys, and emails. We will also expand our agent coaching and customer health management capabilities.” AI-powered customer service The pandemic brought into sharp relief the value of AI in customer service operations. Gartner predicts that 15% of all customer service interactions globally will be fully powered by AI in 2021. And according to Deloitte, 56% of companies are investing in conversational AI technology to improve cross-channel experiences. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! SupportLogic specializes in extracting customer signals from business communications with case evaluation and agent coaching tools. Using natural language processing, the platform provides recommendations to managers to get ahead of escalations and helps to identify the best cases in a backlog to review. SupportLogic also provides intelligent case routing, using its AI engine to determine the best available agent to handle a case based on factors like sentiment and churn risk. 
Moreover, the company’s product supports non-support functions, including product management, offering visibility to customer challenges that clients can act on. Above: SupportLogic’s customer service analytics platform. “Off-the-shelf sentiment analysis and entity extraction machine learning models are trained on a completely different corpus and do not work on these datasets. Most of the tools in this space focus on case deflection use cases, such as chatbots, robotic process automation, and knowledge management,” Raja said. “As such, there have not been any software-as-a-service solutions that do what SupportLogic does to date. In fact, many of our customers initially started down the path of building their own solutions and SupportLogic often displaces these homegrown projects.” SupportLogic designed its platform using an ensemble method — a machine learning technique that combines several base models in order to produce one optimal predictive model — running on Google’s BERT. Trained from millions of customer interactions, the model and its predictions are personalized for each customer, leveraging a core signal extraction engine built on a common framework. SupportLogic claims that it has several thousand users across “many large enterprise accounts.” In 2021, the startup’s customer base grew 300%, while the number of interactions analyzed by its AI grew from 15 million in 2020 to over 60 million in 2021, the company says. “When the pandemic hit, like in every other industry, we thought we’d be negatively affected. But surprisingly, we weren’t,” Raja said. “The support engineers of our customers all started to work more remotely and collaboratively. SupportLogic delivered an immediate benefit for these organizations — e.g., agent coaching became easier to do … We also evolved the product to help our customers to manage the impact of the pandemic within their own businesses. For example, a few customers asked us to help them track pandemic-related keywords like ‘COVID 19’ that we quickly turned on within our product.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,878
2,021
"AI model optimization startup Deci raises $21M | VentureBeat"
"https://venturebeat.com/2021/10/20/ai-model-optimization-startup-deci-raises-21m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI model optimization startup Deci raises $21M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Tel Aviv, Israel-based Deci , a company developing a platform to optimize machine learning models, today announced that it raised $21 million in a series A round led by Insight Partners with participation from Square Peg, Emerge, Jibe Ventures, Samsung Next, Vintage Investment Partners, and Fort Ross Ventures. The investment, which comes a year after Deci’s $9.1 million seed round, brings the company’s total capital raised to $30.1 million and will be used to support growth by expanding sales, marketing, and service operations, according to CEO Yonatan Geifman. Advancements in AI have led to innovations with the potential to transform enterprises across industries. But long development cycles and high compute costs remain roadblocks in the path to productization. According to a recent McKinsey survey , only 44% of respondents reported cost savings from AI adoption in business units where it’s deployed. Gartner predicts that — if the current trend holds — 80% of AI projects will remain “alchemy,” run by “[data science] wizards” whose talents “will not scale in the organization.” Deci was cofounded in 2019 by Geifman, Ran El-Yaniv, and entrepreneur Jonathan Elial. Geifman and El-Yaniv met at Technion’s computer science department, where Geifman was a PhD candidate and El-Yaniv a professor. By leveraging data science techniques, the team developed products to accelerate AI on hardware by redesigning models to maximize throughput while minimizing latency. “I founded Deci in 2019 with Professor Ran El-Yaniv and Jonathan Elial to address the challenges stated above. With our talented team of deep learning researchers and engineers, we developed an innovative solution — using AI itself to craft the next generation of AI. By utilizing an algorithmic-first approach, we focus on improving the efficacy of AI algorithms, thus delivering models that outperform the advantages of any other hardware or software optimization technology,” Geifman told VentureBeat via email. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Deci achieves runtime acceleration on cloud, edge, and mobile through data preprocessing and loading, automatically selecting model architectures and hyperparameters (i.e., the variables that influence a model’s predictions). 
The platform also handles steps like deployment, serving, and monitoring, continuously tracking models, and offering recommendations where customers can migrate to more cost-effective services. “Deci’s platform offers a substantial performance boost to existing deep learning models while preserving their accuracy,” the company writes on its website. “It designs deep models to more effectively use the hardware platform they run on, be it CPU, GPU, FPGA, or special-purpose ASIC accelerators. The … accelerator is a data-dependent algorithmic solution that works in synergy with other known compression techniques, such as pruning and quantization. In fact, the accelerator acts as a multiplier for complementary acceleration solutions, such as AI compilers and specialized hardware.” AutoNAC Machine learning deployments have historically been constrained by the size and speed of algorithms, as well as the need for costly hardware. In fact, a report from MIT found that machine learning might be approaching computational limits. A separate Synced study estimated that the University of Washington’s Grover fake news detection model cost $25,000 to train in about two weeks, and Google spent an estimated $6,912 training BERT. Above: Deci’s backend dashboard. Deci’s solution is an engine — Automated Neural Architecture Construction, or AutoNAC — that redesigns models to create new models with several computation routes, optimized for an inference device and dataset. Each route is specialized with a prediction task, and Deci’s router component ensures that each data input is directed via the proper route. “[O]ur AutoNAC technology, the first commercially viable Neural Architecture Search (NAS), recently discovered DeciNets, a family of industry-leading computer vision models that have set a new efficient frontier utilizing only a fraction of the compute power used by the Google-scale NAS technologies, the latter having been used to uncover well-known and powerful neural architectures like EfficientNet,” Geifman said. “Such models empower developers with what’s required to transform their ideas into revolutionary products.” The thirty-employee company, Deci, recently announced a strategic collaboration with Intel to optimize AI inference on the chipmaker’s CPUs. In addition to Intel, the startup says that “many” companies in autonomous vehicle, manufacturing, communication, video and image editing, and health care have adopted the Deci platform. “Deci was founded to help enterprises maximize the potential of their AI-based solutions. Enterprises that are leveraging AI face an upward struggle, as research demonstrates that only 53% of AI projects make it from prototype to production,” Geifman said. “This issue can largely be attributed to difficulties navigating the cumbersome deep learning lifecycle given that new features and use cases are stymied by limited hardware availability, slow and ineffective models, wasted time during development cycles, and financial barriers. Simply put, AI developers need better tools that examine and address the algorithms themselves; otherwise, they will keep getting stuck.” Deci has competition in OctoML , a startup that similarly purports to automate machine learning optimization with proprietary tools and processes. Other competitors include DeepCube , Neural Magic , and DarwinAI , which uses what it calls “generative synthesis” to ingest models and spit out highly optimized versions. 
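The passage above names pruning and quantization as complementary compression techniques. As a generic illustration of quantization (standard PyTorch post-training dynamic quantization, not Deci's AutoNAC or any vendor's product), the sketch below converts the weights of a model's Linear layers to int8 while leaving its calling interface unchanged.

import torch
import torch.nn as nn

# A toy network standing in for a deployed model.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

# Post-training dynamic quantization: Linear weights are stored as int8 and
# activations are quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 512)
print(model(x).shape, quantized(x).shape)  # same interface, lighter Linear kernels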
"
15,879
2,021
"NATO launches AI strategy and $1B fund as defense race heats up | VentureBeat"
"https://venturebeat.com/2021/10/21/nato-launches-ai-strategy-and-1b-fund-as-defense-race-heats-up"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages NATO launches AI strategy and $1B fund as defense race heats up Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The North Atlantic Treaty Organization (NATO), the military alliance of 30 countries that border the North Atlantic Ocean, this week announced that it would adopt an 18-point AI strategy and launch a “future-proofing” fund with the goal of investing around $1 billion. Military.com reports that U.S. Defense Secretary Lloyd Austin will join other NATO members in Brussels, Belgium, the alliance’s headquarters, to formally approve the plans over two days of talks. Speaking at a news conference, Secretary-General Jens Stoltenberg said that the effort was in response to “authoritarian regimes racing to develop new technologies.” NATO’s AI strategy will cover areas including data analysis, imagery, cyberdefense, he added. NATO said in a July press release that it was “currently finalizing” its strategy on AI” and that principles of responsible use of AI in defense will be “at the core” of the strategy. Speaking to Politico in March, NATO assistant secretary general for emerging security challenges David van Weel said that the strategy would identify ways to operate AI systems ethically, pinpoint military applications for the technology, and provide a “platform for allies to test their AI to see whether it’s up to NATO standards.” van Weel said. “Future conflicts will be fought not just with bullets and bombs, but also with bytes and big data,” Stoltenberg said. “We must keep our technological edge.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Strategy In a document published Friday summarizing its AI strategy, NATO emphasized the need for “collaboration and cooperation” among members on “any matters relating to AI for transatlantic defence and security.” The document also lists the organization’s principles for “responsible use for AI,” which NATO says were developed based on members’ approaches and “relevant work in applicable international fora”: Lawfulness: AI applications will be developed and used in accordance with national and international law, including international humanitarian law and human rights law, as applicable. Responsibility and Accountability: AI applications will be developed and used with appropriate levels of judgment and care; clear human responsibility shall apply in order to ensure accountability. 
Explainability and Traceability: AI applications will be appropriately understandable and transparent, including through the use of review methodologies, sources, and procedures. This includes verification, assessment and validation mechanisms at either a NATO and/or national level. Reliability: AI applications will have explicit, well-defined use cases. The safety, security, and robustness of such capabilities will be subject to testing and assurance within those use cases across their entire life cycle, including through established NATO and/or national certification procedures. Governability: AI applications will be developed and used according to their intended functions and will allow for: appropriate human-machine interaction; the ability to detect and avoid unintended consequences; and the ability to take steps, such as disengagement or deactivation of systems, when such systems demonstrate unintended behaviour. Bias Mitigation: Proactive steps will be taken to minimise any unintended bias in the development and use of AI applications and in data sets. “Underpinning the safe and responsible use of AI, NATO and allies will consciously put bias mitigation efforts into practice. This will seek to minimize those biases against individual traits, such as gender, ethnicity, or personal attributes,” the document reads. “NATO will conduct appropriate risk [and] impact assessments prior to deploying AI capabilities … NATO and allies will also conduct regular high-level dialogues, engaging technology companies at a strategic political level to be informed and help shape the development of AI-fielded technologies, creating a common understanding of the opportunities and risks arising from AI. Broader context NATO’s overtures come after a senior cybersecurity official at the Pentagon resigned in protest because of the slow pace of technological development at the department. Speaking to the press last week, Nicolas Chaillan, former chief software officer at the Air Force, said that the U.S. has “no competing fighting chance against China” in 15 to 20 years, characterizing the AI and cyber defenses in some government agencies as being at “kindergarten level.” In 2020, the U.S. Department of Defense (DoD) launched the AI Partnership for Defense , which consists of 13 countries from Europe and Asia to collaborate on AI use in the military context. More recently, the department announced that it plans to invest $874 million next year in AI-related technologies as a part of the army’s $2.3 billion science and technology research budget. Much of the DoD’s spending originates from the Joint Artificial Intelligence Center (JAIC) in Washington, D.C., a government organization exploring the use and applications of AI in combat. (In news related to today’s NATO announcement, JAIC is expected to finalize its AI ethics guidelines by the end of this month.) According to an analysis by Deltek, the DoD set aside $550 million of AI obligations awarded to the top ten contractors and defense accounted for 37% of total AI spending by the U.S. government, with contractors receiving the windfall. Fearmongering While U.S. — and now NATO — officials grow more vocal about China’s supposed dominance in military and defense AI, research suggests that their claims somewhat exaggerate the threat. A 2019 report from the Center for Security and Emerging Technology (CSET) shows that China is likely spending far less on AI than previously assumed, between $2 billion and $8 billion. 
That's as opposed to the $70 billion figure originally shared in a speech by a top US Air Force general in 2018. While Baidu, Tencent, SenseTime, Alibaba, iFlytek, and some of China's other largest companies collaborate with the government to develop AI for national defense, MIT Tech Review points out that Western nations' attitudes could ultimately hurt U.S. AI development by focusing too much on military AI and too little on fundamental research. A recent OneZero report highlighted the way that the Pentagon uses adversaries' reported progress to scare tech companies into working with the military, framing government contracting as an ideological choice to support the U.S. in a battle against China, Russia, and other competing states. Speaking at the Center for Strategic and International Studies Global Security Forum in January 2020, then-secretary of defense Mark Esper said that DoD partnerships with the private sector are vital to the Pentagon's aim to remain a leader in emerging technologies like AI. Among others, former Google CEO Eric Schmidt — a member of the DoD's Defense Innovation Board — has urged lawmakers to bolster funding in the AI space while incentivizing public-private partnerships to develop AI applications across government agencies, including military agencies. Contractors have benefited enormously from the push — Lockheed Martin alone netted $106 million in 2020 for an AI-powered "cyber radar" initiative. Tech companies including Concur, Microsoft, and Dell have contracts with U.S. Immigration and Customs Enforcement, with Microsoft pledging — then abandoning in the face of protests — to build versions of its HoloLens headsets for the U.S. Army. (Microsoft this month agreed to commission an independent human rights review of some of its deals with government agencies and law enforcement.) Amazon and Microsoft fiercely competed for — and launched a legal battle over — the DoD's $10 billion Joint Enterprise Defense Infrastructure (JEDI) contract, which was canceled in July after the Pentagon launched a new multivendor project. Machine learning, computer vision, and facial recognition vendors including TrueFace, Clearview AI, TwoSense, and AI.Reverie also have contracts with various U.S. army branches. For some AI and data analytics companies, like Oculus cofounder Palmer Luckey's Anduril and Palantir, military contracts have become a top source of revenue. In October, Palantir won most of an $823 million contract to provide data and analytics software to the U.S. army. And in July, Anduril said that it received a contract worth up to $99 million to supply the U.S. military with drones aimed at countering hostile or unauthorized drones. While suppliers are likely to remain in abundance, the challenge for NATO will be aligning its members on AI in defense. The U.S. and others, including France and the U.K., have developed autonomous weapons technologies, but members like Belgium and Germany have expressed concerns about the implications of the technologies.
"
15,880
2,021
"Report: AI startup funding hits record high of $17.9B in Q3 | VentureBeat"
"https://venturebeat.com/2021/11/11/report-ai-startup-funding-hits-record-high-of-17-9b-in-q3"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: AI startup funding hits record high of $17.9B in Q3 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Even as economies struggle with the chaos of the pandemic, the AI startup space continues to grow stronger with increased investments and M&A deals. According to the latest State of AI report from CB Insights , the global funding in the segment has seen a significant surge, growing from $16.6 billion across 588 deals in Q2 2021 (figures show $20B due to the inclusion of two public subsidiary fundings) to $17.9 billion across 841 deals in the third quarter. Throughout the year (which is yet to end), AI startups around the world raised $50 billion across 2000+ deals with 138 mega-rounds of 100+ million. As much as $8.5 billion of the total investment went into healthcare AI, $3.1 billion went into fintech AI, while $2.6 billion went into retail AI. The findings show how AI has become a driving force across nearly every industry and is drawing significant attention from VCs, CVCs, and other investors. In Q3 alone, there were 13 new AI unicorns globally, bringing the total number of billion-dollar AI startups to 119. Three startups also reached $2 billion in valuation — Algolia and XtaPi from the U.S. and Black Sesame Technologies from China. Meanwhile, in terms of M&A exits, the quarter saw over 100 acquisitions like the previous one, putting the total exits for the year at 253. The biggest AI acquisition of the quarter was PayPal snapping up Paidly — a company determining creditworthiness using AI/ML — for $2.7 billion, followed by Zoominfo’s acquisition of Chorus.ai — a startup using AI to analyze sales calls — for $575 million. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! U.S. AI startups continue to dominate Out of the $17.9 billion raised by AI startups worldwide in Q3, a significant $10.4 billion went to companies based in the U.S. and $4.8 billion into those in Asia. However, Asian firms raised this amount in nearly just as many deals (321) as in the U.S. (324), which signals that the average deal size was smaller there. Mega-round deals in the U.S. stood at 24 in Q3, while Asia saw 13 such deals. Databricks, Dataiku, Olive, XtalPi, Datarobot, and Cybereason were the companies with the biggest rounds in the U.S. in the third quarter. 
Compared with Asia and the U.S., funding in Canada, Latin America, and Europe was the lowest, at $0.4 billion, $0.5 billion, and $1.6 billion, respectively. These regions cumulatively saw just eight mega-rounds. Read the full report here. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,881
2,021
"AI Weekly: Workplace surveillance algorithms need to be regulated before it's too late | VentureBeat"
"https://venturebeat.com/2021/11/12/ai-weekly-workplace-surveillance-algorithms-need-to-be-regulated-before-its-too-late"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: Workplace surveillance algorithms need to be regulated before it’s too late Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This week, the all-party parliamentary group (APPG) on the future of work, a special interest group of members of parliament in the U.K., said that the monitoring of workers through algorithms is damaging to employees’ mental health and needs to be regulated through legislation. This legislation, they said, could ensure that companies evaluate the effect of “performance-driven” guidelines, like queue monitoring in supermarkets, while providing employees the means to fight back against perceived violations of privacy. “Pervasive monitoring and target-setting technologies, in particular, are associated with pronounced negative impacts on mental and physical wellbeing as workers experience the extreme pressure of constant, real-time micromanagement and automated assessment,” wrote the APPG members in a report. “[A new algorithms act would establish] a clear direction to ensure AI puts people first.” Monitoring employees with AI The trend toward remote and hybrid work has prompted some companies to increase their use of monitoring technologies — ostensibly to ensure that employees remain on task. Employee monitoring software is a broad category, but generally speaking, it encompasses programs that can measure an employee’s idle time, access webcams and CCTV, track keystrokes and web history, take screenshots, and record emails, chats, and phone calls. In a survey , VPN provider ExpressVPN found that 78% of businesses were using monitoring software like TimeDoctor, Teramind, Wiretap, Interguard, Hubstaff, and ActivTrak to track their employees’ performance or online activity. Meanwhile, tech giants like Amazon ding warehouse employees for spending too much time away from the work they’re assigned to perform, like scanning barcodes or sorting products into bins. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! A Washington Post piece published this week focusing on the legal industry found that facial recognition monitoring has become pervasive in contract attorney work. Firms are requiring contract attorneys to submit to “finicky, error-prone, and imprecise” webcam-based systems that record facial movements and surroundings, sending an alert if the attorney allows unauthorized people into the room. 
According to the report, some of the software also captures a “webcam feed” for employers that included snapshots of attorney “violations,” such as when a person opens a social media website, uses their phone, or blocks the camera’s view. Employers cite the need for protection against time theft — according to one source, employers lose about 4.5 hours per week per employee to time theft — but workers feel differently about the platforms’ capabilities. A recent survey by ExpressVPN found 59% of remote and hybrid workers feel stress or anxiety as a result of their employer monitoring them. Another 43% said that the surveillance felt like a violation of trust, and more than half said they’d quit their job if their manager implemented surveillance measures. Privacy concerns aside, there’s the potential for bias to arise in the software’s algorithms. Studies show that even differences between camera models can cause an algorithm to be less effective in classifying the objects it was trained to detect. In other research , text-based sentiment analysis systems have been shown to exhibit prejudices along race, ethnic, and gender lines — for example, associating Black people with more negative emotions like anger, fear, and sadness. In some cases, biases and other flaws have caused algorithms to penalize workers for making unavoidable “mistakes.” A former Uber driver has filed a legal claim in the U.K. alleging that the company’s facial recognition software works less effectively on darker skin. And Vice recently reported that AI-powered cameras installed in Amazon delivery vans incorrectly flagged workers whenever cars cut them off, a frequent occurrence in traffic-heavy cities like Los Angeles. Progress and the road ahead In the U.S., as in many countries around the world, employees have little in the way of legal recourse when it comes to monitoring software. The U.S. 1986 Electronic Communications Privacy Act (ECPA) allows companies to surveil communications for “legitimate business-related purposes.” Only two states, Connecticut and Delaware, require notification if employees’ email or internet activities are being monitored, while Colorado and Tennessee require businesses to set written email monitoring policies. As a small sign of progress, earlier this year, California passed AB-701 legislation , which prevents employers from algorithmically counting health and safety compliance against workers’ productive time. Legislation proposed in the New York City Council seeks to update hiring discrimination rules for companies that choose to use algorithms as part of the process. For the APPG’s part, they recommend that workers be involved in the design and use of algorithm-driven systems that make decisions about the allocation of shifts, pay, hiring, and more. They also strongly suggest that corporations and public sector employers fill out impact assessments aimed at identifying problems caused by the systems, as well as introducing certification and guidance for use of AI and algorithms at work. “It is clear that, if not properly regulated, algorithmic systems can have harmful effects on health and prosperity,” David Davis, one co-author of the report, wrote. Added fellow co-author Clive Lewis: “There are marked gaps in regulation at an individual and corporate level that are damaging people and communities.” They have a point. For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine. 
Thanks for reading, Kyle Wiggers AI Staff Writer VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,882
2,021
"OpenAI rival Cohere launches language model API | VentureBeat"
"https://venturebeat.com/2021/11/15/openai-rival-cohere-launches-language-model-api"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages OpenAI rival Cohere launches language model API Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Cohere , a startup creating large language models to rival those from OpenAI and AI2Labs , today announced the general availability of its commercial platform for app and service development. Through an API, customers can access models fine-tuned for a range of natural language applications, in some cases at a fraction of the cost of rival offerings. The pandemic has accelerated the world’s digital transformation, pushing businesses to become more reliant on software to streamline their processes. As a result, the demand for natural language technology is now higher than ever — particularly in the enterprise. According to a 2021 survey from John Snow Labs and Gradient Flow, 60% of tech leaders indicated that their natural language processing (NLP) budgets grew by at least 10% compared to 2020, while a third — 33% — said that their spending climbed by more than 30%. The global NLP market is expected to climb in value from $11.6 billion in 2020 to $35.1 billion by 2026. “Language is essential to humanity and arguably its single greatest invention — next to the development of computers. Ironically, computers still lack the ability to fully comprehend language, finding it difficult to parse the syntax, semantics, and context that all work together to give words meaning,” Cohere CEO Aidan Gomez told VentureBeat via email. “However, the latest in NLP technology is continuously improving our ability to communicate seamlessly with computers.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Cohere Headquartered in Toronto, Canada, Cohere was founded in 2019 by a pedigreed team including Gomez, Ivan Zhang, and Nick Frosst. Gomez, a former intern at Google Brain, coauthored the academic paper “ Attention Is All You Need ,” which introduced the world to a fundamental AI model architecture called the Transformer. (Among other high-profile systems, OpenAI’s GPT-3 and Codex are based on the Transformer architecture.) Zhang, alongside Gomez, is a contributor at FOR.ai, an open AI research collective involving data scientists and engineers. As for Frosst, he, like Gomez, worked at Google Brain, publishing research on machine learning alongside Turing Award winner Geoffrey Hinton. 
In a vote of confidence, even before launching its commercial service, Cohere raised $40 million from institutional venture capitalists as well as Hinton, Google Cloud AI chief scientist Fei-Fei Li, UC Berkeley AI lab co-director Pieter Abbeel, and former Uber autonomous driving head Raquel Urtasun. “Very large language models are now giving computers a much better understanding of human communication. The team at Cohere is building technology that will make this revolution in natural language understanding much more widely available,” Hinton said in a statement to Fast Company in September. Unlike some of its competitors, Cohere offers two types of English NLP models, generation and representation, in sizes that include Large, Medium, and Small. The generation models can complete tasks involving generating text — for example, writing product descriptions or extracting document metadata. By contrast, the representational models are about understanding language, driving apps like semantic search, chatbots, and sentiment analysis. “By being in both [the generative and representative space], Cohere has the flexibility that many enterprise customers need, and can offer a range of model sizes that allow customers to choose the model that best fits their needs across the spectrums of latency and performance,” Gomez said. “[Use] cases across industries include the ability to more accurately track and categorize spending, expedite data entry for medical providers, or leverage semantic search for legal cases, insurance policies and financial documents. Companies can easily generate product descriptions with minimal input, draft and analyze legal contracts, and analyze trends and sentiment to inform investment decisions.” To keep its technology relatively affordable, Cohere charges for access on a per-character basis, determined by the size of the model and the number of characters apps use (ranging from $0.0025 to $0.12 per 10,000 characters for generation and $0.019 per 10,000 characters for representation). Only the generation models charge on both input and output characters, while the other models charge on output characters. All fine-tuned models, meanwhile — i.e., models tailored to particular domains, industries, or scenarios — are charged at two times the baseline model rate. “The problem remains that the only companies able to capitalize on NLP technology require seemingly bottomless resources in order to access the technology for large language models — which is due to the cost of these models ranging from the tens to hundreds of millions of dollars to build,” Gomez said. “Cohere is easy-to-deploy. With just three lines of code, companies can apply [our] full-stack engine to power all their NLP needs. The models themselves are … already pre-trained.” To Gomez's point, training and deploying large language models into production isn't an easy feat, even for enterprises with massive resources. For example, Nvidia's recently released Megatron 530B model was originally trained across 560 Nvidia DGX A100 servers, each hosting 8 Nvidia A100 80GB GPUs. Microsoft and Nvidia say that they observed between 113 and 126 teraflops per second per GPU while training Megatron 530B, which would put the training cost in the millions of dollars. (A teraflop rating measures the performance of hardware, including GPUs.) Inference — actually running the trained model — is another challenge. 
On two of its costly DGX SuperPod systems , Nvidia claims that inference (e.g., autocompleting a sentence) with Megatron 530B only takes half a second. But it can take over a minute on a CPU-based on-premises server. While cloud alternatives might be cheaper, they’re not dramatically so — one estimate pegs the cost of running GPT-3 on a single Amazon Web Services instance at a minimum of $87,000 per year. Training the models To build Cohere’s models, Gomez says that the team scrapes the web and feeds billions of ebooks and web pages (e.g., WordPress, Tumblr, Stack Exchange, Genius, the BBC, Yahoo, and the New York Times) to the models so that they learn to understand the meaning and intent of language. (The training dataset for the generation models amounts to 200GB dataset after some filtering, while the dataset for the representation models, which wasn’t filtered, totals 3TB.) Like all AI models, Cohere’s trains by ingesting a set of examples to learn patterns among data points, like grammatical and syntactical rules. It’s well-established that models can amplify the biases in data on which they were trained. In a paper , the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism claims that GPT-3 and similar models can generate text that might radicalize people into far-right extremist ideologies. A group at Georgetown University has used GPT-3 to generate misinformation, including stories around a false narrative, articles altered to push a bogus perspective, and tweets riffing on particular points of disinformation. Other studies, like one published by Intel, MIT, and Canadian AI initiative CIFAR researchers in April, have found high levels of stereotypical bias from some of the most popular open source models, including Google’s BERT and XLNet and Facebook’s RoBERTa. Cohere, for its part, claims that it’s committed to safety and trains its models “to minimize bias and toxicity.” Customers must abide by the company’s usage guidelines or risk having their access to the API revoked. And Cohere — which has an external advisory council in addition to an internal safety team — says that it plans to monitor “evolving risks” with tools designed to identify harmful outputs. But Cohere’s NLP models aren’t perfect. In its documentation, the company admits that the models might generate “obscenities, sexually explicit content, and messages that mischaracterize or stereotype groups of people based on problematic historical biases perpetuated by internet communities.” For example, when fed prompts about people, occupations, and political/religious ideologies, the API’s output could be toxic 5 to 6 times per 1,000 generations and discuss men twice as much as it does women, Cohere says. Meanwhile, the Otter model in particular tends to associate men and women with stereotypically “male” and “female” occupations (e.g., male scientist versus female housekeeper). In response, Gomez says that the Cohere team “puts substantial effort into filtering out toxic content and bad text,” including running adversarial attacks and measuring the models against safety research benchmarks. “[F]iltration is done at the keyword and domain levels in order to minimize bias and toxicity,” he added. 
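As a rough illustration of the keyword- and domain-level filtration Gomez describes, consider the minimal sketch below. The blocklists, threshold, and example corpus are invented placeholders for this article, not Cohere's actual rules or data.

```python
from urllib.parse import urlparse

# Invented placeholder lists; real pipelines use much larger, curated resources.
BLOCKED_DOMAINS = {"toxic-forum.example.com", "spam-farm.example.net"}
BLOCKED_KEYWORDS = {"slur1", "slur2"}  # stand-ins for an actual lexicon

def keep_document(url: str, text: str) -> bool:
    """Return True if a scraped document passes domain- and keyword-level filters."""
    if urlparse(url).netloc in BLOCKED_DOMAINS:
        return False
    words = [w.strip(".,!?").lower() for w in text.split()]
    flagged = sum(w in BLOCKED_KEYWORDS for w in words)
    return flagged / max(len(words), 1) < 0.01  # drop docs where 1%+ of tokens are flagged

corpus = [("https://blog.example.org/post", "A perfectly ordinary post about cooking.")]
kept = [(u, t) for u, t in corpus if keep_document(u, t)]
print(len(kept), "of", len(corpus), "documents kept")
```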
“[The team has made] meaningful progress that sets Cohere apart from other [companies developing] large language models … [W]e’re confident in the impact it will have on the future of work over the course of this transformative era.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,883
2,021
"Cohere partners with Google Cloud to train large language models using dedicated hardware | VentureBeat"
"https://venturebeat.com/2021/11/17/cohere-partners-with-google-cloud-to-train-large-language-models-using-dedicated-hardware"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cohere partners with Google Cloud to train large language models using dedicated hardware Share on Facebook Share on X Share on LinkedIn (Photo by Adam Berry/Getty Images) Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Google Cloud, Google’s cloud computing services platform, today announced a multi-year collaboration with startup Cohere to “accelerate natural language processing (NLP) to businesses by making it more cost effective.” Under the partnership, Google Cloud says it’ll help Cohere establish computing infrastructure to power Cohere’s API, enabling Cohere to train large language models on dedicated hardware. The news comes a day after Cohere announced the general availability of its API, which lets customers access models that are fine-tuned for a range of natural language applications — in some cases at a fraction of the cost of rival offerings. “Leading companies around the world are using AI to fundamentally transform their business processes and deliver more helpful customer experiences,” Google Cloud CEO Thomas Kurian said in a statement. “Our work with Cohere will make it easier and more cost-effective for any organization to realize the possibilities of AI with powerful NLP services powered by Google’s custom-designed [hardware].” How Cohere runs Headquartered in Toronto, Canada, Cohere was founded in 2019 by a pedigreed team including Aidan Gomez, Ivan Zhang, and Nick Frosst. Gomez, a former intern at Google Brain, coauthored the academic paper “Attention Is All You Need,” which introduced the world to a fundamental AI model architecture called the Transformer. (Among other high-profile systems, OpenAI’s GPT-3 and Codex are based on the Transformer architecture.) Zhang, alongside Gomez, is a contributor at FOR.ai, an open AI research collective involving data scientists and engineers. As for Frosst, he, like Gomez, worked at Google Brain, publishing research on machine learning alongside Turing Award winner Geoffrey Hinton. In a vote of confidence, even before launching its commercial service, Cohere raised $40 million from institutional venture capitalists as well as Hinton, Google Cloud AI chief scientist Fei-Fei Li, UC Berkeley AI lab co-director Pieter Abbeel, and former Uber autonomous driving head Raquel Urtasun. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 
Unlike some of its competitors, Cohere offers two types of English NLP models, generation and representation, in Large, Medium, and Small sizes. The generation models can complete tasks involving generating text — for example, writing product descriptions or extracting document metadata. By contrast, the representational models are about understanding language, driving apps like semantic search, chatbots, and sentiment analysis. To keep its technology relatively affordable, Cohere charges access on a per-character basis based on the size of the model and the number of characters apps use (ranging from $0.0025-$0.12 per 10,000 characters for generation and $0.019 per 10,000 characters for representation). Only the generate models charge on input and output characters, while other models charge on output characters. All fine-tuned models, meanwhile — i.e., models tailored to particular domains, industries, or scenarios — are charged at two times the baseline model rate. Large language models The partnership with Google Cloud will grant Cohere access to dedicated fourth-generation tensor processing units (TPUs) running in Google Cloud instances. TPUs are custom chips developed specifically to accelerate AI training, powering products like Google Search, Google Photos, Google Translate, Google Assistant, Gmail, and Google Cloud AI APIs. “The partnership will run until the end of 2024 with options to extend into 2025 and 2026. Google Cloud and Cohere have plans to partner on a go-to-market strategy,” Gomez told VentureBeat via email. “We met with a number of Cloud providers and felt that Google Cloud was best positioned to meet our needs.” Cohere’s decision to partner with Google Cloud reflects the logistical challenges of developing large language models. For example, Nvidia’s recently released Megatron 530B model was originally trained across 560 Nvidia DGX A100 servers, each hosting 8 Nvidia A100 80GB GPUs. Microsoft and Nvidia say that they observed between 113 to 126 teraflops per second per GPU while training Megatron 530B, which would put the training cost in the millions of dollars. (A teraflop rating measures the performance of hardware, including GPUs.) Inference — actually running the trained model — is another challenge. On two of its costly DGX SuperPod systems , Nvidia claims that inference (e.g., autocompleting a sentence) with Megatron 530B only takes half a second. But it can take over a minute on a CPU-based on-premises server. While cloud alternatives might be cheaper, they’re not dramatically so — one estimate pegs the cost of running GPT-3 on a single Amazon Web Services instance at a minimum of $87,000 per year. Cohere rival OpenAI trains its large language models on an “AI supercomputer” hosted by Microsoft, which invested over $1 billion in the company in 2020, roughly $500 million of which came in the form of Azure compute credits. Affordable NLP In Cohere, Google Cloud — which already offered a range of NLP services — gains a customer in a market that’s growing rapidly during the pandemic. According to a 2021 survey from John Snow Labs and Gradient Flow, 60% of tech leaders indicated that their NLP budgets grew by at least 10% compared to 2020, while a third — 33% — said that their spending climbed by more than 30%. “We’re dedicated to supporting companies, such as Cohere, through our advanced infrastructure offering in order to drive innovation in NLP,” Google Cloud AI director of product management Craig Wiley told VentureBeat via email. 
“Our goal is always to provide the best pipeline tools for developers of NLP models. By bringing together the NLP expertise from both Cohere and Google Cloud, we are going to be able to provide customers with some pretty extraordinary outcomes.” The global NLP market is projected to be worth $2.53 billion by 2027, up from $703 million in 2020. And if the current trend holds, a substantial portion of that spending will be put toward cloud infrastructure — benefiting Google Cloud. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,884
2,021
"AI-powered marketing copy generator Anyword secures $21M | VentureBeat"
"https://venturebeat.com/2021/11/18/ai-powered-marketing-copy-generator-anyword-secures-21m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI-powered marketing copy generator Anyword secures $21M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Anyword , an AI-powered platform for fine-tuning marketing copy, today announced that it raised $21 million in a financing round led by Innovation Endeavors with participation from Lead Capital and Gandyr Ventures. CEO Yaniv Makover says that the proceeds, which bring the company’s total raised to $30 million, will be used to bolster hiring, build out Anyword’s technology, and onboarding customers to the platform. Anyword’s growth comes as marketers increasingly express a willingness to embrace AI-driven creation tools. According to a survey by Phrasee, an Anyword rival, 63% of marketers surveyed would consider investing in AI to generate and optimize ad copy. Statista reports that 87% of current AI adopters are already using — or considering using — AI for sales forecasting and for improving their email marketing. And 61% of marketers say that AI is the most important aspect of their larger data strategy. “The company was originally founded as Keywee, a platform used by publishers such as the New York Times, NBC, and CNN to analyze each article they wrote and find audiences based on the keywords in these articles,” Makover told VentureBeat via email. “Writing has pretty much stayed the same process in the last few hundred years. Computers and word processing helped, but they didn’t materially change how we write to convey a message or a narrative, specifically for an intended audience and with a goal in mind. In marketing and sales, we are writing for someone and usually with a measurable objective. Incorporating data about which words, concepts, and styles work better for a specific audience and industry was our goal [when we pivoted].” Optimizing copy with AI Anyword claims to have trained a copy-generating model on two billion data points from A/B testing messages across industries, channels, and marketing objectives. Leveraging it, Anyword customers can create copy — including headlines, subheaders, email subject lines, text messages, descriptions, and captions — while understanding how different demographics might react to variations of the same copy. The platform’s tools can connect ad accounts and incorporate keywords and promotions (e.g., “new arrivals” and “free shipping”), tailoring copy to a specific length. Beyond this, they can optimize on-site copy to display specific messages to specific audiences. 
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Train our AI copywriting tool to write copy in your brand voice, similar to your competitors, or similar to your top performing live ads,” Anyword explains on its website. With Anyword, marketers plug in a URL, summary, or product description to generate copy. After choosing a format and tone, Anyword creates several versions of the copy, scored and sorted by predicted quality. From there, Anyword can rewrite and show comparisons between the variations, improving over time over existing ads. “Predicting how well a text variation will do for a goal and audience necessitates a special dataset. First, you need to know how a text variation did historically for a given audience. You need to have a breadth of data covering many styles and topics,” Makover said. “Our datasets of text variations and their respective performance metrics consist of millions of variations [to improve, for example,] conversion rates for … websites, emails, ads, social posts, and blog posts.” Competition Spurred by digital transformations that accelerated during the pandemic, a larger share of companies are expected to adopt AI technologies that automatically suggest and tailor marketing and sales materials. According to the Phrasee survey, 65% of marketers trust that AI can generate desirable brand language, and 82% believe that their organization would benefit from data that provides insights into how consumers respond to that language. Fifty-employee, Tel Aviv- and New York-based Anyword competes with Phrasee, which partnered with Walgreens early in the pandemic to create a targeted email campaign about COVID-19 vaccine availability. Other competitors include Instoried, CopyAI, Copysmith, Writesonic , and New York City-based Persado AI. While new startups in the “AI in marketing tech” segment arise with some frequency, Anyword is betting that its technology will enable it to stand out in a market that could be worth $40.09 billion by 2025. “We’ve been growing 35% month-over-month on average since launching Anyword in March,” Makover said. “Since the end of Q1 2021, we have acquired 1,200 customers. Our customers range from small businesses looking for better performance from their marketing content and ecommerce offerings to publishers who have significant volumes to agencies and enterprises looking for deeper integrations with their products and services.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,885
2,021
"Naver's large language model is powering shopping recommendations | VentureBeat"
"https://venturebeat.com/2021/12/07/navers-large-language-model-is-powering-shopping-recommendations"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Naver’s large language model is powering shopping recommendations Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In June, Naver , the Seongnam, South Korea-based company that operates the eponymous search engine Naver, announced that it had trained one of the largest AI language models of its kind, called HyperCLOVA. Naver claimed that the system learned 6,500 times more Korean data than OpenAI’s GPT-3 and contained 204 billion parameters, the parts of the machine learning model learned from historical training data. (GPT-3 has 175 billion parameters.) HyperCLOVA was seen as a notable achievement because of the scale of the model and since it fits into the trend of generative model “diffusion,” with multiple actors developing GPT-3-style models, like Huawei’s PanGu-Alpha (stylized PanGu-α). The benefits of large language models — including the ability to generate human-like text for marketing and customer support purposes — were previously limited to English because companies lacked the resources to train these models in other languages. In the months since HyperCLOVA was developed, Naver has begun using it to personalize search results on the Naver platform, Naver executive officer Nako Sung told VentureBeat in an interview. It’ll also soon become available in private beta through HyperCLOVA Studio, a no-code tool that’ll allow developers to access the model for text generation and classification tasks. “Initially used to correct typos in search queries on Naver Search, [HyperCLOVA] is now enabling many new features on our ecommerce platform, Naver Shopping, such as summarizing multiple consumer reviews into one line, recommending and curating products to user shopping preferences, or generating trendy marketing phrases for featured shopping collections,” Sung said. “We also launched CLOVA CareCall, a … conversational agent for elderly citizens who live alone. The service is based on the HyperCLOVA’s natural conversation generation capabilities, allowing it to have human-like conversations.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Large language models Training HyperCLOVA, which can understand English and Japanese in addition to Korean, required large-scale datacenter infrastructure, according to Sung. Naver leveraged a server cluster made up of 140 Nvidia SuperPod A100 DGX nodes, which the company claims can deliver up to 700 petaflops of compute power. 
It took months to train HyperCLOVA on 2TB of Korean text data, much of which came from user-generated content on Naver’s platforms. For example, one source was Knowledge iN, a Quora-like, Korean-language community where users can ask questions and receive answers from experts. Another was public blog posts from people who use free web hosting services provided through Naver. Sung says that this differentiates HyperCLOVA from previous large language models like GPT-3, which have a limited ability to understand the nuances of languages besides English. He claims that by having the model draw on the “collective intelligence of Korean culture and society,” it can better serve Korean users — and at the same time reduce Naver’s dependence on other, less Asia Pacific-centric AI services. In a recent issue of his Import AI newsletter, former OpenAI policy director Jack Clark asserted that because generative models ultimately reflect and magnify the data they’re trained on, different nations care a lot about how their own culture is represented in these models. “[HyperCLOVA] is part of a general trend of different nations asserting their own AI capacity [and] capability via training frontier models like GPT-3,” he continued. “[We’ll] await more technical details to see if [it’s] truly comparable to GPT-3.” Some experts have argued that because the companies developing influential AI systems are predominantly located in the U.S., China, and the E.U., a disproportionate share of economic benefit will fall inside these regions — potentially exacerbating inequality. In an analysis of publications at two major machine learning conferences, NeurIPS 2020 and ICML 2020, none of the top 10 countries in terms of publication index were located in Latin America, Africa, or Southeast Asia. Moreover, a recent report from Georgetown University’s Center for Security and Emerging Technology found that while 42 of the 62 major AI labs are located outside of the U.S., 68% of the staff are located within the United States. “These large amounts of collective intelligence are continuously enriching and fortifying HyperCLOVA,” Sung said. “The most well-known hyperscale language model is GPT-3, and it is trained mainly with English data, and is only taught 0.016% of Korean data out of the total input … [C]onsidering the impact of hyperscale AI on industries and economies in the near future, we are confident that building a Korean language-based AI is very important for Korea’s AI sovereignty.” Challenges in developing models Among others, leading AI researcher Timnit Gebru has questioned the wisdom of building large language models, examining who benefits from them and who is harmed. It’s well-established that models can amplify the biases in data on which they were trained, and the effects of model training on the environment have been raised as serious concerns. To address the issues around bias, Sung says that Naver is in discussions with “external experts” including researchers at Seoul National University’s AI Policy Initiative and plans to form an advisory committee on AI ethics in Korea this year. The company also released a benchmark — Korean Language Understanding Evaluation (KLUE) — to evaluate the natural language understanding capabilities of Korean language models including HyperCLOVA. “We recognize that while AI can make our lives convenient, it is also not infallible like all other technologies used today,” he added. 
“While pursuing convenience in the service we provide, Naver will also endeavor to explain our AI service in a manner that users can easily understand upon their request or when necessary … We will pay attention to safety during all stages of designing and testing our services, including after the service is deployed, to prevent a situation where AI as a daily tool threatens life or causes physical harm to people.” Real-world applications Currently, Naver says that HyperCLOVA is being tapped for various Naver services including Naver Smart Stores, the company’s ecommerce marketplace, where it’s “correcting” the names of products by generating “more attractive” names versus the original search-engine-optimized SKUs. In another ecommerce use case, Naver is applying HyperCLOVA to create product recommendation systems tailored to shoppers’ individual preferences. “While HyperCLOVA doesn’t specifically learn users’ purchase logs, we discovered that it was able to recommend products on our marketplace to some extent. So, we fine-tuned this capability and introduced it as one of our ecommerce features. Unlike the existing recommendation algorithms, this model shows the ‘generalized’ ability to perform well on cold items, cold users and cold services,” Sung said. “Recommending a certain gift to someone is not a suitable problem for traditional machine learning to solve. That’s because there is no information about the recipient of the gift … [But] with HyperCLOVA, we were able to make this experience possible.” HyperCLOVA is also powering an AI-driven call service for senior citizens who live alone, which Naver says it plans to refine to provide more personalized conversations in the future. Beyond this, Naver says it’s developing a multilingual version of HyperCLOVA that can understand two or more languages at the same time and an API that will allow developers to build apps and services on top of the model. The pandemic has accelerated the world’s digital transformation, pushing businesses to become more reliant on software to streamline their processes. As a result, the demand for natural language technology is now higher than ever — particularly in the enterprise. According to a 2021 survey from John Snow Labs and Gradient Flow, 60% of tech leaders indicated that their natural language processing budgets grew by at least 10% compared to 2020, while a third — 33% — said that their spending climbed by more than 30%. The global NLP market is expected to climb in value to $35.1 billion by 2026. “The most interesting thing about HyperCLOVA is that its usability is not limited only to AI experts, such as engineers and researchers, but it has also been used by service planners and business managers within our organization. Most of the winners [in a recent HyperCLOVA hackathon] were from non-AI developer positions, which I believe proves that HyperCLOVA’s no-code AI platform will empower everyone with AI capabilities, significantly accelerating the speed of AI transformation and changing its scope in the future.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,886
2,021
"3 big problems with datasets in AI and machine learning | VentureBeat"
"https://venturebeat.com/2021/12/17/3-big-problems-with-datasets-in-ai-and-machine-learning"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 3 big problems with datasets in AI and machine learning Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Datasets fuel AI models like gasoline (or electricity, as the case may be) fuels cars. Whether they’re tasked with generating text, recognizing objects, or predicting a company’s stock price, AI systems “learn” by sifting through countless examples to discern patterns in the data. For example, a computer vision system can be trained to recognize certain types of apparel, like coats and scarfs, by looking at different images of that clothing. Beyond developing models, datasets are used to test trained AI systems to ensure they remain stable — and measure overall progress in the field. Models that top the leaderboards on certain open source benchmarks are considered state of the art (SOTA) for that particular task. In fact, it’s one of the major ways that researchers determine the predictive strength of a model. But these AI and machine learning datasets — like the humans that designed them — aren’t without their flaws. Studies show that biases and mistakes color many of the libraries used to train, benchmark, and test models, highlighting the danger in placing too much trust in data that hasn’t been thoroughly vetted — even when the data comes from vaunted institutions. 1. The training dilemma In AI, benchmarking entails comparing the performance of multiple models designed for the same task, like translating words between languages. The practice — which originated with academics exploring early applications of AI — has the advantages of organizing scientists around shared problems while helping to reveal how much progress has been made. In theory. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! But there are risks in becoming myopic in dataset selection. For example, if the same training dataset is used for many kinds of tasks, it’s unlikely that the dataset will accurately reflect the data that models see in the real world. Misaligned datasets can distort the measurement of scientific progress, leading researchers to believe they’re doing a better job than they actually are — and causing harm to people in the real world. 
Researchers at the University of California, Los Angeles, and Google investigated the problem in a recently published study titled “Reduced, Reused and Recycled: The Life of a Dataset in Machine Learning Research.” They found that there’s “heavy borrowing” of datasets in machine learning — e.g., a community working on one task might borrow a dataset created for another task — raising concerns about misalignment. They also showed that only a dozen universities and corporations are responsible for creating the datasets used more than 50% of the time in machine learning, suggesting that these institutions are effectively shaping the research agendas of the field. “SOTA-chasing is bad practice because there are too many confounding variables, SOTA usually doesn’t mean anything, and the goal of science should be to accumulate knowledge as opposed to results in specific toy benchmarks,” Denny Britz, a former resident on the Google Brain team, told VentureBeat in a previous interview. “There have been some initiatives to improve things, but looking for SOTA is a quick and easy way to review and evaluate papers. Things like these are embedded in culture and take time to change.” To their point, ImageNet and Open Images — two publicly available image datasets from Stanford and Google — are heavily U.S.- and Euro-centric. Computer vision models trained on these datasets perform worse on images from Global South countries. For example, the models classify grooms from Ethiopia and Pakistan with lower accuracy compared with grooms from the U.S., and they fail to correctly identify objects like “wedding” or “spices” when they come from the Global South. Even differences in the sun path between the northern and southern hemispheres and variations in background scenery can affect model accuracy, as can the varying specifications of camera models like resolution and aspect ratio. Weather conditions are another factor — a driverless car system trained exclusively on a dataset of sunny, tropical environments will perform poorly if it encounters rain or snow. A recent study from MIT reveals that computer vision datasets including ImageNet contain problematically “nonsensical” signals. Models trained on them suffer from “overinterpretation,” a phenomenon where they classify with high confidence images lacking so much detail that they’re meaningless to humans. These signals can lead to model fragility in the real world, but they’re valid in the datasets — meaning overinterpretation can’t be identified using typical methods. “There’s the question of how we can modify the datasets in a way that would enable models to be trained to more closely mimic how a human would think about classifying images and therefore, hopefully, generalize better in these real-world scenarios, like autonomous driving and medical diagnosis, so that the models don’t have this nonsensical behavior,” said Brandon Carter, an MIT Ph.D. student and lead author of the study, in a statement. History is filled with examples of the consequences of deploying models trained using flawed datasets, like virtual backgrounds and photo-cropping tools that disfavor darker-skinned individuals. 
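The geographic accuracy gaps described above are typically surfaced by slicing an evaluation set by region (or any other attribute) and reporting per-group accuracy instead of a single aggregate number. A minimal sketch with invented records:

```python
# (region, true_label, predicted_label): invented evaluation records for illustration.
records = [
    ("US", "groom", "groom"), ("US", "wedding", "wedding"), ("US", "groom", "groom"),
    ("ET", "groom", "costume"), ("ET", "wedding", "ceremony"), ("ET", "groom", "groom"),
]

for region in sorted({r for r, _, _ in records}):
    group = [(truth, pred) for r, truth, pred in records if r == region]
    accuracy = sum(truth == pred for truth, pred in group) / len(group)
    print(f"{region}: accuracy {accuracy:.0%} over {len(group)} examples")

# The aggregate accuracy (4/6, about 67%) hides a 100% vs. 33% split between the two
# groups, which is the kind of gap the ImageNet and Open Images studies above report.
```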
In 2015, a software engineer pointed out that the image-recognition algorithms in Google Photos were labeling his Black friends as “gorillas.” And the nonprofit AlgorithmWatch showed that Google’s Cloud Vision API at one time labeled thermometers held by a Black person as “guns” while labeling thermometers held by a light-skinned person as “electronic devices.” Dodgy datasets have also led to models that perpetuate sexist recruitment and hiring , ageist ad targeting , erroneous grading , and racist recidivism and loan approval. The issue extends to health care, where training datasets containing medical records and imagery mostly come from patients in North America, Europe, and China — meaning models are less likely to work well for underrepresented groups. The imbalances are evident in shoplifter- and weapon-spotting computer vision models , workplace safety monitoring software , gunshot sound detection systems , and “beautification” filters , which amplify the biases present in the data on which they were trained. Experts attribute many errors in facial recognition , language, and speech recognition systems, too, to flaws in the datasets used to train the models. For example, a study by researchers at the University of Maryland found that face-detection services from Amazon, Microsoft, and Google are more likely to fail with older, darker-skinned individuals and those who are less “feminine-presenting.” According to the Algorithmic Justice League’s Voice Erasure project, speech recognition systems from Apple, Amazon, Google, IBM, and Microsoft collectively achieve word error rates of 35% for Black voices versus 19% for white voices. And language models — which are often trained on posts from Reddit — have been shown to exhibit prejudices along race, ethnic, religious, and gender lines, associating Black people with more negative emotions and struggling with “ Black-aligned English. ” “Data [is] being scraped from many different places on the web [in some cases], and that web data reflects the same societal-level prejudices and biases as hegemonic ideologies (e.g., of whiteness and male dominance),” UC Los Angeles’ Bernard Koch and Jacob G. Foster and Google’s Emily Denton and Alex Hanna, the coauthors of “Reduced, Reused, and Recycled,” told VentureBeat via email. “Larger … models require more training data, and there has been a struggle to clean this data and prevent models from amplifying these problematic ideas.” 2. Issues with labeling Labels , the annotations from which many models learn relationships in data, also bear the hallmarks of data imbalance. Humans annotate the examples in training and benchmark datasets, adding labels like “dogs” to pictures of dogs or describing the characteristics in a landscape image. But annotators bring their own biases and shortcomings to the table, which can translate to imperfect annotations. For instance, studies have shown that the average annotator is more likely to label phrases in African-American Vernacular English (AAVE), the informal grammar, vocabulary, and accent used by some Black Americans, as toxic. 
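One common mitigation for noisy or biased labels is to collect several independent annotations per example, aggregate them by majority vote, and route low-agreement items to expert review. A minimal sketch with invented data:

```python
from collections import Counter

# Three independent annotators per item (invented data for illustration).
annotations = {
    "post_1": ["toxic", "not_toxic", "not_toxic"],
    "post_2": ["toxic", "toxic", "toxic"],
    "post_3": ["not_toxic", "toxic", "toxic"],
}

for item, labels in annotations.items():
    label, votes = Counter(labels).most_common(1)[0]
    agreement = votes / len(labels)
    note = "  <- low agreement, route to expert review" if agreement < 1.0 else ""
    print(f"{item}: {label} (agreement {agreement:.0%}){note}")
```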
In another example, a few labelers for MIT’s and NYU’s 80 Million Tiny Images dataset — which was taken offline in 2020 — contributed racist, sexist, and otherwise offensive annotations including nearly 2,000 images labeled with the N-word and labels like “rape suspect” and “child molester.” In 2019, Wired reported on the susceptibility of platforms like Amazon Mechanical Turk — where many researchers recruit annotators — to automated bots. Even when the workers are verifiably human, they’re motivated by pay rather than interest, which can result in low-quality data — particularly when they’re treated poorly and paid a below-market rate. Researchers including Niloufar Salehi have made attempts at tackling Amazon Mechanical Turk’s flaws with efforts like Dynamo, an open access worker collective, but there’s only so much they can do. Being human, annotators also make mistakes — sometimes major ones. In an MIT analysis of popular benchmarks including ImageNet, the researchers found mislabeled images (like one breed of dog being confused for another), text sentiment (like Amazon product reviews described as negative when they were actually positive), and audio of YouTube videos (like an Ariana Grande high note being categorized as a whistle). One solution is pushing for the creation of more inclusive datasets, like MLCommons’ People’s Speech Dataset and the Multilingual Spoken Words Corpus. But curating these is time-consuming and expensive, often with a price tag reaching into a range of millions of dollars. Common Voice , Mozilla’s effort to build an open source collection of transcribed speech data, has vetted only dozens of languages since its 2017 launch — illustrating the challenge. One of the reasons creating a dataset is so costly is the domain expertise required for high-quality annotations. As Synced noted in a recent piece, most low-cost labelers can only annotate relatively “low-context” data and can’t handle “high-context” data such as legal contract classification, medical images, or scientific literature. It’s been shown that drivers tend to label self-driving datasets more effectively than those without driver’s licenses and that doctors, pathologists, and radiologists perform better at accurately labeling medical images. Machine-assisted tools could help to a degree by eliminating some of the more repetitive work from the labeling process. Other approaches, like semi-supervised learning, promise to cut down on the amount of data required to train models by enabling researchers to “fine-tune” a model on small, customized datasets designed for a particular task. For example, in a blog post published this week, OpenAI says that it managed to fine-tune GPT-3 to more accurately answer open-ended questions by copying how humans research answers to questions online (e.g., submitting search queries, following links, and scrolling up and down pages) and citing its sources, allowing users to give feedback to further improve the accuracy. Still other methods aim to replace real-world data with partially or entirely synthetic data — although the jury’s out on whether models trained on synthetic data can match the accuracy of their real-world-data counterparts. Researchers at MIT and elsewhere have experimented using random noise alone in vision datasets to train object recognition models. In theory, unsupervised learning could solve the training data dilemma once and for all. 
In unsupervised learning, an algorithm is subjected to “unknown” data for which no previously defined categories or labels exist. But while unsupervised learning excels in domains where labeled data is scarce, it is not without weaknesses. For example, unsupervised computer vision systems can pick up racial and gender stereotypes present in the unlabeled training data. 3. A benchmarking problem The issues with AI datasets don’t stop with training. In a study from the Institute for Artificial Intelligence and Decision Support in Vienna, researchers found inconsistent benchmarking across more than 3,800 AI research papers — in many cases attributable to benchmarks that didn’t emphasize informative metrics. A separate paper from Facebook and University College London showed that 60% to 70% of answers given by natural language models tested on “open-domain” benchmarks were hidden somewhere in the training sets, meaning that the models simply memorized the answers. In two studies coauthored by Deborah Raji, a tech fellow at the AI Now Institute at NYU, researchers found that benchmarks like ImageNet are often “fallaciously elevated” to justify claims that extend beyond the tasks for which they were originally designed. That’s setting aside the fact that this “dataset culture,” according to Raji and the other coauthors, can distort the science of machine learning research and lacks a culture of care for data subjects, engendering poor labor conditions (such as low pay for annotators) while insufficiently protecting people whose data is intentionally or unintentionally swept up in the datasets. Several solutions to the benchmarking problem have been proposed for specific domains, including the Allen Institute’s GENIE. Uniquely, GENIE incorporates both automatic and manual testing, tasking human evaluators with probing language models according to predefined, dataset-specific guidelines for fluency, correctness, and conciseness. While GENIE is expensive — it costs around $100 to submit a model for benchmarking — the Allen Institute plans to explore other payment models, such as requesting payment from tech companies while subsidizing the cost for small organizations. There’s also growing consensus within the AI research community that benchmarks, particularly in the language domain, must take into account broader ethical, technical, and societal challenges if they’re to be useful. Some language models have large carbon footprints, but despite widespread recognition of the issue, relatively few researchers attempt to estimate or report the environmental cost of their systems. “[F]ocusing only on state-of-the-art performance de-emphasizes other important criteria that capture a significant contribution,” Koch, Foster, Denton, and Hanna said. “[For example,] SOTA benchmarking encourages the creation of environmentally-unfriendly algorithms. Building bigger models has been key to advancing performance in machine learning, but it is also environmentally unsustainable in the long run … SOTA benchmarking [also] does not encourage scientists to develop a nuanced understanding of the concrete challenges presented by their task in the real world, and instead can encourage tunnel vision on increasing scores. 
The requirement to achieve SOTA constrains the creation of novel algorithms or algorithms which can solve real-world problems.” Possible AI datasets solutions Given the extensive challenges with AI datasets, from imbalanced training data to inadequate benchmarks, effecting meaningful change won’t be easy. But experts believe that the situation isn’t hopeless. Arvind Narayanan, a Princeton computer scientist who has written several works investigating the provenance of AI datasets, says that researchers must adopt responsible approaches not only to collecting and annotating data, but also to documenting their datasets, maintaining them, and formulating the problems for which their datasets are designed. In a recent study he coauthored, Narayanan found that many datasets are prone to mismanagement, with creators failing to be precise in license language about how their datasets can be used or prohibit potentially questionable uses. “Researchers should think about the different ways their dataset can be used … Responsible dataset ‘stewarding,’ as we call it, requires addressing broader risks,” he told VentureBeat via email. “One risk is that even if a dataset is created for one purpose that appears benign, it might be used unintentionally in ways that can cause harm. The dataset could be repurposed for an ethically dubious research application. Or, the dataset could be used to train or benchmark a commercial model when it wasn’t designed for these higher-stakes settings. Datasets typically take a lot of work to create from scratch, so researchers and practitioners often look to leverage what already exists. The goal of responsible dataset stewardship is to ensure that this is done ethically.” Koch and coauthors believe that people — and organizations — need to be rewarded and supported for creating new, diverse datasets contextualized for the task at hand. Researchers need to be incentivized to use “more appropriate” datasets at academic conferences like NeurIPS, they say, and encouraged to perform more qualitative analyses — like the interpretability of their model — as well as report metrics like fairness (to the extent possible) and power efficiency. NeurIPS — one of the largest machine learning conferences in the world — mandated that coauthors who submit papers must state the “potential broader impact of their work” on society, beginning with NeurIPS 2020 last year. The pickup has been mixed , but Koch and coauthors believe that it’s a small step in the right direction. “[M]achine learning researchers are creating a lot of datasets, but they’re not getting used. One of the problems here is that many researchers may feel they need to include the widely used benchmark to give their paper credibility, rather than a more niche but technically appropriate benchmark,” they said. “Moreover, professional incentives need to be aligned towards the creation of these datasets … We think there is still a portion of the research community that is skeptical of ethics reform, and addressing scientific issues might be a different way to get these people behind reforms to evaluation in machine learning.” There’s no simple solution to the dataset annotation problem — assuming that labeling isn’t eventually replaced by alternatives. But a recent paper from Google suggests that researchers would do well to establish “extended communications frameworks” with annotators, like chat apps, to provide more meaningful feedback and clearer instructions. 
At the same time, they must work to acknowledge (and actually account for) workers’ sociocultural backgrounds, the coauthors wrote — both from the perspective of data quality and societal impact. The paper goes further, providing recommendations for dataset task formulation and choosing annotators, platforms, and labeling infrastructure. The coauthors say that researchers should consider the forms of expertise that could be incorporated through annotation, in addition to reviewing the intended use cases of the dataset. They also say that they should compare and contrast the minimum pay requirements across different platforms and analyze disagreements between annotators of different groups, allowing them to — hopefully — better understand how different perspectives are or aren’t represented. “If we really want to diversify the benchmarks in use, government and corporate players need to create grants for dataset creation and distribute those grants to under-resourced institutions and researchers from underrepresented backgrounds,” Koch and coauthors said. “We would say that there is abundant research now showing ethical problems and social harms that can arise from data misuse in machine learning … Scientists like data, so we think if we can show them how over-usage isn’t great for science, it might spur further reform that can mitigate social harms as well.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
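The disagreement analysis recommended above can start small. The sketch below is purely illustrative and assumes hypothetical annotator groups; it averages pairwise Cohen's kappa within and across groups, since systematically lower agreement across groups can signal perspectives that the labeling guidelines fail to capture.

```python
# Illustrative sketch: average pairwise Cohen's kappa between annotators,
# broken out by (hypothetical) annotator group. Inputs are aligned label lists.
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

def agreement_by_group_pair(annotations: dict, groups: dict) -> dict:
    """annotations: {annotator_id: [labels...]}, groups: {annotator_id: group}."""
    scores = {}
    for (a, labels_a), (b, labels_b) in combinations(annotations.items(), 2):
        pair = tuple(sorted((groups[a], groups[b])))
        scores.setdefault(pair, []).append(cohen_kappa_score(labels_a, labels_b))
    # Average kappa for each pair of groups.
    return {pair: sum(vals) / len(vals) for pair, vals in scores.items()}
```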
15,887
2,020
"Quantum computing will (eventually) help us discover vaccines in days | VentureBeat"
"https://venturebeat.com/2020/05/16/quantum-computing-will-eventually-help-us-discover-vaccines-in-days"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Quantum computing will (eventually) help us discover vaccines in days Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The coronavirus is proving that we have to move faster in identifying and mitigating epidemics before they become pandemics because, in today’s global world, viruses spread much faster, further, and more frequently than ever before. If COVID-19 has taught us anything, it’s that while our ability to identify and treat pandemics has improved greatly since the outbreak of the Spanish Flu in 1918, there is still a lot of room for improvement. Over the past few decades, we’ve taken huge strides to improve quick detection capabilities. It took a mere 12 days to map the outer “spike” protein of the COVID-19 virus using new techniques. In the 1980s, a similar structural analysis for HIV took four years. But developing a cure or vaccine still takes a long time and involves such high costs that big pharma doesn’t always have incentive to try. Drug discovery entrepreneur Prof. Noor Shaker posited that “Whenever a disease is identified, a new journey into the “chemical space” starts seeking a medicine that could become useful in contending diseases. The journey takes approximately 15 years and costs $2.6 billion, and starts with a process to filter millions of molecules to identify the promising hundreds with high potential to become medicines. Around 99% of selected leads fail later in the process due to inaccurate prediction of behavior and the limited pool from which they were sampled.” Prof. Shaker highlights one of the main problems with our current drug discovery process: The development of pharmaceuticals is highly empirical. Molecules are made and then tested, without being able to accurately predict performance beforehand. The testing process itself is long, tedious, cumbersome, and may not predict future complications that will surface only when the molecule is deployed at scale, further eroding the cost/benefit ratio of the field. And while AI/ML tools are already being developed and implemented to optimize certain processes, there’s a limit to their efficiency at key tasks in the process. Ideally, a great way to cut down the time and cost would be to transfer the discovery and testing from the expensive and time-inefficient laboratory process (in-vitro) we utilize today, to computer simulations (in-silico). Databases of molecules are already available to us today. 
If we had infinite computing power we could simply scan these databases and calculate whether each molecule could serve as a cure or vaccine for the COVID-19 virus. We would simply input our factors into the simulation and screen the chemical space for a solution to our problem. In principle, this is possible. After all, chemical structures can be measured, and the laws of physics governing chemistry are well known. However, as the great British physicist Paul Dirac observed: “The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble.” In other words, we simply don’t have the computing power to solve the equations, and if we stick to classical computers we never will. This is a bit of a simplification, but the fundamental problem of chemistry is to figure out where electrons sit inside a molecule and calculate the total energy of such a configuration. With this data, one could calculate the properties of a molecule and predict its behavior. Accurate calculations of these properties will allow the screening of molecular databases for compounds that exhibit particular functions, such as a drug molecule that is able to attach to the coronavirus “spike” and attack it. Essentially, if we could use a computer to accurately calculate the properties of a molecule and predict its behavior in a given situation, it would speed up the process of identifying a cure and improve its efficiency. Why are quantum computers much better than classical computers at simulating molecules? Electrons spread out over the molecule in a strongly correlated fashion, and the characteristics of each electron depend greatly on those of its neighbors. These quantum correlations (or entanglement) are at the heart of quantum theory and make simulating electrons with a classical computer very tricky. The electrons of the COVID-19 virus, for example, must be treated in general as being part of a single entity having many degrees of freedom, and the description of this ensemble cannot be divided into the sum of its individual, distinguishable electrons. The electrons, due to their strong correlations, have lost their individuality and must be treated as a whole. So to solve the equations, you need to take into account all of the electrons simultaneously. Although classical computers can in principle simulate such molecules, every multi-electron configuration must be stored in memory separately. Let’s say you have a molecule with only 10 electrons (forget the rest of the atom for now), and each electron can be in two different positions within the molecule. Essentially, you have 2^10 = 1024 different configurations to keep track of, rather than just 10 electrons, which would have been the case if the electrons were individual, distinguishable entities. You’d need 1024 classical bits to store the state of this molecule. Quantum computers, on the other hand, have quantum bits (qubits), which can be made to strongly correlate with one another in the same way electrons within molecules do. So in principle, you would need only about 10 such qubits to represent the strongly correlated electrons in this model system. The exponentially large parameter space of electron configurations in molecules is exactly the space qubits naturally occupy. Thus, qubits are much better suited to simulating quantum phenomena. 
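The arithmetic behind that claim is easy to restate in a few lines of code. The figures below simply reproduce the 10-electron toy example and the roughly 286 qubits cited for penicillin later in this piece.

```python
# Restating the scaling argument: n strongly correlated two-state electrons
# need on the order of 2**n classical values, versus on the order of n qubits.
for n in (10, 286):  # 10-electron toy molecule; ~286 qubits cited for penicillin
    print(f"n = {n:>3}: ~{2.0**n:.2e} classical values  vs  ~{n} qubits")
```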
This scaling difference between classical and quantum computation gets very big very quickly. For instance, simulating penicillin, a molecule with 41 atoms (and many more electrons) will require 10^86 classical bits, or more bits than the number of atoms in the universe. With a quantum computer, you would only need about 286 qubits. This is still far more qubits than we have today, but certainly a more reasonable and achievable number. The COVID-19 virus outer “spike” protein, for comparison, contains many thousands of atoms and is thus completely intractable for classical computation. The size of proteins makes them intractable to classical simulation with any degree of accuracy even on today’s most powerful supercomputers. Chemists and pharma companies do simulate molecules with supercomputers (albeit not as large as the proteins), but they must resort to making very rough molecule models that don’t capture the details a full simulation would, leading to large errors in estimation. It might take several decades until a sufficiently large quantum computer capable of simulating molecules as large as proteins will emerge. But when such a computer is available, it will mean a complete revolution in the way the pharma and the chemical industries operate. The holy grail — end-to-end in-silico drug discovery — involves evaluating and breaking down the entire chemical structures of the virus and the cure. The continued development of quantum computers, if successful, will allow for end-to-end in-silico drug discovery and the discovery of procedures to fabricate the drug. Several decades from now, with the right technology in place, we could move the entire process into a computer simulation, allowing us to reach results with amazing speed. Computer simulations could eliminate 99.9% of false leads in a fraction of the time it now takes with in-vitro methods. With the appearance of a new epidemic, scientists could identify and develop a potential vaccine/drug in a matter of days. The bottleneck for drug development would then move from drug discovery to the human testing phases including toxicity and other safety tests. Eventually, even these last stage tests could potentially be expedited with the help of a large scale quantum computer, but that would require an even greater level of quantum computing than described here. Tests at this level would require a quantum computer with enough power to contain a simulation of the human body (or part thereof) that will screen candidate compounds and simulate their impact on the human body. Achieving all of these dreams will demand a continuous investment into the development of quantum computing as a technology. As Prof. Shohini Ghose said in her 2018 Ted Talk : “You cannot build a light bulb by building better and better candles. A light bulb is a different technology based on a deeper scientific understanding.” Today’s computers are marvels of modern technology and will continue to improve as we move forward. However, we will not be able to solve this task with a more powerful classical computer. It requires new technology, more suited for the task. ( Special thanks Dr. Ilan Richter, MD MPH for assuring the accuracy of the medical details in this article.) Ramon Szmuk is a Quantum Hardware Engineer at Quantum Machines. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,888
2,020
"D-Wave's 5,000-qubit quantum computing platform handles 1 million variables | VentureBeat"
"https://venturebeat.com/2020/09/29/d-wave-advantage-quantum-computing-5000-qubits-1-million-variables"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages D-Wave’s 5,000-qubit quantum computing platform handles 1 million variables Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. D-Wave today launched its next-generation quantum computing platform available via its Leap quantum cloud service. The company calls Advantage “the first quantum computer built for business.” In that vein, D-Wave today also debuted Launch, a jump-start program for businesses that want to begin building hybrid quantum applications. “The Advantage quantum computer is the first quantum computer designed and developed from the ground up to support business applications,” D-Wave CEO Alan Baratz told VentureBeat. “We engineered it to be able to deal with large, complex commercial applications and to be able to support the running of those applications in production environments. There is no other quantum computer anywhere in the world that can solve problems at the scale and complexity that this quantum computer can solve problems. It really is the only one that you can run real business applications on. The other quantum computers are primarily prototypes. You can do experimentation, run small proofs of concept, but none of them can support applications at the scale that we can.” Quantum computing leverages qubits (unlike bits that can only be in a state of 0 or 1, qubits can also be in a superposition of the two) to perform computations that would be much more difficult, or simply not feasible, for a classical computer. Based in Burnaby, Canada, D-Wave was the first company to sell commercial quantum computers, which are built to use quantum annealing. But D-Wave doesn’t sell quantum computers anymore. Advantage and its over 5,000 qubits (up from 2,000 in the company’s 2000Q system) are only available via the cloud. (That means through Leap or a partner like Amazon Braket. ) 5,000+ qubits, 15-way qubit connectivity If you’re confused by the “over 5,000 qubits” part, you’re not alone. More qubits typically means more potential for building commercial quantum applications. But D-Wave isn’t giving a specific qubit count for Advantage because the exact number varies between systems. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Essentially, D-Wave is guaranteeing the availability of 5,000 qubits to Leap users using Advantage,” a D-Wave spokesperson told VentureBeat. “The actual specific number of qubits varies from chip to chip in each Advantage system. 
Some of the chips have significantly more than 5,000 qubits, and others are a bit closer to 5,000. But bottom line — anyone using Leap will have full access to at least 5,000 qubits.” Advantage also promises 15-way qubit connectivity, thanks to a new chip topology, Pegasus , which D-Wave detailed back in February 2019. (Pegasus’ predecessor, Chimera , offered six connected qubits.) Having each qubit connected to 15 other qubits instead of six translates to 2.5 times more connectivity, which in turn enables the embedding of larger and more complex problems with fewer physical qubits. “The combination of the number of qubits and the connectivity between those qubits determines how large a problem you can solve natively on the quantum computer,” Baratz said. “With the 2,000-qubit processor, we could natively solve problems within 100- to 200-variable range. With the Advantage quantum computer, having twice as many qubits and twice as much connectivity, we can solve problems more in the 600- to 800-variable range. As we’ve looked at different types of problems, and done some rough calculations, it comes out to generally we can solve problems about 2.6 times as large on the Advantage system as what we could have solved on the 2000-qubit processor. But that should not be mistaken with the size problem you can solve using the hybrid solver backed up by the Advantage quantum computer.” 1 million variables, same problem types D-Wave today also announced its expanded hybrid solver service will be able to handle problems with up to 1 million variables (up from 10,000 variables). It will be generally available in Leap on October 8. The discrete quadratic model (DQM) solver is supposed to let businesses and developers apply hybrid quantum computing to new problem classes. Instead of accepting problems with only binary variables (0 or 1), the DQM solver uses other variable sets (integers from 1 to 500, colors, etc.), expanding the types of problems that can run on Advantage. D-Wave asserts that Advantage and DQM together will let businesses “run performant, real-time, hybrid quantum applications for the first time.” Put another way, 1 million variables means tackling large-scale, business-critical problems. “Now, with the Advantage system and the enhancements to the hybrid solver service, we’ll be able to solve problems with up to 1 million variables,” Baratz said. “That means truly able to solve production-scale commercial applications.” Depending on the technology they are built on, different quantum computers tend to be better at solving different problems. D-Wave has long said its quantum computers are good at solving optimization problems, “and most business problems are optimization problems,” Baratz argues. Advantage isn’t going to be able to solve different types of problems, compared to its 2000Q predecessor. But coupled with DQM and the sheer number of variables, it may still be significantly more useful to businesses. “The architecture is the same,” Baratz confirmed. “Both of these quantum computers are annealing quantum computers. And so the class of problems, the types of problems they can solve, are the same. It’s just at a different scale and complexity. The 2000-qubit processor just couldn’t solve these problems at the scale that our customers need to solve them in order for them to impact their business operations.” D-Wave Launch In March, D-Wave made its quantum computers available for free to coronavirus researchers and developers. 
“Through that process what we learned was that while we have really good software, really good tools, really good training, developers and businesses still need help,” Baratz told VentureBeat. “Help understanding what are the best problems that they can benefit from the quantum computer and how to best formulate those problems to get the most out of the quantum computer.” D-Wave Launch will thus make the company’s application experts and a set of handpicked partner companies available to its customers. Launch aims to help anyone understand how to best leverage D-Wave’s quantum systems to support their business. Fill out a form on D-Wave’s website and you will be triaged to determine who might be best able to offer guidance. “In order to actually do anything with the quantum processor, you do need to become a Leap customer,” Baratz said. “But you don’t have to first become a Leap customer. We’re perfectly happy to engage with you to help you understand the benefits of the quantum computer and how to use it.” D-Wave will make available “about 10” of its own employees as part of Launch, plus partners. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
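For readers curious what the discrete quadratic model described above looks like in practice, here is a minimal sketch of a toy three-node map-coloring problem using D-Wave's open-source Ocean SDK (dimod). It reflects the publicly documented API around the time of this announcement; running the commented-out hybrid solver call requires a Leap account, and details may differ between SDK versions.

```python
# A rough sketch of a discrete quadratic model (DQM) with dimod: each variable
# takes one of several discrete cases rather than just 0 or 1.
import dimod

colors = 3                         # each node picks one of 3 discrete cases
edges = [(0, 1), (1, 2), (0, 2)]   # toy map-coloring instance

dqm = dimod.DiscreteQuadraticModel()
for node in range(3):
    dqm.add_variable(colors, label=node)

# Penalize adjacent nodes that pick the same color.
for u, v in edges:
    dqm.set_quadratic(u, v, {(c, c): 1.0 for c in range(colors)})

# To solve on the Leap hybrid service (assumes Leap credentials are configured):
# from dwave.system import LeapHybridDQMSampler
# print(LeapHybridDQMSampler().sample_dqm(dqm).first)
```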
15,889
2,021
"Classiq aims to advance software development for quantum computers | VentureBeat"
"https://venturebeat.com/2021/01/29/classiq-aims-to-advance-software-development-for-quantum-computers"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Analysis Classiq aims to advance software development for quantum computers Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Startups providing the tools to build software that will run on quantum computers are enjoying attention from investors. Classiq , which provides a modeling tool for building algorithms for quantum computers, revealed this week that it has raised $10.5 million. The round was led by Team8 and Wing Venture Capital, with additional participation from Entrée Capital, OurCrowd, and IN Venture, the corporate venture arm of Sumitomo in Israel. Previously, Classiq had raised $4 million in a seed round from Entrée Capital. Algorithms for quantum computers have thus far been built using low-level tools that are specific to each platform. But this approach is painstakingly slow and results in algorithms that can only run on one quantum computing platform, Classiq cofounder and CEO Nir Minerbi said. “It’s like programming was back in the 1950s,” Minerbi said. “Developers are working at the equivalent of the gate level.” Classiq has developed a modeling tool that enables developers to build algorithms for quantum computers at a much higher level of abstraction. That capability not only increases the rate at which those algorithms can be built, Minerbi said, it also enables algorithms to be employed on different quantum computing platforms. The tools Classiq provides are roughly equivalent to the chip design tools for conventional computing systems provided by companies like Cadence Design Systems, Minerbi noted. Quantum computers running experimental applications today are based on quantum circuits that make a qubit available as the atomic unit of computing. Traditional computing systems are based on bits that can be set at 0 or 1. A qubit can be set for 0 and 1 at the same time, which will theoretically increase raw compute horsepower to the point where more complex chemistry problems could be solved to advance climate change research or break encryption schemes that are widely employed to ensure cybersecurity. Experts also expect that quantum computers will advance AI by making it possible to train more complex models much more quickly. The challenge is that qubits are not especially stable when distributed across multiple computing platforms. However, Minerbi said people in his field expect that by 2023 more than 1,000 qubits for various hardware platforms will have been created. 
The list of companies with quantum computing initiatives based on one or more subsets of those qubits is extensive. It includes: Alphabet, IBM, Honeywell, Rigetti Computing, Amazon, Microsoft, D-Wave Systems, Alibaba, Nokia, Intel, Airbus, Hewlett-Packard Enterprise (HPE), Toshiba, Mitsubishi, SK Telecom, NEC, Raytheon, Lockheed Martin, Biogen, Volkswagen, Silicon Quantum Computing, IonQ, Huawei, Amgen, and Zapata. The Chinese government is also known to be funding quantum computing research, as is the U.S. via groups like the National Security Agency (NSA), National Aeronautics and Space Administration (NASA), and Los Alamos National Laboratory. In most cases, there is a fair amount of collaboration between these U.S. entities. For example, the Google subsidiary of Alphabet has created the Quantum AI Laboratory in collaboration with NASA using quantum computers provided by D-Wave Systems. If organizations want to build applications that can take advantage of qubits that might be stable enough to support applications by 2023, they would need to start those efforts this year, Minerbi said. As such, Classiq expects demand for tools used to build quantum algorithms to steadily increase through the rest of this year and into the next. Given the cost of building quantum computers, they will for the most part be made available as another type of infrastructure-as-a-service (IaaS) platform. As quantum computing moves past the experimentation phase, the number of companies providing tools to build and manage software on these platforms will also grow. It may be a while before quantum computing applications are employed in production environments. In the meantime, however, the tools for building those applications are starting to find their way into the hands of researchers now. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,890
2,021
"New workload sharing framework drives breakthroughs in AI-based application performance | VentureBeat"
"https://venturebeat.com/2021/06/16/new-workload-sharing-framework-drives-breakthroughs-in-ai-based-application-performance"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored New workload sharing framework drives breakthroughs in AI-based application performance Share on Facebook Share on X Share on LinkedIn Photo of the Statue of Liberty before and after being processed by Topaz Gigapixel AI combining Intel Deep Link Technology with the OpenVINO toolkit. Presented by Intel Today’s PC architecture is tasked with processing chores that once demanded high-end workstations, such as generating photorealistic graphics or running real-time inferencing. Rather than turn to expensive systems, developers and gamers would prefer to elevate the power of a PC’s compute architecture to push the boundaries of their creativity, productivity and gaming performance. Getting more out of existing compute architecture is essential. But how do you accelerate compute-intensive tasks such as neural networks or machine learning, which drive upscaling images, pose recognition, and neural style transfer graphics breakthroughs? The answer is utilizing Intel Deep Link technology, which provides additional computational power CPUs and GPUs (both integrated and discrete) to run AI-powered applications at blazing speeds and without compromising the user experience. Above: Intel Deep Link combines an 11th Generation processor with an Intel Iris Xe integrated GPU and an Intel Iris Xe MAX discrete GPU, and manages the use of multiple GPUs simultaneously. Until recently, gamers were forced to take calculated risks such as overclocking CPUs to attain higher utilization of existing compute. Some developers tried complicated partitioning schemes to maximize available compute resources. But there were several problems with that: Either GPUs and CPUs weren’t tightly coupled or multiple GPUs couldn’t be put to use very efficiently. It was often extremely difficult for developers to partition workloads to harness these compute elements. Partitioning was often done “naively,” and the performance gain didn’t justify the effort. By running Intel Deep Link technology, developers now have the ability to strategically apply computing power that was previously unavailable, assigning tasks to parts of the computer that were otherwise dormant—and to do it efficiently. Deep Link enhances the way CPUs and GPUs interact, harnessing both integrated and discrete GPU capabilities — boosting the performance potential of AI-based applications. The Intel Iris Xe (also referred to as the integrated, or iGPU) and Intel Iris Xe MAX (also referred to as the discrete, or dGPU) can equally split tasks, completing a workload in roughly half the time. 
Combining Deep Link technology, along with the Intel Distribution of OpenVINO Toolkit, enables the assignment of workloads across processors, helping to drive significant developer productivity gains and performance improvement in AI-based applications requiring upscaling enhancement, video streaming & capture, or video rendering. OpenVINO prevents the overloading and bottle-necking that plagues many applications by simplifying the assignment of workloads across the entire computational platform. The applications not only perform much better, they’re easier to create — and the code can be deployed anywhere in device-specific iterations across CPUs, iGPUs, dGPUs, Vision Processing Units and FPGAs. Let’s look at several examples of how this potent technology framework raises the bar for what’s possible for PC application developers, gamers and those who demand exceptional performance out of mobile processors. Upscaling images Scanning old family photos can help beloved images achieve an eternal, digital life. But it doesn’t always improve their appearance. Developers have long been frustrated by the challenge of upscaling images to enhance their appeal. And consumers would prefer to see these digital image transformations happen instantaneously if possible. Harnessing the power of Deep Link and OpenVINO, new AI-based algorithms are able to enhance these images in ways never before possible on PCs. Code becomes multi-GPU-aware; the end result is the upscaling experience is snappier and sharper than it was without Deep Link. End users enjoy a faster runtime experience enabling them to work through an entire album of vintage photos much faster than ever before. For example, Topaz Labs, a software company known for applying machine learning to editing photographs, released an image upscaling platform called AI GigaPixel, which improves resolution and sharpens small, grainy or out-of-focus images so they can be displayed and printed in large sizes with far less distortion. By applying machine learning to image upscaling, the software optimizes each pixel to create an ideal image. And it works. “Topaz Labs have integrated Intel’s Deep Link Technology into our Gigapixel AI* application, upscaling images up to 6X with AI adding detail and clarity,” says Albert Yang, CTO, Topaz Labs. Video capture and encoding In a yoga class, you expect your instructor to not only watch your poses but also to help you improve them if necessary. But what if, rather than enter a Yoga studio, you prefer to live stream a class into your family room — what type of pose guidance is possible then? An instructor needs help to assess every online student’s poses. And that’s exactly what an Xe Max-enabled laptop can do on the fly utilizing both inferencing and image recognition. The instructor will let you know if you’re not holding your back leg straight when you attempt a Crescent Lunge or many other poses. Above: MixPose uses Intel Deep Link and OpenVINO to capture the position of students, enabling the instructor to guide them into improving their form. Until now, doing all of that advanced processing on a PC notebook, particularly over Wi-Fi and the internet, would have been nearly impossible. Peter Ma, co-founder/CEO of MixPose, created an interactive application for live yoga instruction at home that applies inferencing to assess a Yoga class students’ poses. The MixPose application also utilizes Deep Link, the OpenVINO toolkit and the x264 codec that can be hardware encoded via the Intel Iris Xe MAX GPU. 
While the discrete GPU manages the pose estimation inference, the CPU tackles encoding. Deep Link-enabled power sharing between the CPU and GPUs makes the immersive MixPose experience possible. “OpenVINO allows us to use the GPU to process pose detection and perform video encoding, reserving the CPU purely for the operating system,” says Ma, who developed it. He wanted the inferencing solution to work even when there’s poor Wi-Fi. Ma also enabled the solution to run on light laptops that can perform inferencing without relying on the cloud, which is also expensive to set up and maintain. This type of capability could be used for a wide range of remote assessments, not only in sports, but also across a spectrum of other training activities. Neural style transfer Gamers seek ever more immersive and fun experiences, and the introduction of neural style transfer unlocks a new realm of possibilities for the gaming community. The idea is to take an existing image, either static or moving, and morph it into another style altogether. It’s a stylistic mashup, a CPU-intensive process that can transform a Picasso into a Monet or a scene from a photo-realistic urban video game into one occurring on another planet in a distant galaxy. Yet game developers know that executing this transformation is not a trivial problem — it must happen on every frame at 30fps — and you can’t have that process slow down game play by tying up the CPU. And for that matter, gamers don’t want to see their GPU do anything other than focus on the gaming experience. All of which makes power sharing that much more critical to pushing the graphics envelope. Real-time style transfer relies on resource-intensive inferencing, which makes it a natural fit for Deep Link’s ability to let developers offload sections of the game to different compute units. With Deep Link you can run this process on an integrated GPU while gaming utilizes the discrete GPU. Of course, there are other means of substituting backgrounds with images, overlays, or avatars to protect a gamer’s privacy or personalize a gaming experience. For example, XSplit VCam from SplitmediaLabs, which utilizes Deep Link and is accelerated by Intel Deep Learning Boost, can put gamers in a game or provide a desired foreground and background blur. XSplit VCam uses iGPUs and multicore CPUs to accomplish this without impacting the performance of games running on discrete GPUs. “The ability to load balance across the two compute engines brings tremendous performance improvements to our products, which translates directly to improved user experiences,” says Henrik Levring, CEO, XSplit, SplitmediaLabs. Making this combination work AI is transforming graphics-intensive application development and pushing the boundaries of what you can do with gaming. But developers may fear that they lack the necessary development resources or that such development requires enormous processing overhead. Together, Intel’s Deep Link Technology and OpenVINO provide the tools that help developers design faster, smarter, more efficient AI-based applications, and build a customized compute platform that takes advantage of hardware processing space that was previously unavailable. This dynamic power-sharing approach offers significant gains in both performance and efficiency of compute-intensive AI applications, upleveling the processing capabilities of PC notebooks both for developers and gamers who demand world-class compute experiences. 
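For developers wondering what device assignment looks like in code, the sketch below uses the OpenVINO Python API of that era to pin a network to a specific GPU or let the MULTI plugin balance across devices. Model file names are placeholders, and device strings and API details vary by OpenVINO release and by the hardware present.

```python
# Rough sketch: target specific devices with OpenVINO's Inference Engine API.
# File names are placeholders; available device names depend on the system.
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="pose_model.xml", weights="pose_model.bin")

print(ie.available_devices)          # e.g. ['CPU', 'GPU.0', 'GPU.1']

# Pin the pose-estimation network to the discrete GPU, leaving the CPU free
# for other work, or let the MULTI plugin balance across both GPUs.
exec_net = ie.load_network(network=net, device_name="GPU.1")
# exec_net = ie.load_network(network=net, device_name="MULTI:GPU.1,GPU.0")
```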
Dig deeper: Learn more about the capabilities of Intel Deep Link Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected]. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,891
2,021
"Gartner advises tech leaders to prepare for action as quantum computing spreads | VentureBeat"
"https://venturebeat.com/2021/10/21/gartner-advises-tech-leaders-to-prepare-for-action-as-quantum-computing-spreads"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Gartner advises tech leaders to prepare for action as quantum computing spreads Share on Facebook Share on X Share on LinkedIn IBM Q System One, the industry’s first fully integrated quantum computing system, is assembled for mechanical testing for the first time at Goppion headquarters in Milan, Italy in July 2018, including a metal frame that supports and integrates different components of the system such as the cryostat, quantum processor, and control electronics. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Quantum computing has hit the radar of technical leaders, because of the huge efficiency it offers at scale. It will take years to develop for most applications, however, even as it makes limited progress in the near term in highly specialized fields of materials science and cryptography. Quantum methods are gaining more rapid attention, however, with special tools for AI , as seen in recent developments around natural language processing that could open up the “black box” of today’s neural networks. Last week’s release of a Quantum Natural Language Processing (QNLP) toolkit by Cambridge Quantum shows the new possibilities. Known as lambeq, the kit takes the form of a conventional Python repository that is hosted on GitHub. It follows the arrival at Cambridge Quantum of noted AI and NLP researchers and affords the chance for hands-on experience in QNLP. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The lambeq package, which takes its name from late semantics researcher Joachim Lambek, is said to convert sentences into quantum circuits, offering a new view into text mining, language translation, and bioinformatics corpora. Using quantum principles, NLP can provide explainability not possible in “bag of words” neural approaches done on classical computers today, according to Bob Coecke, the chief scientist at Cambridge Quantum. QNLP, he said, layers a compositional structure on circuits. As represented on schema, these structures do not look too unlike parsed sentences on grade-school blackboards. Presently popular methods of NLP “don’t have an ability to compose things together to find a meaning,” Coecke told VentureBeat. “What we want to bring in is compositionality in the classical sense — to use the same compositional structure. 
We want to bring reasoning back.” Quantum computing timelines Cambridge Quantum’s efforts to expand quantum infrastructure got significant backing earlier this year when Honeywell said it would merge its own quantum computing operations with Cambridge Quantum, to form an independent company to pursue cybersecurity , drug discovery, optimization, material science, and other applications, including AI. Honeywell said it would invest between $270 million – $300 million in the new operation. Cambridge Quantum said it would remain independent, working with various quantum computing players, including IBM. The lambeq work is part of an overall AI project that is the longest-term project among the efforts at Cambridge Quantum, said Ilyas Khan, founder, and CEO of Cambridge Quantum, in an e-mail interview. “We might be pleasantly surprised in terms of timelines, but we believe that NLP is right at the heart of AI more generally and therefore something that will really come to the fore as quantum computers scale,” he said. Khan cited cybersecurity and quantum chemistry as the most advanced application areas in Cambridge Quantum’s estimation. What kind of timeline does Khan see ahead for quantum hardware? “There is a very well-informed consensus not only about the hardware roadmap,” he replied, citing Honeywell and IBM among credible corporate players in this regard. These “and the very well amplified statement by Google about having fault-tolerant computers by 2029 are just some of the reasons why we say that the timelines are generally well-understood,” Khan said. The march of quantum Alliances, modeling advances, mergers, and even — in the cases of IonQ and Rigetti — public offerings comprise most of the quantum computing industry advancements of late. Often hybrid couplings of quantum and classical computing features are involved. New developments in the quantum industry include: D-Wave, builders of a quantum annealing computer that carried forward much of the early research in the area, this year added constrained quadratic model solvers to hybrid tooling for problems that run across classical and quantum systems; Rigetti Computing is working with Riverlane and Astex Pharmaceuticals to pair Rigetti’s quantum processors with cloud-based classical computing resources that, in effect, test quantum algorithms for drug discovery on a hybrid platform that mixes classical and quantum processing; IBM said it would partner with European electric utility company E.ON to develop workflow solutions for future decentralized electrical grids using the open-source Qiskit quantum computing SDK and the IBM Cloud; and, Sandbox, at Alphabet, has reportedly launched APIs that let developers use Google Tensor Processing Units to simulate quantum computing workloads. Use case drill down Indications are that, as researchers bounce between breakthroughs and setbacks, a variety of new quantum-inspired algorithms and software tools will appear. Enterprises need to pick targets carefully while treading some novel ground. Gartner analyst Chirag Dekate emphasized that, where applicable, enterprises should begin to prepare for quantum computing. He spoke this week at Gartner IT Symposium/Xpo 2021 Americas. He said companies should be sure not to outsource quantum innovation, but to instead use this opportunity to foster skills via small quantum working groups. “Starting early is the surest form of success,” he said. 
He said enterprise decision-makers must drill down on very specific use cases, as they prepare for quantum commercialization. “Quantum computing is not a general-purpose technology — we cannot use quantum computing to address all the business problems that we currently experience,” Dekate told the assembled and virtual conference audiences. Gartner’s Hype Cycle for Computing Infrastructure for 2021 has it that more than 10 years will elapse before quantum computing reaches the Plateau of Productivity. That’s the place where the analyst firm expects IT users to truly benefit from employing a given technology. The assessment is the same as it was in 2020, as is quantum computing’s present post on the Peak of Inflated Expectations — Gartner’s designation for rising technologies that are considered overhyped. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,892
2,021
"Multiverse Computing utilizes quantum tools for finance apps | VentureBeat"
"https://venturebeat.com/2021/11/01/multiverse-computing-utilizes-quantum-tools-for-finance-apps"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Multiverse Computing utilizes quantum tools for finance apps Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Despite great efforts to unseat it, Microsoft Excel remains the go-to analytics interface in most industries — even in the relatively tech-advanced area of finance. Could this familiar spreadsheet be the portal to futuristic quantum computing in finance? The answer is “yes,” according to the principals at Multiverse Computing. This San Sebastián, Spain-based quantum software startup is dedicated to forging forward with finance applications of the quantum kind, and its leadership sees the Excel spreadsheet as a logical means to begin to make this happen. “In finance, everybody uses Excel; even Bloomberg has connections for Excel tools,” said Enrique Lizaso Olmos, CEO of Multiverse Computing, which recently gained $11.5 million in a funding round headed by JME Ventures. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Excel is a key entry point for emerging quantum methods, Lizaso Olmos said, as he described how users can drag and drop data sets from Excel columns and rows into Multiverse’s Singularity SDK, which then launches quantum computing jobs on available hardware. From their desks, for example, Excel-oriented quants can analyze portfolio positions of growing complexity. The Singularity SDK can assign their calculations to the best quantum structure, whether it’s based on ion traps, superconducting circuits, tensor networks, or something else. Jobs can run on dedicated classical high-performance computers as well. Quantum computing for finance Multiverse’s recently closed seed funding round, led by JME, also included Quantonation, EASO Ventures, CLAVE Capital, and others. Multiverse principals have backgrounds in quantum physics, computational physics, mechatronics engineering, and related fields. On the business side, Lizaso Olmos can point to more than 20 years in banking and finance. The push to find ways to immediately start work in quantum applications is a differentiator for Multiverse, claims Lizaso. The focus is to work with available quantum devices that can solve today’s problems in the financial sector. Viewers see quantum computing as generally slow in developing, but the finance sector shows specific early promise, just as it has in the past with a host of emerging technologies. Finance apps drive investments like JME’s in Multiverse. 
In a recent report, “What Happens When ‘If’ Turns to ‘When’ in Quantum Computing,” Boston Consulting Group (BCG) estimated that equity investments in quantum computing nearly tripled in 2020, with a further uptick expected in 2021. BCG states “a rapid rise in practical financial-services applications is well within reason.” It’s not surprising, then, that Multiverse worked with BBVA (Banco Bilbao Vizcaya Argentaria) to showcase both quantum computing in finance and Singularity’s potential to optimize investment portfolio management, as well as with Crédit Agricole CIB to implement algorithms for risk management. “We have been working on real problems, using real quantum computers, not just theoretical things,” Lizaso Olmos said. Why quantum-inspired work matters Multiverse pursues both quantum and quantum-inspired solutions for open problems in finance, according to Román Orús, cofounder and chief scientific officer at the company. Such efforts create algorithms that mimic some techniques used in quantum physics, and they can run on classical computers. “It’s important to support quantum-inspired algorithm development because it can be deployed right away, and it’s educating clients about the formalism that they need for moving to the purely quantum,” Orús said. The quantum-inspired work is finding some footing in quantum machine learning applications, he explained. There, financial applications that could benefit include credit scoring in lending, credit card fraud detection, and instant transfer fraud detection. “These methods come from physics, and they can be applied to speed up and improve machine learning algorithms, and also optimization techniques,” Orús said. “The first ones to plug into finance are super successful.” Both Orús and Lizaso Olmos emphasize that being specific about applications matters: whether the tools are quantum or quantum-inspired, the applications that finance users pursue must be selected wisely. In other words, this is not your parents’ general-purpose computing. "
15,893
2,018
"Icon raises $9 million to fight homelessness with 3D-printed homes built in 24 hours | VentureBeat"
"https://venturebeat.com/2018/10/17/icon-raises-9-million-to-fight-homelessness-with-3d-printed-homes-built-in-24-hours"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Icon raises $9 million to fight homelessness with 3D-printed homes built in 24 hours Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Icon has raised $9 million in seed funding to build custom 3D-printed homes in less than 24 hours. In partnership with nonprofit New Story , the company wants to fight homelessness in the developing world. The partners already built a permitted 3D-printed home in Austin, Texas in early 2018. Icon wants to use the new funds to further its mission of revolutionizing homebuilding through robotics, software, and advanced materials. Its long-term goal is to make affordable, resilient, and sustainable homes across the world. “It’s our mission at Icon to re-imagine the approach to homebuilding and construction and make affordable, dignified housing available to everyone throughout the world,” said Jason Ballard, CEO of Austin-based Icon, in a statement. “We’re in the middle of a global housing crisis, and making old approaches a little better is not solving the problem. We couldn’t be happier with the team of global investors who are supporting Icon in our belief that the homebuilding industry needs a complete paradigm shift.” Above: Jason Ballard is CEO of Icon. Oakhouse Partners led the round, with additional investors including Vulcan Capital, the investment arm of Microsoft cofounder and philanthropist Paul Allen (who sadly died this week ); D.R. Horton, the largest homebuilder by volume in the U.S. since 2002; Emaar, the largest developer in the Middle East and creator of the tallest building in the world; Capital Factory, Texas’ premier startup accelerator; CAZ Investments; Cielo Property Group; Engage Ventures; Saturn Five; Shadow Ventures; MicroVentures; Trust Ventures; and Verbena Road Holdings. “What the Icon team has accomplished in such a short period of time is not only a transformational breakthrough in homebuilding, it is an inspiration for the entire world to think outside the box about how humanity will confront the global housing crisis,” said Jason Portnoy, managing partner at Oakhouse Partners, in a statement. “Oakhouse Partners invests in companies that apply innovative technologies to radically improve millions of lives. Icon demonstrates this perfectly through their advanced construction technologies, and we’re proud to support them on this important mission.” The next step for Icon is to deliver strategic, signature projects in the U.S. 
and abroad, including continued work with the nonprofit organization New Story. The second generation of the Vulcan printer is also underway and will be unveiled in 2019. Icon will also be expanding its team through numerous technical roles, including robotics, advanced materials, and software engineering. "
15,894
2,021
"Siemens, Dow partner on process manufacturing digital twin testbed | VentureBeat"
"https://venturebeat.com/2021/07/28/siemens-dow-partner-on-process-manufacturing-digital-twin-testbed"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Siemens, Dow partner on process manufacturing digital twin testbed Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Siemens and Dow have created a testbed to help bring digital transformation to chemical process manufacturing. This will allow frontline chemical process workers to help inform the development of digital twins for process manufacturing. The two industrial system and chemical industry giants hope to inspire new approaches to applications and the digitization of workflows in an industry that may otherwise be left behind by the rapid pace of technology innovation. The effort demonstrates how digital threads can be woven across processes. It will also make it easy for Dow’s digitalization team to bring in frontline workers from across their facility to glean ideas about identifying and implementing new digital twin-powered workflows, Siemens chemical industry manager Iiro Esko told VentureBeat. He said the testbed effort is being orchestrated as part of MxD, a manufacturing incubator that allows manufacturers and technology providers to showcase innovative technologies. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Analog processes go digital In most businesses, process automation refers to streamlining the handoffs between different digital workflows. But in the chemical industry, processes are typically built on top of analog methods and workflows that have remained relatively untouched for the last 30 years. In the industrial context, process automation characterizes most products produced in a continuous stream, instead of in individual units. This includes basic chemicals, plastics, pesticides, fertilizers, medicine, soaps, paper, and beverages. The testbed is intended to demonstrate ways to improve factory control and integrated modular automation and to adopt augmented reality and digital twins for quicker access to safety manuals, maintenance forms, and other resources to boost productivity. Some methods also have uses in R&D and compliance. To prove out methods, the testbed incorporates a variety of state-of-the-art industrial IoT hardware. This includes sensors; automation controllers; networking, power distribution, and power monitoring equipment; and drives and motors. Birthing new digital twins The Siemens/Dow testbed is a big part of Siemens’ strategy for promoting digital twins across industries. 
The company has dedicated significant investment to coordinating digital twin efforts across industries as diverse as aerospace, electronics, transportation, manufacturing, and medicine. Digital twin uptake may lag unless the value of digital twins is demonstrated in a way that shows how companies can use digital threads to connect previously disconnected workflows. Today, many of these use cases are conceptual prototypes, but testbeds that demonstrate some basic workflows to frontline workers could open ongoing opportunities for improvement down the road. Dow frontline workers will visit MxD to see for themselves which elements of a digital twin deliver the most value, Esko said. “We’re talking about field technicians, site engineers, third-party service providers, maintenance managers, reliability engineers, process automation groups, process operators, and plant managers,” he added. Siemens’ manufacturing testbed work suggests a strategy for inspiring digital transformation beyond IT into a wider range of industries and real-world processes. In some ways, this follows in the footsteps of the Centers of Excellence (COE) concept companies have been implementing to drive digital process automation technologies such as RPA and process mining. In these cases, companies coordinate activities through a single center that helps showcase success to inspire other use cases across the company. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,895
2,021
"Buildots boosts digital twin process mining with $30M | VentureBeat"
"https://venturebeat.com/2021/08/04/buildots-boosts-digital-twin-process-mining-with-30m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Buildots boosts digital twin process mining with $30M Share on Facebook Share on X Share on LinkedIn Architect using a tablet at construction site Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Buildots, a construction digital twin company, garnered a $30 million series B round led by Lightspeed Ventures, bringing its total investment to $46 million. Buildots will use the new funds to double the size of its global team, focusing on sales and R&D to expand its digital twins efforts, which use process mining techniques to improve outcomes as construction trades go digital. “The new funding will support our ambitious growth plans for 2021-2022, including extending our existing sales team and opening new territories,” Buildots cofounder and CEO Roy Danon told VentureBeat. “It will also support additional enhancements to the product, such as supporting more project workflows, integrations with other ecosystem players, and [fine-tuning] our AI to provide more critical insights to our clients,” he continued. Buildots has early customers in 13 different countries, including Build Group in California and Washington state, MBN in Germany, Gammon in Hong Kong, and Wates in the U.K. Previous investors include TLV Partners, Future Energy Ventures, and Tidhar Construction Group. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Operationalizing digital twins While other companies focus on the design or presentation of 3D construction data, Buildots specializes in operationalizing it. Buildots has concentrated on the gap between existing tools for design, scheduling, document management, and process controls that provide visibility into what’s happening on construction sites. The company focuses on higher-frequency updates and greater detail. Founded in 2018, Buildots aims to improve the user experience for workers and managers. Its special sauce lies in streamlining and automating the reality capture process using hardhat-mounted 360-degree cameras. The Buildots tools bring process mining techniques to construction projects. The software is able to track the exact process by which construction projects are built for the first time, Danon said. Connecting these process models with the original design and schedule information is intended to help managers learn more about bottlenecks in their existing processes and how to get them right the first time. 
In the background, Buildots’ AI algorithm double-checks new work against the plan, tracks progress, and updates an as-built digital twin model. The granularity of information in Buildots enables teams to drill down on any issue found on-site and take immediate actions to keep the project on budget and on schedule. Identifying bottlenecks through process mining A project’s current state is captured on an ongoing basis through cameras while teams make incremental changes. Proprietary AI and computer vision algorithms fuse this data with the latest design and scheduling plans and update the platform’s internal digital twin. For example, one European company using Buildots discovered that its concrete finishing team was proceeding much more slowly than the partition building team. This created a bottleneck for the construction of new floors. The Buildots application alerted managers to the problem. Then it helped them formulate a new plan that diverted workers away from building partitions to finishing the concrete, which reduced delays for everyone. Improving 3D model quality The platform can also identify quality gaps between the plan and what was actually built. It is common for humans to miss some elements when manually comparing building documents to what they see. Manual tracking processes tend to be infrequent; have low granularity; and rely on people’s objectivity, skill, and attention to detail. Once such processes are automated, teams capture details more frequently, which reduces the delays in resolving problems. It is also possible to drill down into construction progress at the level of an individual socket and its different stages of installation. For example, the two images below show a 3D model of the plan on the right and a white outline where the application detected a missing outlet. “While this isn’t a huge deal for any given outlet, on the average project, we spot a missing element for every 50 to 100 square meters,” Danon said. Averting hundreds of those issues can lead to a substantial efficiency improvement. Above: Here, software detected an overlooked power outlet requirement. Transparent AI builds trust The focus on updating and auditing the data trail across the lifecycle of a project is another key feature. Existing market solutions such as PlanGrid, Procore, and others have already paved the way for construction teams now using mobile apps on the construction site. Today’s engineers and managers are generally comfortable using iPads or web applications in their day-to-day work. But all these tools require someone to enter data manually. In contrast, Buildots’ approach to digital twins automates this process and connects the data to an audit trail woven into AI models. This transparency allows construction teams to understand how conclusions about a particular project scheduling problem were reached. “We have built our platform with the principle of transparent AI , meaning that every conclusion the system makes can be drilled down into so that construction managers can develop trust with their new virtual team members,” Danon said. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
"
15,896
2,021
"Ansys CTO sees simulation accelerating digital twins development | VentureBeat"
"https://venturebeat.com/2021/08/10/ansys-cto-sees-simulation-accelerating-digital-twins-development"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Ansys CTO sees simulation accelerating digital twins development Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Long before there were digital twins or the internet of things, Ansys was making simulation tools to help engineering teams design better products, model the real world, and expand the boundaries of science research. VentureBeat caught up with Ansys CTO Prith Banerjee, who elaborated on why interest in digital twins is taking off, how modeling and simulation are undergoing key developments, and how AI and traditional simulation approaches are starting to complement one another. His view is that of a foundational player surveying a robust set of new applications. This interview has been edited for clarity and brevity. VentureBeat: What do executive managers need to know about modeling and simulation today? They both allow us to peer deeper into things, but how do these underlying technologies serve in various contexts to speed up the ability to explore different designs, trade-offs, and business hypotheses? Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Prith Banerjee: Simulation and modeling help companies around the world develop the products that consumers rely on every day — from mobile devices to cars to airplanes and frankly everything in between. Companies use simulation software to design their products in the digital domain — on the computer — without the need for expensive and time-consuming physical prototyping. The best way to understand the advantages of simulation is by looking at an example: One blue chip customer is leveraging simulation technology to kickstart digital transformation initiatives that will benefit customers by lowering development costs, cutting down the time it takes to bring products to market. A more specific example would be a valve in an aircraft engine that regulates pressure in a pipe, or a duct that needs to be modeled in many ways. Through digital modeling , engineers can vary the pressure and temperature of the valve to gauge its strength and discover failure points more quickly. As a result, engineers no longer need to build and test several different configurations. In the past, engineers would build multiple prototypes in hardware, resulting in long times and cost. 
Now they can build the entire virtual prototype through software simulation and create an optimal design by exploring thousands of designs. VentureBeat: How would you define a digital twin, and why do you think people are starting to talk about them more as a segment? Banerjee: Think of a digital twin as a connected, virtual replica of an in-service physical entity, such as an asset, a plant, or a process. Sensors mounted on the entity gather and relay data to a simulated model (the digital twin) to mirror the real-world experience of that product. Digital twins enable tracking of past behavior of the asset, provide deeper insights into the present, and, most importantly, they help predict and influence future behavior. While digital twins as a concept are not new, the technology necessary to enable digital twins (such as IoT, data, and cloud computing) has only recently become available. So, digital twins represent a distinct new application of these technology components in the context of product operations and are used in various phases — such as design, manufacturing, and operations — and across various industries — like aerospace, automotive, manufacturing, buildings and infrastructure, and energy. Also, they typically impact a variety of business objectives. That could include services, predictive maintenance, yield, and [overall equipment effectiveness], as well as budgets. They also scale with a number of monitored assets, equipment, and facilities. In the past, customers have built digital twins using data analytics from data gathered from sensors using an IOT platform alone. Today, we have demonstrated that the accuracy of the digital twins can be greatly enhanced by complementing the data analytics with physics-based simulation. It’s what we call hybrid digital twins. Above: Ansys CTO Prith Banerjee VentureBeat: In what fundamental ways do you see modeling and simulation complementing digital twins and vice versa? Banerjee: Simulation is used traditionally to design and validate products — reducing physical prototyping and cost, yielding faster time to market, and helping design optimal products. The connectivity needed for products to support digital twins adds significant complexity. That complexity could include support for 5G or increased concerns about electromagnetic interference. With digital twins, simulation plays a key role during the product operation, unlocking key benefits for predictive and prescriptive maintenance. Specifically, through physics, simulation provides virtual sensors, enables “what-if” analysis, and improves prediction accuracy. VentureBeat: AI and machine learning models are getting much press these days, but I imagine there are equally essential breakthroughs in other types of models and the trade-offs between them. What do you think are some of the more exciting advances in modeling for enterprises? Banerjee: Artificial intelligence and machine learning (AI/ML) have been around for more than 30 years, and the field has advanced from concepts of rule-based expert systems to machine learning using supervised learning and unsupervised learning to deep learning. AI/ML technology has been applied successfully to numerous industries such as natural language understanding for intelligent agents, sentiment analysis in social media, algorithmic trading in finance, drug discovery, and recommendation engines for ecommerce. People are often unaware of the role AI/ML plays in simulation engineering. 
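One common way to "explore thousands of designs" without thousands of solver runs is surrogate modeling: run a handful of expensive simulations, fit a cheap approximation, and sweep the design space against the approximation. The sketch below shows the pattern with a made-up valve pressure-drop function and a plain polynomial fit standing in for a real physics solver and a trained neural network; it is illustrative only and not an Ansys workflow.

```python
# Hedged sketch of surrogate-assisted design exploration: a few "expensive" simulation
# samples train a cheap polynomial surrogate, which is then swept across the design space.
# The pressure-drop function is a made-up stand-in for a real physics solver.
import numpy as np

def expensive_simulation(opening_mm):
    """Pretend solver: pressure drop across a valve as a function of its opening."""
    return 50.0 / (opening_mm + 1.0) + 0.02 * opening_mm ** 2

train_x = np.linspace(1, 40, 8)                      # only 8 costly solver runs
train_y = np.array([expensive_simulation(x) for x in train_x])

coeffs = np.polyfit(train_x, train_y, deg=3)         # cheap cubic surrogate
surrogate = np.poly1d(coeffs)

candidates = np.linspace(1, 40, 2000)                # sweep thousands of designs cheaply
best = candidates[np.argmin(surrogate(candidates))]
print(f"surrogate-optimal opening ~ {best:.1f} mm, "
      f"verified pressure drop = {expensive_simulation(best):.2f}")
```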
In fact, AI/ML is applied to simulation engineering and is critical in disrupting and advancing customer productivity. Advanced simulation technology, enhanced with AI/ML, super-charges the engineering design process. We’ve embraced AI/ML methods and tools for some time, well before the current buzz around this area. Physics-based simulation and AI/ML are complementary, and we believe a hybrid approach is extremely valuable. We are exploring the use of these methods to improve the runtimes, workflows, and robustness of our solvers. On a technical level, we are using deep neural networks inside the Ansys RedHawk-SC product family to speed up Monte Carlo simulations by up to 100x to better understand the voltage impact on timing. In the area of digital twins, we are using Bayesian techniques to calibrate flow network models that then provide highly accurate virtual sensor results. Early development shows flow rate correlation at multiple test points within 2%. Another great example where machine learning is meaningfully impacting customer design comes from autonomous driving simulations. An automotive customer in Europe leveraged Ansys OptiSLang machine learning techniques for a solution to the so-called “jam-end” traffic problem, where a vehicle in front changes lanes suddenly, [impacting] traffic. According to the customer, they were able to find a solution to this 1,000 times faster than when using their previous Monte Carlo methods. VentureBeat: So, Ansys has been in the modeling and simulation business for quite a while. How would you characterize some of the significant advances in the industry over this period, and how is the pace of innovation changing with faster computers, faster DevOps processes in software and in engineering, and improvements in data infrastructure? Banerjee: Over time, model sizes have grown drastically. Fifty years ago, simulation was used to analyze tiny portions of larger components, yet it lacked the detail and fidelity we rely on today. At that time, those models were comprised of dozens –at most hundreds — of simulation “cells.” Today, simulation is solving massive models that are comprised of millions (and sometimes even billions) of cells. Simulation is now deployed to model entire products, such as electric batteries, automobiles, engines, and airplanes. As a result, simulation is at the forefront of advancing electrification, aerospace, and key sustainability initiatives aimed at solving the world’s biggest problems. The core concepts of simulation were known a decade ago; however, customers were forced to run their simulations using coarse meshing to approximate their simulations to get the results back overnight. Today, with advances in high-performance computing, it is possible to accomplish incredibly accurate simulation of the physics in a very short amount of time. Furthermore, by using AI/ML we are exploring another factor of ten to one hundred times the speed and accuracy that was previously possible, all enabled by HPC on the cloud. VentureBeat: What do you think are some of the more significant breakthroughs in workflows, particularly as you cross multiple disciplines like mechanical, electrical, thermal, and cost analysis for designing new products? Banerjee: The world around us is governed by the laws of physics, and we solve these physics equations using numerical methods such as finite element or finite volume methods. 
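The Bayesian calibration Banerjee mentions, tuning a flow model against a few physical sensors so it can act as a "virtual sensor" elsewhere, can be illustrated with a toy grid posterior. Everything below (the square-root flow relation, the measurements, the noise level) is invented for the example and is not the Ansys implementation.

```python
# Hedged sketch of Bayesian calibration for a "virtual sensor": infer a flow coefficient
# from a few noisy measurements, then predict flow at an operating point with no sensor.
# The flow relation and all numbers are invented; this is not an Ansys workflow.
import numpy as np

def flow_model(coefficient, pressure_drop):
    return coefficient * np.sqrt(pressure_drop)      # toy physics relation

measured_dp = np.array([1.0, 2.0, 4.0])              # pressure drops where sensors exist
measured_flow = np.array([2.1, 2.9, 4.2])            # noisy flow readings
noise_std = 0.2

grid = np.linspace(0.5, 4.0, 500)                    # candidate coefficients (uniform prior)
# Gaussian likelihood of the data for each candidate coefficient
residuals = measured_flow[None, :] - flow_model(grid[:, None], measured_dp[None, :])
log_like = -0.5 * np.sum((residuals / noise_std) ** 2, axis=1)
posterior = np.exp(log_like - log_like.max())
posterior /= posterior.sum()

coeff_mean = np.sum(grid * posterior)
virtual_reading = flow_model(coeff_mean, 3.0)        # "virtual sensor" prediction
print(f"calibrated coefficient ~ {coeff_mean:.2f}, virtual flow at dp=3.0: {virtual_reading:.2f}")
```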
In the past, our customers used simulation to model only a single physics — such as structures or fluids or electromagnetics — at a given time since the computational capabilities were limited. But the world around us is not limited to single physics interactions. Rather, it has multiphysics interactions. Our solvers now support multiphysics interactions quickly and accurately. Ansys Workbench, which allows cross-physics simulation tools to integrate seamlessly, was a key breakthrough in this market. Workbench opened new simulation capabilities that, prior to its inception, would have been nearly impossible. Our LS-DYNA tool supports multiphysics interactions in the tightest manner at each time step. Beyond Workbench, today the market is continuing to expand into areas like model-based systems engineering, as well as broader systems workflows like cloud. Finally, with the use of AI/ML, we are entering a world of generative design, exploring 10,000 different designs to specification, and rapidly simulating all of them to give the best option to the designer. A very exciting future indeed! VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,897
2,021
"Construction procurement platform Agora nabs $30M | VentureBeat"
"https://venturebeat.com/2021/08/12/construction-procurement-platform-agora-nabs-30m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Construction procurement platform Agora nabs $30M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Agora , a construction procurement platform, raised an additional $33 million series B funding, bringing total investment to $44 million. Investors include Tiger Global, 8VC, Tishman Speyer, Suffolk Construction, Jerry Yang, Michael Ovitz, LeFrak, and Kevin Hartz. The company helps contractors streamline the paperwork required for ordering and tracking construction materials , an area expected to see an uptick in the wake of the new infrastructure bill in the U.S. The company’s software complements other construction digital twins tech by improving the material management aspects to increase time on tools. The company claims its software can reduce a foreman’s paperwork by 8 hours per week. The investment will help boost its staff and R&D efforts. The construction industry is ripe for digital transformation. The $10 trillion industry employs over 200 million people around the world, according to McKinsey research. Yet labor productivity has only grown by 0.1% per year since 1947, compared to 3% in other industries like agriculture, retail, and manufacturing. Agora CEO and cofounder Maria Rioumine told VentureBeat, “Because we’ve so heavily underinvested in construction technology, millions of construction workers across the U.S. still have to rely on manual, pen-and-paper processes for critical parts of their work.” Simplifying a cumbersome process Forepersons use Agora to find material, check statuses, and stay updated on when parts are arriving. Office teams use Agora Desktop as their mission control station to manage requisitions coming in from different jobs. “By having both office and field teams on the same system, we ensure increased transparency into where materials are and save both teams a huge amount of time that they can spend on their core competency — building,” Rioumine said. Customers sign an annual contract for the service priced according to their size. The company has increased revenue by 760% over the past year and increased the number of requisitions processed by thirty times. Rioumine said that materials management is a cumbersome process that demands a significant amount of office and warehouse teams’ time. Many contractors put blind trust in the supplier to keep track of materials inventory, and others lack a formal tracking process. 
So Agora is expanding its product with inventory management capabilities that promise to reduce materials waste, prevent overpaying for materials, and help contractors increase profits. Growing a beachhead Procurement is a big industry with many well-established players. For example, SAP acquired Ariba for $3.4 billion in 2012. Rioumine believes Agora creates unique value for the construction industry by improving the user experience for trade-specific workflows. “Building an intuitive platform that contractors love and find easy to use requires a profound understanding of our customers and dedicated focus,” Rioumine said. For example, the company started with the electrical trade, responsible for over $200 billion in U.S. revenues. This gave the company a beachhead on construction projects that has eased the transition into other verticals like mechanical. The approach echoes the beachhead strategy of former Ariba CEO Keith Krach, who later went on to turn DocuSign into a verb. “Having our customers evangelize our product has been a huge driver for our sales growth and customer traction,” Rioumine said. "
15,898
2,021
"Google launches 'digital twin' tool for logistics and manufacturing | VentureBeat"
"https://venturebeat.com/2021/09/14/google-launches-digital-twin-tool-for-logistics-and-manufacturing"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google launches ‘digital twin’ tool for logistics and manufacturing Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Google today announced Supply Chain Twin, a new Google Cloud solution that lets companies build a digital twin — a representation of their physical supply chain — by organizing data to get a more complete view of suppliers, inventories, and events like weather. Arriving alongside Supply Chain Twin is the Supply Chain Pulse module, which can be used with Supply Chain Twin to provide dashboards, analytics, alerts, and collaboration in Google Workspace. The majority of companies don’t have visibility of their supply chains, resulting in “stock outs” at retailers and aging inventory at manufacturers. In 2020, out-of-stock items alone cost an estimated $1.14 trillion. The past year and a half of supply chain disruptions has further shown the need for insights into operations to dynamically adjust fleet routes and inventory levels. With Supply Chain Twin, companies can bring together data from multiple sources by enabling views of the datasets to be shared with suppliers and partners. The solution supports enterprise business systems that contain an organization’s locations, products, orders, and inventory operations data as well as data from suppliers and partners such as stock and inventory levels and material transportation status. Supply Chain Twin also draws from public sources of contextual data such as weather, risk, and sustainability. “Digital twin” approaches to simulation have gained currency in other domains. For instance, London-based SenSat helps clients in construction, mining, energy, and other industries create models of locations for projects they’re working on. GE offers technology that allows companies to model digital twins of actual machines and closely track performance. And Microsoft provides Azure Digital Twins and Project Bonsai, which model the relationships and interactions between people, places, and devices in simulated environments. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Siloed and incomplete data is limiting the visibility companies have into their supply chains,” Hans Thalbauer, managing director of supply chain and logistics at Google Cloud, said in a statement. 
“The Supply Chain Twin enables customers to gain deeper insights into their operations, helping them optimize supply chain functions from sourcing and planning to distribution and logistics.” Supply Chain Pulse Supply Chain Pulse, which was also launched today, offers real-time visibility, event management, and AI-driven optimization and simulation. Leveraging it, teams can drill down into operational metrics with performance dashboards that make it easier to view supply chain status. In addition, they can set alerts that trigger when metrics reach user-defined thresholds and build workflows that allow users to collaborate to resolve issues. Supply Chain Pulse’s AI-driven algorithm recommendations suggest responses to events, flag more complex issues, and simulate the impact of hypothetical situations. In the coming weeks, Google Cloud customers will be able to tap data, app, and system integration partners including Climate Engine, Craft, Crux, Project44, Deloitte, Pluto7, and TCS to integrate Supply Chain Pulse and Supply Chain Twin with their existing setups. Renault is among the companies that have deployed Supply Chain Twin to get a view of inventory, suppliers, and more. Supply Chain Twin and Supply Chain Pulse follow the rollout of Google’s Visual Inspection AI, another industrial solution that taps AI to spot defects in manufactured goods. Logistics, manufacturing, retail, and consumer product goods are undergoing a resurgence as business owners look to modernize their factories and speed up operations. According to a 2020 PricewaterhouseCoopers survey, companies in manufacturing expect efficiency gains over the next five years attributable to digital transformations. McKinsey’s research with the World Economic Forum puts the value creation potential of manufacturers implementing “Industry 4.0” — the automation of traditional industrial practices — at $3.7 trillion in 2025. "
15,899
2,021
"Nexar and Las Vegas tackle traffic with digital twins | VentureBeat"
"https://venturebeat.com/2021/09/27/nexar-and-las-vegas-tackle-traffic-with-digital-twins"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nexar and Las Vegas tackle traffic with digital twins Share on Facebook Share on X Share on LinkedIn Wynn Las Vegas Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Nexar is partnering with Nevada public road transit authorities to create digital twins to reduce traffic and improve safety. In effect, crowdsourced dashcam image data is used to feed digital twins that represent virtual models of road work. The partnership is another sign of consumer dashcam maker Nexar’s pivot to AI-infused digital twin-as-a-service offerings like its CityStream platform for governments and businesses. “Leveraging vision, and in particular crowdsourcing from moving cameras roaming the cities, allows for a rich, live, and equitable digital twin that covers entire cities and not just high-traffic areas,” Nexar CEO Eran Shir told VentureBeat. He said Nexar has developed AI algorithms to automatically extract road features from camera footage while still masking sensitive data. Such data is expected to feed digital twins that model city activity for civil engineering management. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! The partnership with Southern Nevada’s Regional Transit Commission (RTC) partnership will weave real-time camera data into a comprehensive digital twin of the Las Vegas area that reflects the impact of work zones, changes in traffic signs, and road quality on traffic patterns. A bet on digital twins Government agencies in the city of Las Vegas and the state of Nevada are pursuing several such partnerships to apply IoT and SmartCity technology. “This technology can help shape behavior and shift to a more proactive mindset by reducing the time in which problems are found, diagnosed, and fixed,” RTC engineering director John Peñuelas told VentureBeat. Peñuelas observed that the RTC is the only agency in the country that oversees public transportation, traffic management, roadway funding, transportation planning, and regional planning efforts under one roof. The RTC works in collaboration with local governments across five cities and the Nevada Department of Transportation to plan and fund roadway projects and traffic signal operations. RTC has been pioneering workflows for ingesting data from many different sources into a comprehensive digital twin of the city built on ESRI’s GIS platform. 
Now data from Nexar’s new CityStream service has been combined with existing fixed camera data, traffic sensors, and tracking RFIDs built into work zone traffic control elements. The RTC authorizes hundreds of work zones each day and wants to track how they affect traffic. While these projects may all be under a permit, it’s not always possible to foresee the impact on traffic, safety, and other issues. RTC has seen how work zones affect bus lanes and bus stops and can track whether the zones are laid out safely. The new partnership can automatically show when work zone activity gets out of hand and causes congestion and other issues. For its part, Nexar is also looking to tap into the autonomous vehicle market. Dashcams were purpose-built to capture data about accidents, and this vision data can help train AV systems on the corner cases of collisions, near-collisions, and other anomalous driving situations to learn proper road reactions. Nexar is also pioneering AI collision reconstruction techniques to create forensically preserved digital twins of an accident scene. However, many in the industry still consider this type of application to be science fiction. “Insurance professionals are still working on figuring out how to adapt processes to the rapid rise in camera use,” Shir said. Crowdsourcing traffic in tomorrow’s metaverse Nexar was founded in 2015 and has raised at least $100 million in funding to date. The company uses consumer-grade dash cameras to generate a fresh, high-quality, street-level view of the world and transient changes based on crowdsourced vision data from car cameras. Users capture about 130 million miles of road data per month. The company has pivoted beyond its initial service to provide digital twins for road planning and repair, delivery optimization, insurance, and autonomous vehicle training. Video is not as fine-grained as lidar data , but it is easier to collect frequently. In some corners, video data is getting a second look as lidars encounter other issues. In the long run, creating a metaverse that improves traffic will require improved collaboration and data sharing across various parties, including consumers, cities, car companies, and mapping services. Today this is something of a barrier to progress. Nexar, Cisco, and University of Catalonia researchers have proposed one IETF standard to help cars share digital twin representations of road conditions, traffic, and falling debris. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,900
2,021
"Siemens and FDA demo digital twin factory line | VentureBeat"
"https://venturebeat.com/2021/10/01/siemens-and-fda-demo-digital-twin-factory-line"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Siemens and FDA demo digital twin factory line Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Siemens has received a $1.78 million contract with the U.S. Food and Drug Administration to showcase how digital twin representations can improve medical device manufacturing. Results will be closely watched, as much is riding on such manufacturing breakthroughs. While it was already a matter of national concern, government agencies’ ability to quickly and safely approve vital medications and equipment was put under a bright spotlight with the arrival of COVID-19. The pandemic placed new and continued attention on digital twin technology, as the FDA-Siemens contract indicates. The pilot FDA program will demonstrate how medical device manufacturers can use digital twins to enhance product quality, speed up product development, and increase manufacturing capacity. Future goals will highlight best practices for quicker vaccine rollouts and safer drug development. “We hope this will inspire the medical device leadership to think more holistically and strategically about digital transformation and to invest in bringing our industry up to the level of many other industries,” Del Costy, senior vice president and managing director, Americas at Siemens Digital Industries Software, told VentureBeat. “We must continue to push for more digital design and manufacturing to increase accuracy, supply chain resilience, and improve patient outcomes.” Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Siemens has long been a leader in medical software and digital twins. This partnership will also give the FDA team hands-on experience with cutting-edge tech that could safely speed regulatory processes. The project will also showcase best practices for medical companies to adopt into their development workflows. Carryline USA and Premier Automation will also supply cutting-edge 3D conveyors and robotic systems that automate materials handling. These can be dynamically reconfigured for product variations and different products to support hyperautomation of factory lines. The project could also help FDA teams improve their understanding of new processes and tech to improve industry guidance, develop better regulatory science tools, and prepare for new manufacturing processes. 
This builds on prior FDA research on 3D printing, which led to several international standards and widespread adoption of the technology. The agency has also researched continuous manufacturing techniques for drug substances, work that led to draft international standards and guidance documents. Simplifying regulation One key goal of the program is to demonstrate how digital threads could simplify workflows that cross medical, engineering, quality, and regulatory processes. A digital thread connects the multiple data feeds, models, and representations that make up digital twins of products and factory configurations. “Creating and leveraging digital threads are an invaluable capability for both medical device manufacturers and the FDA,” Costy said. For example, digital threads can support integrated modeling and simulation processes that span product design, optimized production, and regulatory approval processes. One goal is to help regulators like the FDA find ways to better visualize product and manufacturing risks, provide more robust traceability and impact analysis, and enable more comprehensive data sets that are easier and faster to review. This will allow regulators to respond much faster, with more precision and better information, to both emergency and non-emergency needs. The effort was specifically funded by the FDA’s Office of Counterterrorism and Emerging Threats (OCET), which leads FDA efforts to address national and global health security, counterterrorism, and emerging threats. Transforming med production The pandemic was a significant factor in pursuing this kind of collaboration between the FDA and industry. “While the medical device industry has been advancing over the past few years, the pandemic exposed the gaps,” Costy said, “especially when compared with the non-medical device manufacturers that jumped in to help manufacture ventilators and other critical supplies.” Some of the improvements that digital twins can introduce to various types of processes include: simplifying design transfer across product development and manufacturing teams; bringing agility to scale production and transfer products across manufacturing lines; improving the ability to analyze product and process risks; transitioning from paper-based quality processes to digital workflows; and facilitating supplier collaboration and visibility. “The promise of digital twins, closed loop production systems, distributed manufacturing, and other advanced technologies is that they will enable more efficient use of resources,” FDA spokesperson Stephanie Caccomo told VentureBeat. That means better access to production where it is needed, and better resilience to disruptions by simulating outcomes and product quality with inputs, she continued. For its part, Siemens plans to configure many different use cases for digital twin workflows such as labeling, supplier collaboration, and designing for service. Down the road, the company hopes to explore new capabilities such as trusted traceability for improving the supply chain. This could help manufacturers rapidly mitigate supply shortages, swap out parts, and reduce counterfeit issues. The initial use cases will focus on medical devices. Eventually, Siemens would like to demonstrate how digital twins could be used for biologics, pharmaceuticals, food and beverage, and cosmetics manufacturing.
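A digital thread of the kind described above can be pictured as a linked trail of records — design revision, process step, quality result — that an auditor can walk end to end. The sketch below is a toy illustration of that traceability idea; the record identifiers and link structure are invented and do not represent Siemens' or the FDA's systems.

```python
# Hedged sketch of a "digital thread": a linked trail from a design revision through the
# manufacturing step to the quality record, so an auditor can trace any unit end to end.
# All identifiers are invented; this does not represent Siemens' or the FDA's systems.
records = [
    {"id": "DES-rev-C",   "kind": "design",  "links": []},
    {"id": "OP-seal-07",  "kind": "process", "links": ["DES-rev-C"]},
    {"id": "QC-lot-5521", "kind": "quality", "links": ["OP-seal-07"]},
]
by_id = {r["id"]: r for r in records}

def trace(record_id):
    """Walk the links back to the originating design record."""
    chain = [record_id]
    while by_id[chain[-1]]["links"]:
        chain.append(by_id[chain[-1]]["links"][0])
    return chain

print(" -> ".join(trace("QC-lot-5521")))   # QC-lot-5521 -> OP-seal-07 -> DES-rev-C
```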
"
15,901
2,021
"Gretel.ai, a platform for generating synthetic and privacy-preserving data, raises $50M | VentureBeat"
"https://venturebeat.com/2021/10/07/gretel-ai-a-platform-for-generating-synthetic-and-privacy-preserving-data-raises-30m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Gretel.ai, a platform for generating synthetic and privacy-preserving data, raises $50M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Gretel.ai , a platform for generating synthetic and privacy-preserving data, today announced that it raised $50 million in a series B led by Anthos Capital with participation from Section 32, Greylock, and Moonshots Capital. The funds bring the company’s total raised to $65.5 million and will be used to support product development, according to CEO Ali Golshan, with a particular focus on expansion into new use cases. Synthetic data, which is used to develop and test software systems in tandem with real-world data, has come into vogue as companies increasingly embrace digitization during the pandemic. In a recent survey of executives, 89% of respondents said synthetic data will be essential to staying competitive. And according to Gartner, by 2030, synthetic data will overshadow real data in AI models. Gretel provides a platform that enable developers to experiment, collaborate, and share data with other teams, divisions, and organizations. Customers can synthesize, transform, and classify data using a combination of tools and APIs, which apply AI techniques to generate synthetic stand-ins for production data. “Gretel’s tools enable developers and data practitioners to remove significant bottlenecks and enable ‘privacy by design,'” Golshan told VentureBeat via email. “[With it, customers can] synthesize data to boost underrepresented data sets for training machine learning and AI models, synthesize data to train machine learning and AI models where the synthesized data produced does not contain sensitive or personally identifiable information data, [and] transform data to power preproduction environments and testing with anonymized data.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Gretel, which is headquartered in San Diego, was founded in 2020 by Golshan, Alexander Watson, John Myers, and Laszlo Bock. Bock was the former SVP of people at Google, while Watson led security startup Harvest.ai until it was acquired by Amazon for around $20 million in 2017. Privacy-preserving data According to Golshan, the pandemic has accelerated the trend toward stricter data privacy regulation and compliance — and, subsequently, the demand for privacy tools to mitigate those and other risks related to users’ privacy. 
Fifty-one percent of consumers surveyed aren’t comfortable sharing their personal information, according to a Privitar survey. And in a Veritas report, 53% of respondents say they would spend more money with trusted organizations, with 22% saying they would spend up to 25% more with a business that takes data protection seriously. This current business environment is also pushing companies to move faster to stay competitive, which also creates risk. Across the board, security experts cite the pace of technology adoption as a major contributing factor to the current cybercrime environment. And research published by KPMG suggests that a large number of organizations have increased their investments in AI during the pandemic to the point that executives are now concerned about moving too fast. Above: Gretel’s platform aims to preserve privacy using synthetic data and other technologies. While synthetic data closely mirrors real-world data, mathematically or statistically, the jury’s out on its efficacy. A paper published by researchers at Carnegie Mellon outlines the challenges with simulation that impede real-world development, including reproducibility issues and the so-called “reality gap,” where simulated environments don’t adequately represent reality. Other research suggests the synthetic data can be as good for training a model compared with data based on actual events or people, however. For example, Nvidia researchers have demonstrated a way to use data created in a virtual environment to train robots to pick up objects like cans of soup, a mustard bottle, and a box of Cheez-Its in the real world. “In the privacy space, there are traditional companies more focused on compliance and regulations, and there are startups focusing on synthetic data for niche applications, but Gretel has taken a much more scalable approach by making forward-looking synthetic data and privacy tools available to developers as APIs,” Golshan said. “Synthetic data is one tool in the suite of privacy tools that we offer, which includes classification and transformation using advanced AI capabilities.” A growing toolset Gretel claims its platform is tech- and vertical-agnostic, compatible with a range of frameworks, apps, and programming languages. It covers tasks such as data labeling through the aforementioned API, as well as report generation for high-level scores and metrics that help assess the quality of Gretel’s synthetic data. Heading off rivals including Tonic, Delphix, Mostly AI, and Hazy, Gretel says it’s working with life sciences, financial, gaming, and technology brands on “transformative” applications, like creating synthetic medical records that can be shared between health care organizations. Gretel is in the beta stage of its release and not currently charging users or customers, but Golshan says that it’s reached proof-of-concept with several prospects and expects these companies to transition into paying customers once the platform enters general availability early next year. “We have almost 75,000 downloads of our open source distribution — Gretel’s ‘open core’ version of its synthesizer,” Golshan said. “We have 20 full-time staff and are expanding rapidly … By year-end 2022, we anticipate hiring 50 to 75 more staff, which will include more engineers and researchers, marketers, product managers, developer advocates, and sales.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. 
Discover our Briefings. "
15,902
2,021
"Nvidia says its GPU chip is a giant leap forward for computing | VentureBeat"
"https://venturebeat.com/2021/11/09/nvidia-says-its-gpu-chip-is-a-giant-leap-forward-for-computing"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nvidia says its GPU chip is a giant leap forward for computing Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Just how valuable has the GPU been to computing? Paresh Kharya, the senior director of product management and marketing at Nvidia, likes to say that the company’s chips are already driving a “million-fold leap” forward for the industry. The company offered its “big picture” analysis as part of its publicity built around the GTC conference that highlights how Nvidia GPUs can support artificial intelligence applications. The factor of one million is dramatically bigger than the older Moore’s law , which only promised that transistor counts on chips would double every two or so years. Many have noted that the doubling rate associated with Moore’s prediction has slowed recently, attributed to a number of reasons, such as the burgeoning costs of building factories. The doubling has also been less obvious to users because the extra transistors aren’t of much use for basic tasks like word processing. There’s only so much parallelism in daily workflow. Nvidia’s GPUs got 1,000 times more powerful Kharya bases his claim to a factor of several million on the combination of hungry new applications and a chip architecture that is able to feed them. Many of the new applications being explored today depend upon artificial intelligence algorithms, and these algorithms provide ideal opportunities for the massive collections of transistors on an Nvidia GPU. The work of training and evaluating the AI models is often inherently parallel. The speedup is accelerated by the shift from owning hardware to renting machines in datacenters. In the past, everyone was limited by the power of the computer sitting on their desks. Now, anyone can spin up 1,000 machines or more in the datacenter to tackle a massive problem in a few seconds. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! As an example, Kharya pointed out that in 2015 a single Kepler GPU from Nvidia took nearly a month to work through a popular computer vision model called ResNet-50. “We train that same model in less than half a minute on Selene , the world’s most powerful industrial supercomputer, which packs thousands of Nvidia Ampere architecture GPUs,” Kharya explained in a blog post. Some speed gain in this example came because of better, faster, and bigger GPUs. 
Kharya estimates that over the past 10 years, the raw computational power of Nvidia’s GPUs has grown by a factor of 1,000. The other factors come from enabling multiple GPUs in the datacenter to work together effectively. Kharya cited, as just a few examples, “our Megatron software, Magnum IO for multi-GPU and multi-node processing, and SHARP for in-network computing.” The rest came from the expansion of cloud options. Amazon’s Web Services has been a partner of Nvidia’s for years, and it continues to make it easier for developers to rent GPUs for machine learning or other applications. Can GPU growth trajectory match transistor counts? Kharya also offered another data point taken from the world of biophysics, where scientists simulated the action of 305 million atoms that make up SARS CoV-2 (coronavirus). He found that the newest versions of the simulation run 10 million times faster than the original one made 15 years ago. Improvements to the algorithm as well as faster chips contributed to this result. Other companies are pursuing the same massive increases. Google, for instance, is designing custom chips optimized for machine learning. These TPUs, named after the TensorFlow algorithm, have been available on Google’s Cloud platform since 2019. For all the buzz-worthy attention generated by a factor of one million, the only caveat is that we won’t see the same kind of exponential growth as we did with transistor counts. While the raw power of the basic GPUs may continue to grow in speed as chip fabrication continues down the path set by Moore, the boost that comes from moving to a datacenter only comes once. Adding more machines to speed up the process will always be linear in cost. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,903
2,021
"Log4j vulnerabilities, malware strains multiply; major attack disclosed | VentureBeat"
"https://venturebeat.com/2021/12/20/log4j-vulnerabilities-malware-strains-multiply-major-attack-disclosed"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Log4j vulnerabilities, malware strains multiply; major attack disclosed Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As cybersecurity teams grapple with having to potentially patch their systems for a third time against Apache Log4j vulnerabilities , additional malware strains exploiting the flaws and an attack against a European military body have come to light. Security firm Check Point reported Monday it has now observed attempted exploits of vulnerabilities in the Log4j logging library on more than 48% of corporate networks worldwide, up from 44% last Tuesday. On Monday, the defense ministry in Belgium disclosed that a portion of its network was shut down in the wake of a cyber attack that occurred last Thursday. A spokesperson for the ministry told a Belgian newspaper , De Standaard, that the attack had resulted from an exploitation of the vulnerability in Log4j. VentureBeat has reached out to a defense ministry spokesperson for comment. The report did not say whether or not the attack involved ransomware, but a translation of the report indicates that the Belgian defense ministry initiated “quarantine measures” to isolate the “affected areas” of its network. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Additional malware strains Meanwhile, the Cryptolaemus security research group on Monday reported that it has verified that Dridex, a malware strain that targets financial institutions, has been delivered through an exploit of the vulnerability in Log4j. The Dridex payloads have been delivered onto Windows devices, the research group said on Twitter. Researchers have previously reported that they’ve observed the use of Mirai and Muhstik botnets to deploy distributed denial of service (DDoS) attacks using the Log4j flaw, as well as deployment of Kinsing malware for crypto mining. Cisco Talos previously reported observing email-based attacks seeking to exploit the vulnerability. Akamai Technologies said in a blog post that along with crypto miners and DDoS bots, “we have found certain aggressive attackers performing a huge volume of scans, targeting Windows machines” by leveraging the vulnerability in Log4j. 
"Attackers were trying to deploy the notorious 'netcat' backdoor, a known Windows privilege escalation tool, which is commonly used for subsequent lateral movement or gaining privileges to encrypt the disk with ransomware," the company's security threat research team said. Researchers at Uptycs said they've observed attacks using the Log4j vulnerability that have involved delivery of botnet malware (Dofloo, Tsunami/Muhstik, and Mirai), coin miners (Kinsing and XMRig), and an unidentified family of Linux ransomware (which included a ransom note). "We can expect to see more malware families, especially ransomware, leverage this vulnerability and penetrate into victims' machines in the coming days," Uptycs researchers said in the post Monday. Ransomware threat At the time of this writing, there has been no public disclosure of a successful ransomware breach that exploited the vulnerability in Log4j, though a number of ransomware delivery attempts using the flaw have been observed. Researchers report having seen the attempted delivery of a new family of ransomware, Khonsari, as well as an older ransomware family, TellYouThePass, in connection with the Log4j vulnerability. Researchers at Microsoft have also spotted activity by suspected access brokers looking to establish a backdoor in corporate networks that can later be sold to ransomware operators, while Log4j exploits by ransomware gang Conti have been observed as well. Notably, Microsoft and cyber firm Mandiant said last week that they've observed activity from nation-state groups tied to countries including China and Iran seeking to exploit the Log4j vulnerability. Microsoft said that an Iranian group known as Phosphorus, which has previously deployed ransomware, has been seen "acquiring and making modifications of the Log4j exploit." Patching woes Companies' patching efforts have been complicated by the vulnerabilities that have been discovered in the first two patches for Log4j over the past week. Apache on Friday released version 2.17 of Log4j, the organization's third patch for vulnerabilities in the open-source software since the initial discovery of a remote code execution (RCE) vulnerability, known as Log4Shell, on December 9. Version 2.17 addresses the potential for denial-of-service (DoS) attacks in version 2.16, which had been released last Tuesday. The severity of the vulnerability is rated as "high," and the bug was independently discovered by several individuals, including researchers at Akamai and at Trend Micro. Version 2.16, in turn, had fixed an issue with the version 2.15 patch for Log4Shell that did not completely address the RCE issue in some configurations. Additionally, a discovery by cybersecurity firm Blumira last week suggests there may be an additional attack vector in the Log4j flaw, whereby not just vulnerable servers but also individuals browsing the web from a machine with unpatched Log4j software on it might be vulnerable. ("At this point, there is no proof of active exploitation," Blumira said.) Widespread vulnerability Many applications and services written in Java are potentially vulnerable due to the flaws in Log4j prior to version 2.17. The RCE flaws can enable remote execution of code by unauthenticated users. Along with enterprise products from major vendors including Cisco, VMware, and Red Hat, the vulnerabilities in Log4j affect many cloud services.
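For teams still working out what needs the latest patch, a common first step is simply to inventory Log4j JARs on disk. The sketch below is a best-effort illustration only; it assumes the standard log4j-core-<version>.jar file naming, will miss copies shaded or repackaged inside other JARs, and is not a substitute for a dedicated scanner.

```python
import re
from pathlib import Path

# Versions at or above this carry the fixes discussed in this article (Log4j 2.x line).
FIXED = (2, 17, 0)
PATTERN = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")

def find_vulnerable_log4j(root="."):
    """Walk a directory tree and flag log4j-core JARs older than 2.17.0.
    Best-effort only: renamed, shaded, or embedded copies are not detected."""
    hits = []
    for path in Path(root).rglob("log4j-core-*.jar"):
        match = PATTERN.search(path.name)
        if match and tuple(int(g) for g in match.groups()) < FIXED:
            hits.append(path)
    return hits

if __name__ == "__main__":
    # "/opt" is just an example root; point this at whatever paths apply to your hosts.
    for jar in find_vulnerable_log4j("/opt"):
        print(f"needs patching: {jar}")
```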
Research from Wiz provided to VentureBeat suggests that 93% of all cloud environments were at risk from the vulnerabilities, though an estimated 45% of vulnerable cloud resources have been patched at this point. Thus far, there is still no indicator on whether the widely felt ransomware attack against Kronos Private Cloud had any connection to the Log4j vulnerability or not. The parent company of the business, Ultimate Kronos Group (UKG), said in its latest update Sunday that the question of whether Log4j was a factor is still under investigation — though the company has noted that it did quickly begin patching for the vulnerability. Still, the likelihood of upcoming ransomware attacks that trace back to the Log4j vulnerabilities is high, according to researchers. “If you are a ransomware affiliate or operator right now, you suddenly have access to all these new systems,” said Sean Gallagher, a senior threat researcher at Sophos Labs, in an interview with VentureBeat on Friday. “You’ve got more work on your hands than you know what to do with right now.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,904
2,021
"Microsoft launches new Defender capabilities for fixing Log4j | VentureBeat"
"https://venturebeat.com/2021/12/28/microsoft-launches-new-defender-capabilities-for-fixing-log4j"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft launches new Defender capabilities for fixing Log4j Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Microsoft announced it has rolled out new capabilities in its Defender for Containers and Microsoft 365 Defender offerings for identifying and remediating the widespread vulnerabilities in Apache Log4j. Defender for Containers debuted December 9, merging the capabilities of the existing Microsoft Defender for Kubernetes and Microsoft Defender for container registries and adding new features such as Kubernetes-native deployment, advanced threat detection, and vulnerability assessment. On Monday night, Microsoft disclosed it has updated the Defender for Containers solution to enable the discovery of container images that are vulnerable to the flaws in Log4j, a widely used logging software component. Defender for Containers can now discover images affected by the three vulnerabilities in Log4j that have been disclosed and now patched, starting with the initial report of a remote code execution flaw in Log4j on December 9. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Vulnerability scanning Container images are scanned automatically for vulnerabilities when they are pushed to an Azure container registry, when pulled from an Azure container registry, and when running on a Kubernetes cluster , Microsoft’s threat intelligence team wrote in an update to its blog post about the Log4j vulnerability. The capability that enables scanning for vulnerabilities in container images running on a Kubernetes cluster is powered by technology from cyber firm Qualys, Microsoft noted. “We will continue to follow up on any additional developments and will update our detection capabilities if any additional vulnerabilities are reported,” the team said in the post. Microsoft Defender for Containers supports any Kubernetes clusters certified by the Cloud Native Computing Foundation. Along with Kubernetes, it has been tested with the Azure Kubernetes Service (AKS), Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service on Azure Stack HCI, AKS Engine, Azure Red Hat OpenShift, Red Hat OpenShift (version 4.6 or above), VMware Tanzu Kubernetes Grid, and Rancher Kubernetes Engine. Microsoft 365 Defender updates Meanwhile, for Microsoft 365 Defender, the company said it has introduced a consolidated dashboard for managing threats and vulnerabilities related to the Log4j flaws. 
The dashboard will “help customers identify and remediate files, software, and devices exposed to the Log4j vulnerabilities,” Microsoft’s threat intelligence team tweeted. These capabilities are supported on Windows and Windows Server, as well as on Linux, Microsoft said. However, for Linux, the capabilities require an update to version 101.52.57 or later of the Microsoft Defender for Endpoint Linux client. This “dedicated Log4j dashboard” provides a “consolidated view of various findings across vulnerable devices, vulnerable software, and vulnerable files,” the threat intelligence teams said in the blog post. Additionally, Microsoft said it has launched a new schema in advanced hunting for Microsoft 365 Defender, “which surfaces file-level findings from the disk and provides the ability to correlate them with additional context in advanced hunting.” “These new capabilities integrate with the existing threat and vulnerability management experience and are gradually rolling out,” Microsoft’s threat intelligence teams said in the post. The discovery capabilities cover installed application CPEs (Common Platform Enumerations) that are known to have vulnerabilities to the Log4j RCE, along with vulnerable Log4j Java Archive (JAR) files, the post says. Support coming for macOS Microsoft said it’s working to add support for the capabilities in Microsoft 365 Defender for Apple’s macOS, and said the capabilities for macOS devices “will roll out soon.” The new capabilities to protect against the Log4j vulnerability join other capabilities available in Microsoft offerings for addressing the vulnerability, known as Log4Shell. Those other offerings include Microsoft Sentinel, Azure Firewall Premium, Azure Web Application Firewall, RiskIQ EASM and Threat Intelligence, Microsoft Defender Antivirus, Microsoft Defender for Endpoint, Microsoft Defender for Office 365, Microsoft Defender for Cloud, and Microsoft Defender for IoT. Along with providing some of the largest platforms and cloud services used by businesses, Microsoft is a major cybersecurity vendor in its own right with 650,000 security customers. Microsoft has reported observing activities exploiting Log4Shell such as attempted ransomware deployment, crypto mining , credential theft, lateral movement, and data exfiltration. The company previously said it has observed activities by multiple cybercriminal groups seeking to establish network access by exploiting the vulnerability in Log4j. These suspected “ access brokers ” are expected to later sell that access to ransomware operators. Their arrival suggests that an “increase in human-operated ransomware” may follow against both Windows and Linux systems, the company said. Widespread vulnerability Microsoft and cyber firm Mandiant have also said they’ve observed activity from nation-state groups — tied to countries including China and Iran — seeking to exploit the Log4j vulnerability. An Iranian group known as Phosphorus, which has previously deployed ransomware, has been seen “acquiring and making modifications of the Log4j exploit,” Microsoft said. Additionally, the company previously said it has observed a new family of ransomware, known as Khonsari , used in attacks on non-Microsoft hosted Minecraft servers by exploiting the vulnerability in Apache Log4j. Many enterprise applications and cloud services written in Java are potentially vulnerable due to the flaws in Log4j prior to version 2.17.1, which was released today. 
The open source logging library is believed to be used in some form — either directly or indirectly by leveraging a Java framework — by the majority of large organizations. Version 2.17.1 of Log4j addresses a newly discovered vulnerability ( CVE-2021-44832 ), and is the fourth patch for vulnerabilities in the Log4j software since the initial discovery of the RCE vulnerability. The newly discovered vulnerability in Log4j “requires a fairly obscure set of conditions to trigger,” said Casey Ellis, founder and chief technology officer at Bugcrowd, in a statement shared with VentureBeat. “So, while it’s important for people to keep an eye out for newly released CVEs for situational awareness, this CVE doesn’t appear to increase the already elevated risk of compromise via Log4j.” Updated to reference the release of version 2.17.1 of Log4j and add comments from Bugcrowd’s Casey Ellis. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,905
2,021
"Patching Log4j to version 2.17.1 can probably wait | VentureBeat"
"https://venturebeat.com/2021/12/29/patching-log4j-to-version-2-17-1-can-probably-wait"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Patching Log4j to version 2.17.1 can probably wait Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A number of security professionals say that the latest vulnerability in Apache Log4j, disclosed on Tuesday, does not pose an increased security risk for the majority of organizations. As a result, for many organizations that have already patched to version 2.17.0 of Log4j, released December 17, it should not be necessary to immediately patch to version 2.17.1, released Tuesday. While the latest vulnerability “shouldn’t be ignored,” of course, for many organizations it “should be deployed in the future as part of usual patch deployment,” said Ian McShane, field chief technology officer at Arctic Wolf, in comments shared by email with VentureBeat. Casey Ellis, founder and chief technology officer at Bugcrowd, described it as a “weak sauce vulnerability” and said that its disclosure seems more like a marketing effort for security testing products than an “actual effort to improve security.” Patching woes The disclosure of the latest vulnerability comes as security teams have been dealing with one patch after another since the initial disclosure of a critical remote code execution (RCE) flaw in the widely used Log4j logging software on December 9. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The latest vulnerability appears in the Common Vulnerabilities and Exposures (CVE) list as 2021-44832 , and has a severity rating of “medium” (6.6). It enables “an attacker with permission to modify the logging configuration file [to] construct a malicious configuration,” according to the official description. However, for teams that have been working nonstop to address Log4j vulnerabilities in recent weeks, it’s important to understand that the risks posed by the latest vulnerability in Log4j are much lower than the previous flaws—and may not be a “drop everything and patch” moment, according to security professionals. While possible that an organization might have the configurations required for exploiting CVE-2021-44832, this would in fact be an indicator of a much larger security issue for the organization. The latest vulnerability is technically an RCE, but it “can only be exploited if the adversary has already gained access through another means,” McShane said. 
By comparison, the initial RCE vulnerability, known as Log4Shell, is considered trivial to exploit and has been rated as an unusually high-severity flaw (10.0). A niche issue The latest Log4j vulnerability requires hands-on keyboard access to the device running the component, so that the threat actor can edit the config file to exploit the flaw, McShane said. “If an attacker has admin access to edit a config file, then you are already in trouble—and they haven’t even used the exploit,” he said. “Sure, it’s a security issue—but it’s niche. And it seems far-fetched that an attacker would leave unnecessary breadcrumbs like changing a config file.” Ultimately, “this 2.17.1 patch is not the critical nature that an RCE tag could lead folks to interpret,” McShane said. Indeed, Ellis said, the new vulnerability “requires a fairly obscure set of conditions to trigger.” “While it’s important for people to keep an eye out for newly released CVEs for situational awareness, this CVE doesn’t appear to increase the already elevated risk of compromise via Log4j,” he said in a statement shared with VentureBeat. Overhyping? In a tweet Tuesday, Katie Nickels, director of intelligence at Red Canary, wrote of the new Log4j vulnerability that it’s best to “remember that not all vulnerabilities are created equally.” “Note that an adversary *would have to be able to modify the config* for this to work…meaning they already have access somehow,” Nickels wrote. Wherever RCE is mentioned in relation to the latest vulnerability, “it needs to be qualified with ‘where an attacker with permission to modify the logging configuration file’ or you are overhyping this vuln,” added Chris Wysopal, cofounder and chief technology officer at Veracode, in a tweet Tuesday. “This is how you ruin relationships with dev teams.” Log4j 2.17 RCE CVE-2021-44832 in a nutshell pic.twitter.com/GPaHcDHlj0 — Florian Roth ⚡️ (@cyb3rops) December 28, 2021 “In the most complicated attack chain ever, the attacker used another vuln to get access to the server, then got CS running, then used CS to edit the config file/restart the service to then remotely exploit the vuln,” tweeted Rob Morgan, founder of Factory Internet. “Yep, totally the best method!” A widespread vulnerability Many enterprise applications and cloud services written in Java are potentially vulnerable to the flaws in Log4j. The open source logging library is believed to be used in some form — either directly or indirectly by leveraging a Java framework — by the majority of large organizations. Version 2.17.1 of Log4j is the fourth patch for vulnerabilities in the Log4j software since the initial discovery of the RCE vulnerability, but the first three patches have been considered far more essential. Version 2.17.0 addresses the potential for denial of service (DoS) attacks in version 2.16, and the severity for the vulnerability has been rated as high. Version 2.16, in turn, had fixed an issue with the version 2.15 patch for Log4Shell that did not completely address the RCE issue in some configurations. The initial vulnerability could be used to enable remote execution of code by unauthenticated users. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
"
15,906
2,021
"Microsoft: Ransomware 'access brokers' now exploiting Log4j vulnerability | VentureBeat"
"https://venturebeat.com/2021/12/15/microsoft-ransomware-access-brokers-now-exploiting-log4j-vulnerability"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft: Ransomware ‘access brokers’ now exploiting Log4j vulnerability Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Microsoft said it has observed multiple cybercriminal groups seek to establish network access by exploiting the vulnerability in Apache Log4j, with the expected goal of later selling that access to ransomware operators. The arrival of these “access brokers,” who’ve been linked to ransomware affiliates, suggests that an “increase in human-operated ransomware” may follow against both Windows and Linux systems, the company said in an update to a blog post on the critical Log4j vulnerability, known as Log4Shell. Nation-state activity In the same post, Microsoft also said it has observed activity from nation-state groups—tied to countries including China, Iran, North Korea, and Turkey—seeking to exploit the Log4j vulnerability. In one instance, an Iranian group known as Phosphorus, which has previously deployed ransomware, has been seen “acquiring and making modifications of the Log4j exploit,” Microsoft said. “We assess that PHOSPHORUS has operationalized these modifications.” The development has followed shortly after the first instances of ransomware payloads exploiting Log4Shell were disclosed. Security researchers at Bitdefender observed an attempt to deploy a new strain of ransomware, Khonsari, using the Log4Shell vulnerability that was revealed publicly last Thursday. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Researchers have also told VentureBeat that they’ve observed attackers potentially laying the groundwork for launching ransomware in a range of ways, such as deploying privilege escalation tools and bringing malicious Cobalt Strike servers online, in recent days. Cobalt Strike is a popular tool for enabling remote reconnaissance and lateral movement in ransomware attacks. Microsoft itself, on Saturday, had reported seeing the installation of Cobalt Strike through the exploitation of the Log4j vulnerability. ‘Ransomware-as-a-service’ Now, Microsoft said it has observed activities by cybercriminals aimed at establishing a foothold inside a network using Log4Shell, with the expectation of selling that access to a “ransomware-as-a-service” operator. 
In the blog post update, Microsoft’s threat research teams said that they “have confirmed that multiple tracked activity groups acting as access brokers have begun using the vulnerability to gain initial access to target networks.” “These access brokers then sell access to these networks to ransomware-as-a-service affiliates,” the Microsoft researchers said in the post. The researchers noted that they have “observed these groups attempting exploitation on both Linux and Windows systems, which may lead to an increase in human-operated ransomware impact on both of these operating system platforms.” Ransomware-as-a-service operators lease out ransomware variants to other attackers, saving them the effort of creating their own variants. A growing threat According to a previous report from Digital Shadows, “initial access brokers” have had a “growing role” in the cybercriminal space. “Rather than infiltrating an organization deeply, this type of threat actor operates as a ‘middleman’ by breaching as many companies as possible and goes on to sell access to the highest bidder – often to ransomware groups,” Digital Shadows said. Sean Gallagher, a senior threat researcher at Sophos, told VentureBeat on Tuesday that he has been expecting to see targeted efforts to plant backdoors in networks, including by access brokers who would then sell the backdoor to other criminals. “And those other criminals will inevitably include ransomware gangs,” Gallagher said. At the time of this writing, there has been no public disclosure of a successful ransomware breach that exploited the vulnerability in Log4j. Widespread vulnerability All in all, researchers said they do expect ransomware attacks to result from the vulnerability in Log4j, as the flaw is both widespread and considered trivial to exploit. Many applications and services written in Java are potentially vulnerable to Log4Shell, which can enable remote execution of code by unauthenticated users. Researchers at cybersecurity giant Check Point said they’ve observed attempted exploits of the Log4j vulnerability on more than 44% of corporate networks worldwide. “We haven’t necessarily seen direct ransomware deployment, but it’s just a matter of time,” said Nick Biasini, head of outreach at Cisco Talos, in an email Tuesday. “This is a high-severity vulnerability that can be found in countless products. The time required for everything to be patched alone will allow various threat groups to leverage this in a variety of attacks, including ransomware.” The vulnerability comes with the majority of businesses already reporting that they’ve had first-hand experience with ransomware over the past year. A recent survey from CrowdStrike found that 66% of organizations had experienced a ransomware attack in the previous 12 months, up from 56% in 2020. And the average ransomware payment has surged by about 63% in 2021, reaching $1.79 million, the report said. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. 
All rights reserved. "
15,907
2,021
"Top 5 tips to make your webinars buzzworthy | VentureBeat"
"https://venturebeat.com/2021/12/10/top-5-tips-to-make-your-webinars-buzzworthy"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Lab Insights Top 5 tips to make your webinars buzzworthy Share on Facebook Share on X Share on LinkedIn This article is part of a Virtual Events Insight series paid for by Cvent. When it comes to webinars, the bar for what your audience expects has been raised. From the initial email invite, to producing the presentation, to the post-show follow-up, marketers must seek to create moments that set your webinars apart and drive deeper engagement. Here are 5 tips to make your webinars absolutely buzzworthy. 1. Experiment with formats Is your webinar format interesting to your audience? Knowing your audience and what they are looking for in a webinar is what’s going to get them to show up, lean in, and tell others that you are creating engaging content worth watching. Ask yourself the following format options to see if there is room to shake things up in your program: How long should your webinar last? Could a 30-minute webinar the best option? Would a short webinar with 10 minutes of hyper-specific content with 10 minutes of Q&A be a good fit? What about having small/moderated discussion groups with capped attendance? Does your audience prefer panels, hearing directly from your CEO, or something else? What about a webinar series? Keep people looking forward to your next related session, and keep them on an on-going engagement journey 2. Brand your space It may seem simple but a surprising number of people neglect this step — making sure people know where they are and who’s talking to them. For example, within Cvent Attendee Hub presenters have the capability to set a theme and brand, create a lower thirds banner, as well as place logos within the presentation’s screen using the Cvent Studio. If your current webinar platform does not have these capabilities there are other things you can do. Set up an LED light in your brand colors, put a branded mug in your background, wear a company shirt that displays the logo, or use a virtual background. Take a minute and think about your personal “set” as webinars have evolved to have more intimacy and authenticity. We’re all at home and are getting a behind-the-scenes look into each other’s lives. Make sure that you are a positive representation of your brand and the people who work with you. 3. Interact with your audience Another seemingly simple tip, but always worth stating — interact with your audience. Don’t be afraid to jump into the chat, ask questions, or conduct polls. If done well, all these elements can add extra layers of engagement to your presentation. 
In-person events have a clear separation between speakers and their audience that virtual events don’t. Chat, Q&A, and polls allow you to be in the moment with people who want to hear from you and create meaningful connections that they will remember for a long time. It’s important to make yourself available as a reliable, relatable resource and break down the walls that keep people from making real, trusting connections with the people behind the brands. 4. Give away great content By giving away great content you get to: Position yourself as resource, thought leader, and industry expert Build reciprocity with your audience Score your attendees engagement and put them on a thoughtful and relevant journey after your webinar; you’ll be more likely to earn their business. Follow up after your webinars by thanking people for attending and offer them not only a recording of the session, but also helpful videos, eBooks, and resources to further their understanding of the topic. 5. Create can’t-miss moments Though there are plenty of advantages with pre-recording, if your audience is willing to take time out of their day to come and listen, then they are worth showing up for. A consequence of people realizing that more and more webinars are pre-recorded, is that they lose the motivation to login on time or at all. At one point or another, or even at multiple points, we’ve all seen a webinar invite and thought “I’d love to learn more about that, but I’m not going to sit through another hour on Zoom… I’ll sign up and let them send me the recording.” Then of course, we never watch the recording. There are ways to combat this. Create and promote can’t-miss moments, like a live Q&A with your CEO, a live critique of something submitted by an audience member, a giveaway, access to a free content for showing up, whatever it is, you might start to see an increase in your webinar attendance. When optimizing your webinar, remember to give yourself time for trial, error, and testing to see what’s working. Nothing is a quick fix and maybe you won’t be buzzworthy overnight, but if you can commit to a new way of thinking, and implement these best practices, the tide will start to turn. Ready to take your webinars to the next level? Learn more with Cvent’s eBook, Next Level Webinars. VB Lab Insights content is created in collaboration with a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected]. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,908
2,021
"Post-event activation: A guide to success | VentureBeat"
"https://venturebeat.com/2021/12/16/post-event-activation-a-guide-to-success"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Lab Insights Post-event activation: A guide to success Share on Facebook Share on X Share on LinkedIn This article is part of an Events Insight series paid for by Cvent. Do you ever struggle with collecting event data and then using that data to inform and help prioritize your post-event lead follow up? If so, you’re not alone. Here are tips to help ensure your success, from those who have been in your shoes. The meeting and events team for Cvent (which is a leading provider of event marketing and management tech) oversees a significant number of events each year. This team is also responsible for planning the company’s annual user conference, Cvent CONNECT. In 2021 for the first time, this team known as The Connectors, took this flagship event hybrid. In this article we’ll get an insider’s perspective on how they went about activating leads from this event as well as tips you can use for your own event program. Tip #1: Understand the data At Cvent there are three stages in which we categorize our data collection: pre-event, during the event, and post-event, and we’d recommend you do the same. Before the event Pre-event, both segmentation and registration-related data can be captured. Segmenting your audience by interests and demographics allows for you to deliver relevant content and experiences at your event. During the event During the event, there are two groups of data to focus on collecting. First, attendance and sessions will help provide critical information about what is resonating with your audience. The second group of data relates to the engagement elements that let you know which activities and topics your attendees are interested in and most engaged with. After the event The post-event data includes engagement scores, survey responses, event cost, revenue, and the ROI. This will let you know if your event was successful, what worked well, and key areas to focus on when it comes to making improvements. Tip #2: Collect data purposefully For Cvent CONNECT, we dedicated a lot of time to curating the data we collect to ensure we could turn the knowledge from every conversation and footprint throughout the event into the next step for sales. We worked closely alongside the sales team to make sure we asked the right questions during registration, got the correct people registered, and knew who we were working with prior to the event. Tip #3: Measure two sets of goals Creating a hybrid event means you have access to more data and insights. That said, you also need to weigh the value of in-person and virtual data differently. 
Cvent CONNECT for example, went through a methodical process to identify how to score engagement throughout the event and assign different scores for attending an in-person session versus a virtual session. Tip #4: Enlist help Don’t be afraid to look to your internal teams, technology, and community to help produce a great event. Call on your marketing operations team With the help of both the event marketing and sales teams, our marketing operations team was able to simplify event processes and make sense of event data and proving ROI. The team assisted with multiple out-of-the-box integrations before, during, and after the event while also helping orchestrate the use of various marketing technologies. Tip #5: Use technology to transform data into opportunity Use engagement scoring The use of a single platform capable of engagement scoring allows for you to create a detailed profile of each attendee in addition to the community you’re hosting. This gives you a clear representation of the attendee journey in its entirety. Your marketing and sales teams will also be able to curate different follow-up messages by assigning an engagement score. Create a reporting framework It is essential to have a technology partner with robust reporting capabilities due to the daunting amount of data that is collected at hybrid events. Tip #6: Rely on your community In this new landscape it’s important to get your hands on whatever resources you can to help you gain confidence. At Cvent, we’re gathering whatever knowledge we can and handing it off to our peers, like with this blog post! We would also love to share even more tactical how-to insights with you. So, if you’re up for it, read Cvent’s eBook Keeping Up with the Connectors for a full chapter focused on post-event activation , as well as other chapters covering all the details on how we took Cvent CONNECT to a hybrid event format. VB Lab Insights content is created in collaboration with a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected]. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,909
2013
"Shadow IT: Why companies are exposing your data -- and what to do about it | VentureBeat"
"https://venturebeat.com/2013/12/23/shadow-it-why-companies-are-exposing-your-data-and-what-to-do-about-it"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Shadow IT: Why companies are exposing your data — and what to do about it Kent Christensen Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Kent Christensen is practice director for cloud and virtual data centers at Datalink. The race to cloud computing is exposing private customer information and sensitive corporate data on an unprecedented scale. The demand for quicker and cheaper application development is driving this trend. Companies are moving at breakneck speed to produce applications that offer a competitive edge and business results. As CMOs crack the whip behind developers, app teams choose to buy cloud services without the knowledge or input of IT, the traditional guardians of data security. As a result, the public cloud has become the sandbox for development. And as customer information and sensitive corporate data is poured into this sandbox, IT is beginning to lose control of its company’s data assets. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! As an IT adviser at Datalink, I have witnessed this rise of shadow IT. It comes from pressure to develop apps fast and a myth that the cloud is a Swiss Army knife suited for every application. In reality, not all apps and related data should go on the public cloud. Preventing this abuse of customer data and risk to corporate security requires not merely a change in thinking, but rather the transformation of IT into a true service provider that uses the cloud with the precision of a scalpel. Levels of data exposure Corporations are struggling to keep files stored on public clouds secure, and the scale of data exposure is shocking. “ Avoiding the Hidden Costs of Cloud 2013 ,” a report by Symantec, surveyed 3,236 business and IT executives from 29 countries on their use of the cloud. Among companies that reported “rogue cloud deployments,” like the app developer’s sandbox,“40 percent experienced the exposure of confidential information, and more than a quarter faced account takeover issues, defacement of web properties or stolen goods or services.” A further 40 percent of surveyed organizations had lost data in the cloud, and 68 percent of these organizations experienced recovery failures. Finally, another 23 percent of organizations had been fined for privacy violations in the cloud within the past 12 months Providers of cloud services are not necessarily to blame. 
In March, Threatpost reported that Rapid7, a vulnerability management firm, analyzed the security of files stored on Amazon S3. Their researchers found that of 12,328 buckets (essentially files containers) owned by Fortune 1000 companies, 1,951 had somehow been reset from a private to public setting, exposing more than 126 billion files. The problem was not Amazon’s fault, the researchers determined, but rather mismanagement by companies and their third party vendors. Among a random sample of 40,000 exposed files, more than half could be used to breach corporate network or offered for sale on the black market. The cloud-blockers Despite these known risks, the flight to the cloud continues, and it is part of what I call a “Swiss Army knife” approach to cloud computing, which holds that cloud is now faster, cheaper, and better for everything, so every system should be cloud-based. The love of cloud is also motivated by IT departments, which have gained a reputation within their organizations for being high spenders, gatekeepers, and enemies of the cloud who slow down all development with security concerns. When a marketing team wants Dropbox and IT says, “No, it’s non-compliant, and we can’t risk leaking corporate data,” IT perpetuates this myth — even if they have good reasons for not using Dropbox. Instead, IT departments need to ask, “Why do you need Dropbox?” If the answer is for smoother collaboration, then it is on IT to find a file-sharing and collaboration platform that can exist strictly within corporate data centers. Saying “no” just drives other departments to find their own solutions, which are clearly fraught with risk. The scalpel approach If rogue cloud deployments are risky but departments legitimately need cloud or cloud-like features, then problem solving is the future of IT — not yes-or-no answers. To keep app development teams agile yet reverse the exposure of sensitive data, corporations need to replace Swiss Army thinking with a scalpel approach, which holds that cloud deployments are precise means to specific ends. The result will be a hybrid of in-house (private) and public cloud solutions. CRM software like Salesforce.com, for instance, is a precise use of the cloud. It makes a lot of sense to enable mobile sales teams to access customer information outside of a business’s four walls. Google, Microsoft and Amazon offer virtual environments that are specifically designed for secure development and testing of applications. So here’s my advice to IT: to actually use a scalpel approach, you need to be one step ahead of marketing, sales, finance and all the other departments that otherwise are going to circumvent you—and risk corporate data and customer information—if you say no to their request. You need to vet and curate solutions that you trust and are able to monitor. Let development teams practically order them off a menu and deploy them as quickly as the CMO hopes. If you can get ahead of other department’s needs — of your client’s needs — you have an opportunity to be seen as a strategic partner and value creator instead of a digital policeman, and you have an opportunity to control the race to the cloud. Your future hinges on you evolving into a cloud services broker. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
"
15,910
2021
"Report: IT security teams struggle to mitigate vulnerabilities | VentureBeat"
"https://venturebeat.com/2021/12/18/report-it-security-teams-struggle-to-mitigate-vulnerabilities"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: IT security teams struggle to mitigate vulnerabilities Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Vulcan Cyber ‘s latest research into vulnerability risk prioritization and mitigation programs found that IT security teams are struggling to transition from simple vulnerability identification to meaningful response and mitigation. Because of this, business leaders and IT management professionals are constrained in their ability to gain the important insights needed to effectively protect valuable business assets, rendering vulnerability management programs largely ineffective. Risk without business context is irrelevant. The survey found that the majority of respondents tend to group vulnerabilities by infrastructure (64%), followed by business function (53%) and application (53%). This is concerning as risk prioritization based on infrastructure and application groupings without asset context is not meaningful. The inability to correlate vulnerability data with actual business risk leaves organizations exposed. The vast majority of decision-makers reported using two or more of the following models to score and prioritize vulnerabilities: the common vulnerability scoring system (CVSS) at 71%, OWASP top 10 (59%), scanner reported severity (47%), CWE Top 25 (38%), or bespoke scoring models (22%). To deliver meaningful cyber risk management, a bespoke scoring model that accounts for several industry-standard scoring systems is ideal and most efficient. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The more control over risk scoring and prioritization a security team has, the more effective they can be in mitigating cyber risk. But there is no industry-wide framework for risk-based vulnerability management, which means cyber hygiene continues to fall short and vulnerabilities continue to generate risk. Sensitive data exposure was ranked as the most common enterprise concern resulting from application vulnerabilities, as reported by 54% of respondents. This was followed by broken authentication (44%), security misconfigurations (39%), insufficient logging and monitoring (35%), and injection (32%). Respondents also indicated that the MS14-068 vulnerability, otherwise known as the Microsoft Kerberos unprivileged user accounts, was the most concerning vulnerability to their organizations. 
Interestingly, this vulnerability was called out over more high-profile vulnerabilities such as MS08-067 (Windows SMB, aka Conficker, Downadup, Kido, etc.), CVE-2019-0708 (BlueKeep), CVE-2014-0160 (OpenSSL, aka Heartbleed), and MS17-010 (EternalBlue). Since this survey was conducted earlier this year, the Log4J or Log4shell vulnerability announced this week was not reflected in the report data. However, Vulcan Cyber is seeing how easy it is to exploit this vulnerability, with ransomware continuing to be a favorite playbook. This, yet again, underscores the importance of collaboration between business leaders and IT teams to effectively reduce cyber risk to their organizations through ongoing cyber hygiene efforts and well-executed vulnerability management programs. Vulcan Cyber’s report is based on a survey of more than 200 enterprise IT and security executives conducted by Pulse. Read the full report by Vulcan Cyber. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,911
2016
"How Nokia broke into virtual reality with its Ozo camera | VentureBeat"
"https://venturebeat.com/2016/12/02/how-nokia-broke-into-virtual-reality-with-its-ozo-camera"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How Nokia broke into virtual reality with its Ozo camera Share on Facebook Share on X Share on LinkedIn Ozo leader Guido Voltolina at Nokia. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Nokia has been searching for new businesses to break into ever since it retreated from the smartphone business. And after a few years of research, the Finnish company decided to move into virtual reality 360-degree capture cameras. The company launched its groundbreaking Ozo in March for $60,000, and then it cut the price to $45,000 in August. It is now shipping the devices in a number of markets, and it is rolling out software and services to stoke the fledgling market for VR cameras. We talked with Guido Voltolina, head of presence capture Ozo at Nokia Technologies, at the company’s research facility in Silicon Valley in Sunnyvale, California. Voltolina talked about the advantage the Ozo has in capturing and processing a lot of data at once, and he talked about the company’s plans for expansion in VR. Here’s an edited transcript of our interview. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Above: Ozo in action VentureBeat: Tell me why you moved into making the Ozo VR cameras. Guido Voltolina: The whole project and division is called Presence Capture. The idea is that, as soon as we identified VR was coming — this was before the Oculus acquisition by Facebook — it was clear that one part of VR would be computer-generated experiences, games being the major example. But as we looked at it, we said, “Wait a minute. If this is a new medium, there will be more than just computer-generated experiences. People will want to capture something — themselves, their life, things happening in the world.” We had to look at what would be the device that could capture as much data as possible in order to reproduce the sense of presence that VR allows you to have when you’re fully immersed. As a subset of VR, you also have 2D 360 images. That’s still happening. But that’s almost a side effect of solving the major problem we have to solve, these full three-dimensional audiovisual experiences that reproduce the sense of “being there.” The team started thinking about a device purpose-built for that task. Instead of duct-taping different existing cameras into a rig — many people have done that — we designed a device specifically for the job. The Ozo is not a good 2D camera, but it’s an excellent VR camera. 
The shape ended up being the same as a skull, very similar dimensions, with the same interocular distance as a human being. It has eight cameras, and the distance is very close, with a huge overlap in the lens field of course. We’re capturing two layers of pixels to feed the right and left eye with the exact interocular distance you’d have yourself. Many rigs have a much wider distance. That creates a problem with objects that are very close to you in VR. The disparity is too great. With this solution, we then integrated eight microphones, so the spatial audio is captured as the scene is happening. When I’m talking to you here, I have no reason to turn around. In most cases, the only reason we’d turn around is if we heard a loud sound, say from over in that corner. We’re very good at turning exactly at the angle that we thought the sound was coming from, even though we don’t have eyes in the back of our heads. Our ears are very good at perceiving the direction of sound. We integrated both 3D audio and 3D video because the full immersive experience needs both. We’re rarely moved to look around by an object moving around us. The best cue is always sound. The way 2D movies tell you a story, they know you’re looking at the screen, and they can cut to a different image on the screen as they go, or zoom in and out as a conversation goes back and forth. In VR the audio is the part that has to make you turn to look at someone else or something else. The concept is capturing live events. People can go to a place that’s normally not accessible to them for whatever reasons — financial reasons, distance, or maybe it doesn’t exist anymore. If something goes crazy and the pyramids in Egypt are destroyed, we’ll never see them again. But if there’s a VR experience of the pyramids, it would be like walking around and seeing the real thing. You can think of it like a time machine aimed at the past. You capture events and then you can go back and revisit them. In 20 years your son or daughter could revisit today’s Thanksgiving dinner, exactly as you did. Above: Ozo in a box VB: Why is this a good move? Voltolina: It’s very similar to what happened with pictures and video. The first black and white photographs were only accessible to a few. Wealthy people would have family pictures once a year. Now we all have a high-resolution camera in our phones. Video came along and people would hire someone to film a wedding, maybe. Then VHS and digital cameras arrived. But the one doesn’t replace the other. Pictures didn’t replace words and video didn’t replace pictures. We still text. We still share pictures. We still post YouTube videos. Different media for different things. VR is just another medium. Being a new medium, we focus on how to capture real life in VR. With that, we also have to consider the technology related to carrying and distributing data for playback. After the Ozo we created Ozo Live and Ozo Player. These are software packages we license to companies in order for them to build their own VR players with a higher quality, or to live stream the signal that’s captured by multiple Ozo cameras. We were at the Austin City Limits concert, for example. A production company there had, I believe, eight Ozos distributed in various positions around the stage. It’s not just one camera. That’s what we were trying at the beginning — the front-row experience, which is great — but I want to go to places I can’t normally access, right? I want to be on stage up there next to Mick Jagger or whoever. 
I can squeeze thousands of people up there next to him now. In real life, you just couldn’t do that, no matter how much you pay. Above: Ozo has eight different cameras for VR capture. VB: How does it differ from the other 360 cameras out there? Facebook showed off a design for one as well. Voltolina: The majority of the solutions you see announced are a combination of multiple camera modules. Either they have SSD cards or cables. But there’s one SSD card or one cable per camera. If a camera has 25 modules you’ll have 25 SSD cards. When you shoot, you don’t really see what you’re shooting through the camera. Then you have to export all the data, stitch it together, and see what comes out. One of the big differences with Ozo is that, yes, there are eight cameras synchronized together, but we created a brain that takes all this data and combines it in real time. Ozo’s output is one single cable going into either your storage or a head-mounted display. You can visualize what the camera is seeing and direct from its point of view in real time. It’s like a normal viewfinder. For VR cameras, to be able to see what the camera is shooting in real time is key differentiator. The other key characteristic is that it can operate as a self-contained device with a battery and just one internal SSD card. You can mount it on a drone, on a car, in different situations where you need flexibility and the size has to be compact. It’s about the size of a human head. The unobtrusive design is a big advantage. Some of these rigs with 16 or 25 cameras become quite invasive. If you want to capture multiple points of view — let’s say you have a rig with 16 cameras, even small ones like GoPros. What if you need seven of those? What if you need to assemble a hundred and some cameras? One of them might malfunction or fail to synchronize or something. Once you start demanding large numbers of cameras, the delta becomes significant. 1 2 3 4 View All Join the GamesBeat community! Enjoy access to special events, private newsletters and more. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,912
2018
"VR can already help people heal -- and it's just the beginning | VentureBeat"
"https://venturebeat.com/2018/02/10/vr-can-already-help-people-heal-and-its-just-the-beginning"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest VR can already help people heal — and it’s just the beginning Share on Facebook Share on X Share on LinkedIn The Parkinson’s virtual support group members have been meeting in the virtual world, Second Life, for seven years Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Fran gracefully glides around the grand ballroom, sparkling pink ball gown flowing at her heals and the firm grip of her son’s arm around her waist. They are surrounded by friends and family as they elegantly move around the room in perfect harmony, looking as though they must have practiced for hours. Fran is celebrating her 90th birthday in style, and although Parkinson’s disease has limited her mobility over the last decade, today technology is enabling the joy of movement she knew when she was 20. “Memories are real. If you’re dancing in a ballroom in a virtual world or in a ballroom in Portland, Oregon — you were dancing in a ballroom. It was an experience,” Donna Z. Davis, Ph.D, the director of the strategic communications program at the University of Oregon. She witnessed the power of virtual environments to heal and help real people like Fran. “This is not about replacing, it is about augmenting. It’s technological augmentation in a way that provides for them beyond the capabilities of the physical world. So somebody without legs or with Parkinson’s can go dance. Someone who lives in isolation can have a social life.” Power to heal Davis has been working in the virtual reality space for over 10 years. The last three years her focus, through the support of a National Science Foundation grant, has been studying embodiment in VR spaces and the role that the body plays in shaping the mind. Her findings along with the results of several other studies indicate that there is a link between our physical selves and our digital selves, or avatars. What we see our bodies do on screen can positively impact what our bodies can do in the real world. Davis was first introduced to this phenomenon while working with Fran and her daughter, Barbie. As Fran enjoyed navigating her virtual world with ease, she began to have the confidence to do more physically demanding tasks in the physical world. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! 
After meeting Fran and Barbie, Davis and her colleague, Tom Boellstorff at UC Irvine, were invited to join the newly formed virtual support group for others suffering from Parkinson’s. They have been meeting virtually for over seven years and Fran has developed a following of support group participants that refers to this healing power of virtual reality as “the Fran effect.” Although Davis primarily works with the “ability diverse” or those who are challenged in both visible and invisible ways, she believes the benefits are not limited to this population. “How many of us are trapped inside a body or a place that doesn’t allow us to really live our lives in a way that we feel capable of? These technologies may open those doors in really exciting ways.” Above: Donna Davis, far right, celebrates Fran’s 90th birthday with members of their virtual support group New technologies, new opportunities While much of her work over the last decade was done in a 3D environment on a 2D screen, Davis is pioneering therapeutic applications in the more immersive social 3D platforms like Sansar and High Fidelity. While this new medium provides increased immersion and freedom from physical limitations, it also provides additional accessibility challenges. Currently these platforms don’t rely on text chat and instead use voice technology as the primary means of communication. This makes it difficult for someone with speech and hearing impairments to use the platform successfully. Hand controllers coupled with physical movement are also required to navigate these virtual spaces, which is impossible for those suffering from debilitating physical conditions. However, Davis and her research partner and cultural anthropologist, Tom Boellstorff, have been working with the teams developing these platforms to help ensure they support the needs of their users. Above: Tom Boellstorff (center) and Cecii Zapien helps Cody with a headset and controllers in order to experience the 3D virtual world. They are accompanied by Linden Lab executive, Bjorn Laurin. Davis and Boellstorff recently visited Linden Lab , Second Life’s creators, to try to co-opt these new immersive tools for the unique needs of their research population. They were accompanied by Cody, a man who has suffered from severe physical challenges with cerebral palsy resulting from a tragic childhood accident. Cody can’t move his hands or arms which would typically render hand controllers useless, however Cody’s caregiver placed the ‘hand’ controller on Cody’s foot allowing him to experience, for the first time, his real body ‘moving’ his 3D avatar’s arms. Caught on film as part of an upcoming documentary entitled “Our Digital Selves ,” Cody’s joy of experiencing this type of movement was undeniable. The kicking movement required to move his avatar’s arms not only produced a feeling of joy, it is also a vital part of the work he does on a regular basis with this physical therapist. Davis believes making something seen as a chore, such as physical therapy, a joyful experience can be a powerful motivator. “Immersive environments can help motivate patients to do painful or difficult physical therapy movements. Make it something that’s fun, make it joyful. How do you create an opportunity that gets people to go beyond themselves in healthy and supportive ways? Using the virtual world for physical therapy can help create that opportunity.” Above: Donna Davis and Cody during their recent visit to Linden Lab. 
Additional research Since Davis began her pioneering work almost a decade ago, there have been many additional studies linking virtual reality with healing outcomes and pain management. Several studies have focused on using virtual technologies to help with chronic pain and conditions such as ‘phantom limb pain’ often experienced by amputee patients. One such study determined that VR can “trick” the brain into believing the patient is using the limb in the virtual environment, thereby alleviating the sensory conflict of not having use of the limb in the real world. The increased sense of presence and immersion afforded by newer VR technologies can often be enough of a distraction to help patients manage painful conditions without the use of highly addictive pain medications. This fact has made some established medical institutions in the US slow to ratify the new methods for fear of alienating the powerful pharmaceutical lobby. Commercial opportunities Given the amount of new research showing the potential of VR to heal both emotional and physical conditions, it’s no surprise that many innovative VR companies not bound by traditional methods have stepped up to help find new solutions to old problems. One of the most successful applications is the use of VR to treat PTSD. Virtually Better , a company that Dr. Skip Rizzo and his team out of UCLA founded, developed a simulation that would re-create the conditions that Iraq war veterans experienced. “Virtual Iraq” proved successful, helping treat over 70 percent of PTSD sufferers, and that has now become a standard accepted treatment by the Anxiety and Depression Association of America. They also support applications of VR-based therapy for aerophobia, acrophobia, glossophobia, and substance abuse. Another U.S.-based company, Firsthand , has developed a platform to help manage chronic and acute pain. The 3D immersive, game-like environment uses bio-feedback sensors to help patients regulate physical activities, like breathing, in order to calm the mind and promote mindfulness. Their website claims that “patients can use a technology solution for pain management with no pharmaceutical side effects.” Physical and occupational therapy is another field that benefits from the advancements in VR technology. Companies like Mindmaze and VRHealth offer platforms that help practioners’ administer various types of VR physical therapy treatments. MindMotion, developed by Mindmaze, creates virtual environments therapists can customize for a patient’s preferences and needs. These virtual enhancements motivate them to be more consistent and get the most from their prescribed exercise programs. The platform also allows for real-time multisensory feedback, so patients can monitor their own performance. There are also several companies building platforms to help therapists and counselors leverage these new technologies within their private practice. Limbix offers clinicians a ‘plug and play’ VR therapy solution and Psious offers a monthly subscription package that includes VR therapy training, a platform enabling VR sessions with clients, marketing support and client session reporting. What’s next? We are just beginning to understand the true potential for immersive, VR environments to change how we think and feel. There are those who fear the negative implications of these hyper-real environments and worry they will replace the physical world. Davis sees the virtual world not as a replacement for the physical world but as an enhancement. 
“That’s the thing about our work that I love most, is that we’re forcing people to look at the positive potential for virtual reality — maybe not even as positive, but normative — as opposed to the dystopic narrative most commonly represented.” Davis believes there is great potential for VR to help revolutionize the health care, retail and fitness industries but more importantly she is hopeful it will transform our values as a society. VR social spaces can help remove cultural, racial, gender and economic barriers that prejudice our interactions in the real world. “When do we start to value somebody’s mind and heart? I think in the VR space you begin to place a value on their mind and their heart rather than physical beauty because those are the things that are driving your interaction with that person.” Lisa Peyton is an immersive media strategist and media psychologist focusing on the business applications of new technologies. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. Join the GamesBeat community! Enjoy access to special events, private newsletters and more. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,913
2021
"What Apple's first mixed reality headset will mean for enterprises | VentureBeat"
"https://venturebeat.com/2021/01/21/what-apples-first-mixed-reality-headset-will-mean-for-enterprises"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What Apple’s first mixed reality headset will mean for enterprises Share on Facebook Share on X Share on LinkedIn Apple's first mixed reality headset will apparently resemble Facebook's Oculus Quest 2, but with much greater horsepower for enterprises. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Over the past five years, the clear trend in mixed reality headsets has been “smaller, better, and more affordable,” a process that has yielded multi-million-selling success stories such as Sony’s PlayStation VR and Facebook’s Oculus Quests , alongside an array of niche headsets targeted largely at enterprises. For consumers, the pitch has been simple — wear this headset and teleport to another place — but for enterprises, particularly data-driven ones, adoption has been slower. High prices, narrower use cases, and “build it yourself” software challenges have limited uptake of enterprise mixed reality headsets, though that hasn’t stopped some companies from finding use cases, or deterred even the largest tech companies from developing hardware. Apple’s mixed reality headset development has been an open secret for years , and its plans are coming into sharper focus today, as Bloomberg reports that Apple will begin by releasing a deliberately niche and expensive headset first, preparing developers and the broader marketplace for future lightweight AR glasses. This is similar to the “early access launch” strategy we suggested one year ago, giving developers the ability to create apps for hardware that’s 80% of the way to commercially viable; high pricing and a developer/enterprise focus will keep average consumers away, at least temporarily. For technical decision makers, today’s report should be a wake-up call — a signal that after tentative steps and false starts, mixed reality is about to become a big deal, and enterprises will either need to embrace the technologies or get left behind. Regardless of whether a company needs smarter ways for employees to visualize and interact with masses of data or more engrossing ways to present data, products, and services to customers, mixed reality is clearly the way forward. But the devil is in the details, and Apple’s somewhat confusing approach might seem daunting for some enterprises and developers. Here’s how it’s likely to play out. 
Mixed reality, not just virtual or augmented reality Virtual reality (VR) and augmented reality (AR) are subsets of the broader concept of “mixed reality,” which refers to display and computing technologies that either enhance or fully replace a person’s view of the physical world with digitally generated content. It’s easy to get mired in the question of whether Apple is focusing on VR or AR, but the correct answer is “both,” and a given product will be limited largely by its display and camera technologies. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! At this point, Apple reportedly plans to start with a headset primarily focused on virtual reality, with only limited augmented reality functionality. This sounds a lot like Facebook’s Oculus Quest , which spends most of its time engrossing users in fully virtual worlds, but can use integrated cameras to let users see basic digital overlays augmenting their actual surroundings. It’s unclear what Apple’s intended VR-to-AR ratio will be for customers, but the company has repeatedly said that it views AR as the bigger opportunity, and if the headset’s being targeted at a high price point, it’s clearly not going to be positioned as a gaming or mass-market entertainment VR product. The initial focus will almost certainly be on enterprise VR and AR applications. It’s worth mentioning that a well-funded startup called Magic Leap favored the term “spatial computing” as a catchall for mixed reality technologies, and though the company had major issues commercializing its hardware, it envisioned a fully portable platform that could be used indoors or outdoors to composite digital content atop the physical world. Apple appears to be on roughly the same page, with a similar level of ambition, though it looks unlikely to replicate the specifics of Magic Leap’s hardware decisions. Standalone, not tethered As Apple’s mixed reality projects have simmered in development, there’s been plenty of ambiguity over whether the first headset would be tethered to another device (iPhone or Mac) or completely standalone. Tethering enables a headset to be lighter in weight but requires constant proximity to a wired computing device — a challenge Facebook’s Oculus Rift tackled with a Windows PC, Magic Leap One addressed with an oversized puck, and Nreal Light answered with an Android phone. Everyone believes that the eventual future of mixed reality is in standalone devices, but making small, powerful, cool-running chips that fit inside “all-in-one” headsets has been a challenge. The report suggests that Apple has decided to treat mixed reality as its own platform — including customized apps and content — and will give the goggles Mac-class processing power and screens “that are much higher-resolution than those in existing VR products.” This contrasts with Facebook, which evolved the standalone Oculus Quest’s app ecosystem upwards from smartphones; Apple’s approach will give enterprises enough raw power on day one to transform desktop computer apps into engrossing 3D experiences. Start planning now for 2022, 2023, and 2024 Apple’s mixed reality hardware timeline has shifted: Back in 2017 , Apple was expected to possibly offer the headset in 2020, a timeline that was still floated as possible in early 2019 , but seemed unlikely by that year’s end as reports instead suggested a 2022 timeframe. 
The timing is still uncertain — Bloomberg today suggests a launch of the mixed reality goggles in 2022, followed by the lightweight AR glasses “several years” from now — but CIOs shouldn’t ignore the writing on the wall. Just like the iPad, which arrived in 2010 and made tablets a viable platform after years of unsuccessful Microsoft experiments with “ tablet PCs ,” companies that quickly took the new form factor seriously were better prepared for the shift to mobile computing that followed. Assuming the latest timeframes are correct, Apple’s approach will be good for enterprises, giving developers at least a year (if not two) to conceive and test apps based on mixed reality hardware, with no pressure of immediate end user adoption. If the goggles sell for $1,000 or $2,000, they’ll appeal largely to the same group of enterprises that have been trialing Microsoft’s high-priced HoloLens or Google Glass Enterprise Edition , albeit with the near-term likelihood of a more affordable sequel — something Microsoft and Google haven’t delivered. Creation and deployment strategies Enterprises already have some software tools necessary to prototype mixed reality experiences: Apple’s ARKit has been around since 2017 and now is at version 4 , with the latest iPad Pro and iPhone 12 Pro models capable of previewing how mixed reality content will look on 2D displays. The big changes will be in how that content works when viewed through goggles and glasses — a difference nearly any VR user will attest is much larger and more impressive than it sounds. If they’re not already doing so, progressive companies should start thinking now about multiple facets of their mixed reality needs, including: The breadth of the business’ headset adoption needs at various price points, including $2,000, $1,000, and $500 A company’s initial development strategy will be very different if the technology will be universally adopted across the workforce, versus only two total employees using headsets due to price or other considerations Some enterprises are already seeing value in bulk purchases of fairly expensive AR headsets, but use cases with ROI are highly industry-specific Strategies for visualizing the enterprise’s existing 2D data, presentations, and key apps in immersive 3D — has someone already figured this out for a given industry or type of data, or does the enterprise need to invent its own visualization? Hiring or training developers with mixed reality app and content creation experience, with an understanding that rising demand for these specialized workers over the next few years may create hiring and/or retention challenges The customer’s role, including: How to enrich the customer experience using virtual and/or augmented reality Customer expectations for using mixed reality given various hardware price points, such as whether it will be temporarily company-supplied (used at a car dealership for visualizing a vehicle) or owned by the customer and used to access company-offered content at random times of the day and night, like web content At this stage, many enterprises will find that there are far more questions and preliminary thoughts on adopting mixed reality technologies than concrete answers, and that’s OK — assuming Apple kicks off a bigger race by launching something next year, there’s still ample time for any company to develop a plan and move forward. 
But now is the time for every company to start thinking seriously about how it will operate and present itself in the mixed reality era, as the only major remaining question isn’t whether it will happen, but when. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,914
2021
"The blockchain-based virtual world that can help usher in the metaverse | VentureBeat"
"https://venturebeat.com/2021/02/04/the-blockchain-based-virtual-world-that-can-help-usher-in-the-metaverse"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Event The blockchain-based virtual world that can help usher in the metaverse Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. One of the essential features of a working metaverse will be the ability to move your avatar and assets seamlessly and instantaneously from world to world. Blockchain may be the thing that makes that possible. And The Sandbox, a world based on the Etherium cryptocurrency, has an approach to perpetual, player-owned tokens that could point the way forward. “The idea of blockchain in the metaverse is to build a new kind of digital asset, to create based on ownership and governance,” said Arthur Madrid, CEO and co-founder of The Sandbox, in a conversation with Dean Takahashi, Lead Writer at GamesBeat. The panel, dubbed “Blockchain and the Metaverse,” was part of the recent GamesBeat event, Into the Metaverse. The current manifestation of The Sandbox , a virtual world where players can build, own, and monetize their own voxel gaming experiences on the Ethereum blockchain, has just those kinds of assets. Users gain ownership of their creations as non-fungible tokens (NFTs), which means the number of items and the history of their ownership can be tracked, and items can be verified as one-of-a-kind. The value of each individual NFT is also defined by the community. Under the ownership of Animoca Brands, the team’s vision is to offer a deeply immersive metaverse in which virtual worlds and games will be created collaboratively and without centralized authority. The Sandbox features three main components: the VoxEdit NFT builder for building what they call ASSETs, the marketplace for buying and selling ASSETs, and the game maker tool where interactive games can be constructed and shared. These components are intended to allow players to create voxel worlds and game experiences, and the ability to safely store, trade, and monetize their creations through blockchain, allowing creators to benefit from their creations and evolving how digital assets in games are understood, at a fundamental level. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! “I think people are truly blown away by the amount of money that players spend in digital assets — hundreds, thousands, and probably millions of dollars spent on digital assets,” he said. 
“I think making those assets NFTs, building an NFT economy, is going to add a new layer on top of the existing digital economy.” Madrid believes that games and worlds built on this style of gameplay will eventually move from a revenue sharing model, where the game company essentially gets a kickback from player creativity, to one where 100% of the revenue will belong to the players. At that point, the value of the metaverse will come from the service you provide players — and in return, word of mouth from players will bring new users into the fold. Plus, for an avatar-centric metaverse, in which people want to create an identity to travel from one virtual world to the next, making that avatar an NFT could be key, Madrid said, pointing at how you could use ERC-1155 to easily import the assets of a player on your metaverse. Because it’s decentralized, he explained, you could potentially use an API to call any of the items a player used in their gaming session. NFT stock and blockchain can also be used to build a governance system attached to a unique mechanism. “That will probably move the very simple gameplay and gameplay mechanics to much more advanced gameplay mechanics that, in my opinion, match up with the concept of the metaverse,” he said. “When you join a metaverse you understand that it comes with a certain kind of gameplay that makes you be part of it, and based on your engagement, based on what you create, based on the personality and the actions that you’re going to achieve inside the metaverse, makes you able to vote for it.” Blockchain technology isn’t mature enough to be used in the creation of the metaverse quite yet; in a world where you’re looking for 60 frames per second, taking minutes to update is a no-flier. But Madrid points out that you wouldn’t currently add blockchain to a game with 250 million players — it’s better to build a blockchain metaverse from scratch. “We are experimenting with this new way to transact with blockchain and there’s a lot to be improved,” he said. “However, crypto communities always compare the blockchain technology to internet in the very beginning. In 1999, nobody could believe that you could use the internet at such a level of speed. As soon as [a technology] is adopted, there is always a solution to make it efficient.” GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,915
2021
"As Meta pushes for the metaverse, it may be a better fit for some, not all | VentureBeat"
"https://venturebeat.com/2021/12/11/as-meta-pushes-for-the-metaverse-it-may-be-a-better-fit-for-some-not-all"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community As Meta pushes for the metaverse, it may be a better fit for some, not all Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Olga Vorobyeva, founder of Vox Consulting In true technical style, there is no single universally accepted definition of the metaverse. Simply put, the metaverse is a vast network of 3D worlds and simulations rendered in real-time for cooperation and participation. This is not a virtual reality experience or a virtual economy with avatars sticking out , but a way to maintain identity, objects, payments, history, and ownership continuity. The metaverse market may reach $783.3 billion in 2024 versus $478.7 billion in 2020, representing a compound annual growth rate of 13.1%, based on studies by Bloomberg with data from Newzoo, IDC, PWC, Statista, and Two Circles. It is set to expand beyond AR and VR content, incorporating live entertainment and taking a share of social-media advertising revenue. According to that same data, the entire metaverse may reach 2.7 times that of just gaming software, services, and advertising revenue. The metaverse enables users to interact virtually; a digital reality that combines various aspects of social media, online games, augmented reality (AR), virtual reality (VR), and cryptocurrency. Augmented reality superimposes visual, sound, and other sensory data into real-world settings to improve the user experience. Virtual reality is entirely virtual, enhancing the fictional reality. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! And the metaverse is already rife with exciting projects, experiences, and platforms. Kalao, for example, is a popular virtual world that combines NFTs, business collaboration, VR experiences, and a marketplace, all built on the Avalanche blockchain. SoftBank (one of the most influential tech companies globally) invested a whopping $93 million in The Sandbox’s metaverse game. And bridging the gap between the real and virtual worlds, Metahero is a 3D-scanning technology that recreates real-world objects, humans included, and recreates them into ultra-HD avatars. According to CoinMarketCap, Metahero already has an operational, 4K public chamber ready for scanning in Doha, Qatar, with plans to place more scanning chambers in Tokyo, Berlin, New York, Seoul, and more. 
MetaHero has ties with Sony (through exclusive partnerships), which is undoubtedly well-placed to bring the metaverse to homes through its PlayStation brand. But all is not well in the metaverse. A black hole is forming, and its name is Meta. In an interview with The Verge , Mark Zuckerberg turned his head, announcing that Facebook would evolve from a social network to a “metaverse company” over the next five years. Much like Google announced that its name was now Alphabet, mainly for accounting reasons, Facebook renamed to Meta, but did so differently. By revealing the change with fanfare, including demos of its Horizon social VR platform, it made Meta seem more than just an exercise in good money management. So while Facebook’s apparent endorsement of the metaverse sounds positive, it may be the worst thing to happen to the industry since its inception. A parallel universe at your fingertips For many, the metaverse is part of a dream for the future of the internet. Partly, it is an accurate way to reflect some current trends in online infrastructure, including the rise of 3D worlds generated in real-time. The metaverse is a collective of byte-based alternatives to the characteristics of the atomized physical world. While many people think of the metaverse as a version of the dystopian book and movie Ready Player One, much of the metaverse will have much more utilitarian value. In the end, we will always be connected to the metaverse by expanding our senses of sight, hearing, and touch and integrating digital elements with the digital world. Neil Stevenson coined the term “metaverse” in his 1992 novel Snowcrash. He introduced a virtual reality world like the internet, which he called the “metaverse” in which users would interact with digital forms of themselves called “avatars.” In fiction, the utopian metaverse can be portrayed as a new frontier at which we can all rewrite social norms and value systems, free of cultural and economic sclerosis. Unlike Ready Player One, which portrays a metaverse in which the natural world gives way to an infinite virtual world, multiple metauniverses in our world are likely to emerge. But equally important to a flourishing metaverse will be building a community that values the contributions of developers and creators who create resources and experiences for most users, whether they are developing a new game, digital objects, or entire virtual worlds. The metaverse will allow users to create their content, distribute it freely in a widely accessible digital world. The metaverse will be an endless world encompassing both physical and virtual worlds. Blockchain technology will ensure the security and public availability of transactions and identities without supervision from governments, corporations, or another regulatory body. And herein lies the problem with Meta and its ambitions. When Tim Berners-Lee invented the World Wide Web, his first message wasn’t “hello Facebook, Google, Alibaba, Tencent, and Amazon.” It was “hello world.” It was supposed to be the internet by the people, for the people. However, we have allowed the internet to become centralized and controlled by a few massive corporations. The risk in this model is extraordinary. So, what’s next for the metaverse? Currently, according to data from Ethernodes, we know that around 70% of Ethereum nodes are running on hosted services or cloud providers , as we commonly call them. 23% of the Ethereum network sits on one provider; Amazon Web Services. 
Now imagine what happens if Jeff Bezos decides to launch AmazonCoin on Amazon's own blockchain and (as I'm confident the terms and conditions would allow) chooses to outlaw Ethereum projects on AWS. While that may not bring about the end of Ethereum, it would undoubtedly have a significant impact on the world's "blockchain app store." So do we believe that Zuckerberg has our best interests at heart? Facebook, Instagram, and WhatsApp collect data from their users constantly. They use that personal information to target advertising at consumers and profit from it without compensating those users. Meta's market cap stands at $935 billion as of November 2021, with 2.91 billion monthly active users, and advertising made up 98.8% of the company's total revenue in 2020. Does anyone believe that Meta will act differently in the near future, embracing the metaverse's true vision? Or will it continue to accumulate money and power by leveraging a new class of personal data: eye tracking, mood, physical movements, and even potential or prevalent mental health issues?
You could argue that this is just a product of late-stage capitalism. After all, cryptocurrencies, which are supposed to be the bastion of the decentralized economy, have fallen into the same trap that fiat has. As with traditional currency, a small number of people own the majority of crypto tokens such as Bitcoin. What could be a reset button for the entire internet, finally delivering on the promise of a system by the people and for the people, is likely to become another centralized ecosystem, with a few corporations in control and governments maintaining the ability to censor content, block access, and put up firewalls to stop citizens from telling their story to the outside world.
The good, the bad, and the unknown
One positive thing Facebook's rebranding does for this ecosystem is that it validates the entire concept of a "metaverse" in the mainstream and, with it, what builders have been doing on Ethereum for years. For Facebook to give this much weight to metaverse products and services, and to rename the entire company around the idea, is a strong signal that we should not ignore, whether you like the company or not. When Mark Zuckerberg took us on a whistle-stop tour of the metaverse, the still-hypothetical next phase of the internet, he showed us a future that delivers a single space fusing digital and physical reality in one platform. Zuckerberg previously described the metaverse as "the embodied internet in which you are in the experience, not just gaze at it," as he put it while walking through the demonstration. And while the global pandemic has heightened interest in the metaverse as more people work from home or remote locations, it is essential that we all not lose sight of what is being proposed and by whom.
There are concerns that the metaverse, under the control of Meta, will become a centralized, dystopian information grab, designed only to serve its parent company and increase its power beyond measure. Suppose we are to create, and virtually live in, a place where we can work, play, learn, create, shop, and interact with friends online. In that case, I want everyone to have a stake in it, gain from it, be compensated for their time and energy, and help each other grow. Left unchecked, all we will do, collectively, is allow Meta to become our keeper, and that is inherently inhuman.
Olga Vorobyeva is the founder of Vox Consulting, a marketing firm for blockchain, DeFi, and NFT startups, and a former head of marketing at SwissBorg, the first crypto wealth management platform and a top-100 project on CoinMarketCap."
15,916
2,021
"Rescale raises $50 million to provide high-performance infrastructure-as-a-service to enterprises | VentureBeat"
"https://venturebeat.com/2021/02/02/rescale-raises-50-million-to-provide-high-performance-infrastructure-as-a-service-to-enterprises"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Rescale raises $50 million to provide high-performance infrastructure-as-a-service to enterprises Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Rescale , a San Francisco, California-based startup developing a software platform and hardware infrastructure for scientific and engineering simulation, has raised $50 million. The company says the funding, which was announced today, will be put toward R&D and expanding the availability of Rescale’s platform. Cloud adoption in the science and engineering community remains largely on-premises, in private datacenters. Massive markets are powered by high-performance compute (HPC), with total annual spend expected to reach $55 billion by 2024. Workloads in the scientific R&D category often benefit from the advantages of hybrid public cloud and on-premises computing. Powerful computers allow researchers to undertake high volumes of calculations in epidemiology, bioinformatics, and molecular modeling, many of which would take months on traditional computing platforms (or years if done by hand). But less than 20% of HPC workloads run in the cloud today. Rescale was cofounded in 2011 by Joris Poort and Adam McKenzie, former aerospace engineers at Boeing who leveraged AI techniques to optimize the 787’s wing structure. At the University of Michigan while studying mechanical engineering and mathematics, Poort had an opportunity to work on an aerospace project, which soon became his passion. After graduating magna cum laude in mechanical engineering and math at University of Michigan, he later graduated magna cum laude in aeronautics and astronautics at University of Washington. At Boeing, Poort’s and McKenzie’s experience building an HPC simulation environment informed Rescale’s business model: an infrastructure- and software-as-a-service hybrid cloud infrastructure platform tailored for HPC, specifically the R&D and IT community. “Industries like aerospace, jet propulsion, supersonic flight all require massive computer simulations based on AI and specialized hardware configurations. Historically the science community has run these workloads on on-premises data centers that they directly built and maintain,” a spokesperson told VentureBeat via email. 
"Rescale was founded to bring HPC workloads to the cloud to lower costs, accelerate R&D innovation, power faster computer simulations, and allow the science and research community to take advantage of the latest specialized architectures for machine learning and artificial intelligence without massive capital investments in bespoke new data centers."
Rescale enables customers to run jobs on public clouds including Amazon Web Services, Microsoft Azure, Google Cloud Platform, IBM, and Oracle. To those customers, Rescale makes available a network spanning 8 million servers with over 80 specialized architectures, with resources such as Nvidia Tesla P100 GPUs, Intel Skylake processors, and over 1TB of RAM, delivering a combined 1,400 petaflops of compute. Whether they use compute from Rescale's infrastructure or from a third-party provider, customers gain access to software that supports more than 600 simulation applications for aerospace, automotive, oil and gas, life sciences, electronics, academia, and machine learning, including desktop and visualization capabilities that let users interact with simulation data whether or not jobs have finished.
Rescale provides both on-demand and long-term environments and pricing structures, allowing customers to launch single batch jobs, advanced optimization jobs, and large designs of experiments. The platform also features an enterprise simulation environment and an administrative portal, along with direct integrations and management of on-premises HPC resources, schedulers, and software licenses. Rescale's file management capabilities support the transfer, organization, and storage of simulation input and output files with unlimited storage. And the company's API and command-line interface enable the porting of applications and programmatic bursting of compute jobs, as sketched below.
Rescale says that over 300 businesses use its hardware and software, among them Amgen, Denso, Airbus, Nissan, Oak Ridge National Labs, Samsung, and the University of Pennsylvania. In 2020, Google and Microsoft kicked off a program with the startup to offer resources at no cost to teams working to develop COVID-19 testing and vaccines. Rescale provides the platform on which researchers launch experiments and record results, while Google and Microsoft supply the backend computing resources.
"Rescale is the first HPC cloud platform created specifically for digital R&D, empowering the research scientists and engineers who are building the future," Poort said in a statement. "Rescale gives engineers simple access to thousands of preconfigured software and hardware profiles, the on-demand capacity of the public cloud provider of their choice, and the ability to focus on R&D outcomes and speeding delivery of new innovation, instead of managing HPC infrastructure."
Hitachi Ventures, Microsoft, Nautilus Venture Partners, Nvidia, Republic Labs, and Samsung Catalyst Fund participated in Rescale's series C announced today. It brings the company's total raised to date to over $100 million, following a $32 million series B round in July 2018.
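Rescale's actual API is not documented in this article, so the following is only a rough, hypothetical sketch of what programmatic job bursting against an HPC platform's REST API might look like. The base URL, endpoint paths, payload fields, and environment variable are illustrative assumptions, not Rescale's published interface.

```python
# Hypothetical sketch only: endpoint paths and payload fields are assumptions,
# not Rescale's documented API. Shown to illustrate programmatic job bursting.
import os
import requests

API_BASE = "https://platform.example.com/api/v2"   # placeholder base URL
TOKEN = os.environ["HPC_API_TOKEN"]                 # assumed auth token
HEADERS = {"Authorization": f"Token {TOKEN}"}

def submit_simulation(input_file: str, analysis_code: str, core_count: int) -> str:
    """Upload an input deck, create a batch job, and submit it."""
    # 1. Upload the simulation input file.
    with open(input_file, "rb") as fh:
        upload = requests.post(f"{API_BASE}/files/", headers=HEADERS,
                               files={"file": fh})
    upload.raise_for_status()
    file_id = upload.json()["id"]

    # 2. Define the job: which solver to run and on what hardware profile.
    job_spec = {
        "name": "wing-load-case-42",
        "analysis": {"code": analysis_code, "input_files": [file_id]},
        "hardware": {"core_count": core_count, "core_type": "hpc-standard"},
    }
    job = requests.post(f"{API_BASE}/jobs/", headers=HEADERS, json=job_spec)
    job.raise_for_status()
    job_id = job.json()["id"]

    # 3. Submit the job for execution on the provider of choice.
    requests.post(f"{API_BASE}/jobs/{job_id}/submit/", headers=HEADERS).raise_for_status()
    return job_id

if __name__ == "__main__":
    print(submit_simulation("wing_mesh.inp", "openfoam", core_count=64))
```

The point of the pattern is simply that a command-line tool or CI pipeline can burst a batch of parameter-sweep jobs to cloud capacity without anyone touching a web console.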
"
15,917
2,021
"Kong raises $100 million for software that scales cloud infrastructure | VentureBeat"
"https://venturebeat.com/2021/02/08/kong-raises-100-million-for-software-that-scales-cloud-infrastructure"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Kong raises $100 million for software that scales cloud infrastructure Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As APIs and microservices become critical tools to drive innovation and automation for a wider range of companies, they are also creating new management challenges. Enterprises are attracted by their potential to create greater flexibility but must find ways to coordinate these cloud-based services. Kong, one of the new breed of companies trying to address this problem, announced today that it has raised $100 million in venture capital at a valuation of $1.4 billion. Tiger Global Management led the round, which also included investment from Goldman Sachs, Index Ventures, CRV, GGV Capital, and Andreessen Horowitz. The latest round comes almost 2 years after Kong raised $43 million. The company has now raised a total of $171 million. In a blog post announcing the funding , Kong CEO and cofounder Augusto Marietti said APIs have become the linchpin for driving the cloud transformation of enterprises. “Many of the conveniences of our digitally connected, modern lives — like using Amazon’s Alexa to play music over your home speakers from your Spotify account, asking your car’s navigation system to find a less congested route recommended by Google Maps or ordering your favorite meal on Doordash — would be impossible without the APIs that connect companies with their vendors, partners, and customers,” he wrote. “Our digital world is becoming fully programmable with two secular trends: cloud traffic and services are growing exponentially each year, creating decentralized but hyper-connected enterprises.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The company began life as Mashape back in 2009 by creating an API marketplace. The company developed Kong, an open source management tool for APIs and microservices in 2015. The product soared in popularity. So the company sold its marketplace business and renamed itself Kong as it focused on building services with Kong at the core. The company’s flagship product, Kong Konnect, is a connectivity gateway that links APIs to service meshes. Using AI, the platform eases and automates the deployment and management of applications while bolstering security. Among its notable customers, Kong claims GE, Nasdaq, and Samsung. 
The company intends to use the latest funding to expand marketing, grow its customer service team, and accelerate product development."
15,918
2,021
"Rocket Software acquires ASG Technologies to boost infrastructure management tools | VentureBeat"
"https://venturebeat.com/2021/04/14/rocket-software-acquires-asg-technologies-to-boost-infrastructure-management-tools"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Rocket Software acquires ASG Technologies to boost infrastructure management tools Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Rocket Software this week announced it intends to acquire ASG Technologies as part of an ongoing effort to expand the reach of its portfolio. Terms of the deal were not disclosed. ASG Technologies is best known for providing a configuration management database (CMDB) that is widely employed in IBM environments. But in recent years, it has expanded its portfolio to include infrastructure management tools, as well as business process management (BPM) and content management software it gained by acquiring Mowbly in 2018 and the acquisition of Mobius Management Systems in 2007. Rocket Software, which is privately held by Bain Capital, has historically focused on middleware and tools that are employed to modernize mainframe environments. With the acquisition of ASG Technologies, the company will expand the scope of its product offerings to include more infrastructure management tools and software further up the application stack, newly appointed Rocket Software president Milan Shetti told VentureBeat. Rocket Software sees the acquisition of ASG Technologies as part of a larger strategy to expand its reach across the enterprise, Shetti noted. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Industry growth The overall size of the mainframe market opportunity, however, continues to grow as the IT platform becomes more tightly integrated with distributed computing platforms running inside and outside of cloud computing environments , Shetti added. In the months ahead, the company will continue looking to expand its portfolio through organic and inorganic acquisitions, Shetti noted. Over the course of its history, Rocket Software has made more than 45 acquisitions, including Zephyr, Shadow, Aldon, and D3. The acquisition of ASG Technologies will present opportunities across a soon-to-be expanded product portfolio, Shetti noted. “We will continue to be an acquisitive company,” he said. Of course, there is no shortage of rival offerings for managing applications and IT infrastructure in what is becoming an extended enterprise computing environment that reaches from public clouds all the way to the network edge. At the same time, the management of data and the applications employed to create that data are becoming more disaggregated. 
As that trend continues, the need for more sophisticated tools that can manage what is evolving into multiple centers of data gravity will become more pressing. One of the areas Rocket Software will invest in is developing the machine learning models needed to automate a wide range of IT management tasks, Shetti said.
The promise of AIOps
In effect, an arms race to build next-generation tools for managing enterprise IT environments — also known as AIOps — is now well underway. There is no consensus on how durable AIOps will remain as a distinct category, given the degree to which all IT management tools will eventually employ machine and deep learning algorithms. But the next generation of IT management platforms will continuously learn about the IT environment as changes and updates are made. These tools are not likely to replace IT administrators as environments grow more complex, but the amount of time spent trying to discover the root cause of a specific IT issue should be sharply reduced. In addition, such platforms will enable IT organizations to compensate for a shortage of IT skills that currently limits the degree to which their environments can scale. In theory, an IT team should be able to leverage AI platforms to manage several orders of magnitude more workloads as IT becomes more automated.
It may be a while before the promise of AIOps is fully realized, but the future of IT management can already be seen in large enterprises. It is just a question of how long it might take for those AI capabilities to be pervasively applied across all enterprises."
15,919
2,021
"Linux Foundation launches open source agriculture infrastructure project | VentureBeat"
"https://venturebeat.com/2021/05/05/linux-foundation-launches-open-source-agriculture-infrastructure-project"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Linux Foundation launches open source agriculture infrastructure project Share on Facebook Share on X Share on LinkedIn Smart farming agricultural technology and smart arm robots harvesting vegetables Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The Linux Foundation has lifted the lid on a new open source digital infrastructure project aimed at the agriculture industry. The AgStack Foundation , as the new project will be known, is designed to foster collaboration among all key stakeholders in the global agriculture space, spanning private business, governments, and academia. As with just about every other industry in recent years, there has been a growing digital transformation across the agriculture sector that has ushered in new connected devices for farmers and myriad AI and automated tools to optimize crop growth and circumvent critical obstacles, such as labor shortages. Open source technologies bring the added benefit of data and tools that any party can reuse for free, lowering the barrier to entry and helping keep companies from getting locked into proprietary software operated by a handful of big players. Founded in 2000, the Linux Foundation is a not-for-profit consortium that supports and promotes the commercial growth of Linux and other open source technologies. The organization hosts myriad individual projects spanning just about every sector and application, including automotive , wireless networks , and security. The AgStack Foundation will be focused on supporting the creation and maintenance of free and sector-specific digital infrastructure for both applications and the associated data. It will lean on existing technologies and agricultural standards; public data and models; and other open source projects, such as Kubernetes, Hyperledger, Open Horizon, Postgres, and Django, according to a statement. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Current practices in AgTech are involved in building proprietary infrastructure and point-to-point connectivity in order to derive value from applications,” AgStack executive director Sumer Johal told VentureBeat. “This is an unnecessarily costly use of human capital. 
Like an operating system, we aspire to reduce the time and effort required by companies to produce their own proprietary applications and for content consumers to consume this interoperably."
Open agriculture
There are a number of existing open source technologies aimed at the agricultural industry, including FarmOS, a web-based application for farm management and planning created by a community of farmers, researchers, developers, and companies. But with the backing of the Linux Foundation and a slew of notable industry stakeholders, the AgStack Foundation is well positioned to accelerate interoperable technologies that are free to use and extend. "Just like an operating system, we feel there will be a whole universe of applications that can be built and consumed using AgStack," Johal added. "From pest prediction and crop nutrition to harvest management and improved supply-chain collaboration, the possibilities are endless."
Members and contributors at launch include parties from across the technology and agriculture spectrum. Among these is Hewlett Packard Enterprise (HPE), which already runs a number of agricultural initiatives, including a partnership with global food security research group CGIAR to help model food systems. Other members include Purdue University/OATS & Agricultural Informatics Lab, the University of California Agriculture and Natural Resources (UC-ANR), and FarmOS."
15,920
2,021
"As Nvidia pushes for leadership in metaverse, here’s everything it announced at GTC 2021 | VentureBeat"
"https://venturebeat.com/2021/11/09/as-nvidia-pushes-for-leadership-in-metaverse-heres-everything-it-announced-at-gtc-2021"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages As Nvidia pushes for leadership in metaverse, here’s everything it announced at GTC 2021 Share on Facebook Share on X Share on LinkedIn Following on the heels of announcements from both Facebook and Microsoft, Nvidia became the third major tech company over the past few weeks to announce its push for the metaverse. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. At its 2021 GTC conference , today Nvidia, became the third tech giant to formally announce and detail plans to innovate for the metaverse. The following is a recap of all major announcements the company made today, with links to VB’s coverage diving into each one in-depth. Following on the heels of announcements from both Facebook and Microsoft , Nvidia revealed dozens of new tools and leaps forward in its already-developed technologies like GPUs , all designed with an eye toward solving top societal problems — like battling climate change and fighting forest fires — in a virtual world like Nvidia’s AI Omniverse or the broader metaverse, and applying the solutions to the real world. Ultimately, the major takeaway from Nvidia’s announcements is its intent to increase accessibility to AI technologies and immersive experiences available for any enterprise that wants to pursue innovation in these manners. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Becoming a full-stack company requires full-time focus and integration with the metaverse The Santa Clara, California-based, $766 billion market cap company is a worldwide leader in graphics processors and media communications devices for both gaming and professional technologies. At GTC 2021, the company repeatedly shared promises to innovate all of its technology resources for a faster, more efficient, and sustainable future. By transforming into a full-stack tech company , Nvidia can extend its resources and capabilities to further strengthen its already top-of-the-line products like GPUs , driverless automobiles, edge computing , and more — allowing it to perfect its assets and pursue innovation for the metaverse with full force. 
Nvidia's announcements ranged from one of the world's largest language models for enterprises, built to serve new domains and languages, to avatars that populate its intricate Omniverse, to supply chain optimization for more efficient deliveries (including speedier pizza deliveries), and even AI models that use the laws of physics to simulate the behavior of systems in fields like climate science and protein engineering. Nvidia also detailed that it is expanding its LaunchPad program to ten new locations. LaunchPad is a worldwide initiative to help global enterprises quickly determine their AI requirements on the same stack they can then purchase and deploy.
A history of industry leadership
Since the company's founding in 1993, Nvidia has proven itself one of the toughest tech competitors in the market. According to the company's website, in 1993 there were more than two dozen companies in the graphics chip space. In just three years, that number expanded to around 70 companies competing in this niche sector of the tech industry. By 2006, however, Nvidia had outlasted them all and was the only independent one still operating. Could Nvidia extend that run and become a leader in the metaverse and AI spaces as well? The company's CEO, Jensen Huang, seems to believe so.
On Tuesday, in his keynote speech, Huang announced the effort toward which all of Nvidia's tools, features, and technologies will be directed going forward: Earth 2, a digital twin of Earth itself. The news came with a promise to use technology innovations to save the world. "We will build a digital twin to simulate and predict climate change," Huang said in his streamed keynote speech. "This new supercomputer will be E2, Earth 2, the digital twin of Earth, running Modulus-created AI physics at a million times speeds in the Omniverse. All the technologies we've invented up to this moment are needed to make E2 possible. I can't imagine greater and more important news."
While the metaverse boom ramps up, only time will tell whether Nvidia proves its fortitude once again and outlasts its competitors in this space."
15,921
2,021
"Tellius brings real-time analytics within data warehouses | VentureBeat"
"https://venturebeat.com/2021/12/14/tellius-brings-real-time-analytics-within-data-warehouses"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Tellius brings real-time analytics within data warehouses Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Reston, Virginia-based Tellius , a startup that offers an AI-driven intelligence platform to uncover the ‘what’ and ‘why’ in business data, has announced a new feature for enterprises, Live Insights. The feature, according to a statement, is aimed at helping organizations generate powerful data analyses within cloud data warehouses such as Snowflake and Amazon Redshift. It uses the native compute power of the cloud data platform to provide AI-guided insights from terabytes of unaggregated data, without requiring the information to be extracted or moved from the system. The development comes as more and more enterprises continue to adopt the modern data stack and look for solutions to run analytic queries inside of their data warehouse. Live Insights: An upgrade to Tellius Founded in 2015, Tellius’ platform sits on top of the modern data stack and uses AI to help business analysts understand what is happening with business metrics, why metrics change, and how to drive desired business outcomes using natural language queries. The company has built its position around three primary analytics capabilities: data visualization and exploration that supports a natural language search, Guided Insights that automates data analysis using ML and statistical algorithms, and predictive modeling with automated ML. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! With the new feature, Tellius is extending all these offerings, allowing organizations to gain and interpret actionable insights within cloud data platforms in a point-and-click manner without coding. “Prior to the Live Insights release, only the first set of capabilities could directly leverage the customer’s data warehouse for processing queries,” Ajay Khanna, CEO and founder of Tellius, told VentureBeat. “The new Live Insights capability satisfies customer needs to get advanced data analysis from Tellius Guided Insights while keeping data in place and getting real-time answers from their data warehouse. It is about providing more capabilities that customers want to utilize while leveraging their data warehouse as much as possible.” The feature will allow companies to perform multiple analyses within the cloud data warehouse, including automatic identification of trend drivers, comparison of cohorts, and detection of anomalies. 
"The impact is providing insights to decision-makers from across terabytes of data in minutes, while eliminating human bias or confirmation bias from the analysis. Without an automated insights engine such as Tellius, analysts would have to come up with multiple hypotheses and test each one manually by writing and evaluating a new query every time (which is manual and time-consuming)," Khanna explained.
General availability on the way
Currently, the company is testing the new feature with select customers, including a Fortune 500 organization on Snowflake that is already seeing results. It plans to make the capability generally available in the first quarter of 2022. "The modern data stack is not just about re-tooling for the cloud; it presents an opportunity to transform how organizations approach analytics by moving beyond decades-old processes of limiting business users to pre-built dashboards and data specialists to manually querying data. Live Insights is another step in modernizing the analytics experience with AI-driven automation and natural language interfaces that allow our customers to stand up a complete modern data stack and go from data to insights in just a few minutes," Khanna said."
15,922
2,021
"The top 5 enterprise analytics stories of 2021 (and a peek into 2022) | VentureBeat"
"https://venturebeat.com/2021/12/30/the-top-5-enterprise-analytics-stories-of-2021-and-a-peek-into-2022"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The top 5 enterprise analytics stories of 2021 (and a peek into 2022) Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In 2021, everything from databases, to baseball, no-code AI for data scientists, graph analytics, and even events got an analytics makeover this year. Heading into 2022, Chris Howard, the chief of research at Gartner, and his team wrote in its Leadership Vision for 2022 report on the Top 3 Strategic Priorities for Data and Analytics Leaders that “progressive data and analytics leaders are shifting the conversation away from tools and technology and toward decision-making as a business competency. This evolution will take time to achieve, but data and analytics leaders are in the best position to help orchestrate and lead this change.” In addition, Gartner’s report predicts that adaptive governance will become more prominent in 2022: “Traditional one-size-fits-all approaches to data and analytics governance cannot deliver the value, scale, and speed that digital business demands. Adaptive governance enables data and analytics leaders to flexibly select different governance styles for differing business scenarios.” The enterprise analytics sector this year foreshadowed much of what’s to come. Here’s a look back at the top stories in this sector from 2021, and where these themes may carry the industry towards next. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Databases get real-time analytics capabilities and integrations Rockset integrated its analytics database with both MySQL and PostgreSQL relational databases to enable organizations to run queries against structured data in real time. Rather than having to shift data into a cloud data warehouse to run analytics, organizations can now offload analytics processing to a Rockset database running on the same platform. The company’s approach is designed to analyze structured relational data, as well as semi-structured, geographical, and time-series data in real time. Complex analytical queries can also be scaled to include JOINS with other databases, data lakes, or event streams. In addition to integrations with open source relational databases, the company also provides connectors to MongoDB, DynamoDB, Kafka, Kinesis, Amazon Web Services, and Google Cloud Platform, among others. What stood out most about this advancement, though, isn’t specific to Rockset. 
"As the world moves from batch to real-time analytics," the company stated in its press release, "and from analysts running manual queries to applications running programmatic queries, traditional data warehouses are falling short." This trend toward real-time analytics is further propelled by the swift move many companies made to virtual, all-online infrastructure during the pandemic. Real-time analytics in the virtual space will allow companies to more accurately index, strategize, and create new applications using their data.
Popular baseball analytics platform moves to the cloud
It's well known to baseball fans that the data now made available by MLB goes beyond the traditional hits, runs, and errors; the sport has become as complex in its data and statistics as it is in its ever-growing list of new time limits and league rules. Fans now regularly consult a raft of online sites that use this data to analyze almost every aspect of baseball: top pitching prospects, players who hit most consistently in a particular ballpark during a specific time of day, and so on. One of those sites is FanGraphs, which has migrated the SQL relational database platform it relies on to process and analyze structured data to a curated instance of the open source MariaDB database, deployed on Google Cloud Platform. FanGraphs uses the data it collects to enable its editorial teams to deliver articles and podcasts that project, for example, playoff odds for a team based on the results of the SQL queries the company crafts. These insights can assist a baseball fan participating in a fantasy league, someone who wants to place a more informed wager on a game where gambling is legal, or a game developer creating the latest MLB The Show video game.
All of the above require high volumes of data, and one of the things that attracted FanGraphs to MariaDB is the level of performance it could attain using a database-as-a-service (DBaaS) platform. "On top of [MariaDB's] SkySQL's ease and performance, the exceptional service from our SkyDBAs have enabled us to completely offload our database responsibilities. That help goes far beyond day-to-day maintenance, backup, and disaster recovery. We find our SkyDBA looks at things we wouldn't necessarily keep an eye on to secure and optimize our operations," David Appelman, founder and CEO of FanGraphs, stated in a press release. The explosion of data calls for an explosion of efficiency to manage it, and that's a trend the industry can expect to see more of heading into 2022.
Data scientists will soon get a hand from no-code analytics
SparkBeyond, a company that helps analysts use AI to generate new answers to business problems without requiring any code, released SparkBeyond Discovery. The company aims to automate the job of a data scientist. Typically, a data scientist looking to solve a problem may be able to generate and test 10 or more hypotheses a day. With SparkBeyond's engine, millions of hypotheses can be generated per minute from open web data and a client's internal data, the company says. Additionally, SparkBeyond explains its findings in natural language, so a no-code analyst can understand them. The company says its auto-generation of predictive models for analysts puts it in a unique position in the marketplace of AI services: most AI tools aim to help the data scientist with the modeling and testing process only after the data scientist has already come up with a hypothesis to test.
The significance here essentially comes down to "time is money": the more time a data scientist saves solving problems and testing hypotheses, the more money a company saves in turn. "Analytics and data science teams can now leverage AI to uncover hidden insights in complex data, and build predictive models with no coding required [while leveraging the] AI-driven platform to make better business decisions, faster," SparkBeyond stated in an October press release. A service that can explore such a vast number of hypotheses per minute, drawing on internal and external data sources to reveal previously unrecognized drivers of business and scenario outcomes, and then explain its findings in natural language to people who may not code at all, is quite the breakthrough in the analytics space. Notable companies using SparkBeyond Discovery include McKinsey, Baker McKenzie, Hitachi, PepsiCo, Santander, and others.
Life is increasingly split between virtual and in-person – analytics must follow
Hubilo, a platform that helps businesses of all sizes host virtual and hybrid events and gain access to real-time data and analytics, raised $23.5 million in its series A funding round earlier this year. Investments in companies like Hubilo that integrate tools for virtual and in-person tasks, events, meetings, and activities will likely continue into 2022 as the world enters year two of a global pandemic. Digital conferences, meetups, and events can be scaled more easily and with fewer resources than their brick-and-mortar counterparts, and the shift to hybrid and virtual platforms generates a significant amount of data that in-person events otherwise would not, which can prove valuable to companies for tracking and correlating business objectives.
Hubilo promises its customers enhanced data and measurability capabilities. Event organizers using Hubilo's platform can access engagement data on visitors, including the number of logins and new users versus active users. Event sponsors can also determine whether a visitor is likely to purchase from them based on engagement with their virtual booth; data includes the number of business cards received, profile views, file downloads, and more. The platform can also track visitors' activities, such as attending a booth or participating in a video demonstration, and then recommend similar activities. From a business perspective, a sponsor or salesperson can use these features to reach potential prospects through a feature Hubilo calls "potential leads." Its integration capabilities are also key for companies now operating in a hybrid or fully remote capacity: Hubilo features a one-click approach for common "go-to-market platforms including HubSpot, Salesforce, and Marketo, enabling companies to demonstrate ROI through event data integrated with their existing workflows," its press release stated. Integrating analytics tools with CRM and sales platforms is a vital trend that will continue to evolve as the world weighs not how to get things back in person, but whether it should, and what it can gain from hybrid approaches and tools instead.
Graph database gets a revamp
What do the Panama Papers researchers, NASA engineers, and Fortune 500 leaders have in common? They all rely heavily on graphs and databases.
Neo4j, a graph database company that claims to have popularized the term "graph database" and aims to be a leader in the industry, has shown through its growth this year that graphs are becoming a foundational part of the technology stack. Across industry sectors, graph databases serve a variety of use cases, both operational and analytical. A key advantage they have over other databases is the ability to intuitively model, and rapidly query, highly interconnected domains. In an increasingly interconnected world, that is proving valuable for companies. What was once an early-adopter game has snowballed into the mainstream, and it's still growing.
"Graph Relates Everything" is how Gartner put it when including graphs in its top 10 data and analytics technology trends for 2021, and at this year's Gartner Data & Analytics Summit, graphs were, unsurprisingly, front and center. Interest from tech and data decision-makers continues to expand as graph data takes on a role in master data management, tracking laundered money, connecting Facebook friends, and powering the search page ranker in a dominant search engine. With the volume of data that companies store and process increasing in an ever more digital world, tools that provide flexibility for interpreting, modeling, and using data will be key, and their usage is sure to grow. According to Neo4j, that is precisely what it provides its users. "A graph database stores nodes and relationships instead of tables, or documents. Data is stored just like you might sketch ideas on a whiteboard. Your data is stored without restricting it to a pre-defined model, allowing a very flexible way of thinking about and using it," the press release reads.
So what's ahead for 2022? The analytics landscape will become increasingly complex in its capabilities, while simultaneously becoming even more user-friendly for researchers, developers, data scientists, and analytics professionals alike."
15,923
2,017
"Amazon sells AWS cloud assets in China amid tightening regulation | VentureBeat"
"https://venturebeat.com/2017/11/14/amazon-sells-aws-cloud-assets-in-china-amid-tightening-regulation"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Amazon sells AWS cloud assets in China amid tightening regulation Share on Facebook Share on X Share on LinkedIn Amazon building in Santa Clara, California. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. ( Reuters ) — Amazon.com is selling off the hardware from its public cloud business in China, amid tightening regulation over online data that is creating a hurdle for technology firms operating in the world’s second-largest economy. Beijing Sinnet Technology, Amazon’s China partner, said in a filing late on Monday that it would buy the U.S. firm’s Amazon Web Services (AWS) public cloud computing unit in China for up to 2 billion yuan ($301.2 million). “In order to comply with Chinese law, AWS sold certain physical infrastructure assets to Sinnet,” an AWS spokesman said on Tuesday, adding AWS would still own the intellectual property for its services worldwide. “We’re excited about the significant business we have in China and its growth potential.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Chinese regulators are tightening rules on foreign data and cloud services, implementing new surveillance measures and increasing scrutiny of cross-border data transfers. Laws that came into effect in June require firms to store data locally. “This move is mostly around regulatory compliance,” said Charlie Dai, Beijing-based analyst at Forrester Research. He added the move was necessary for AWS to build up its other business areas in the market. AWS has a separate hardware venture in partnership with the Ningxia provincial government in China’s northwest. Amazon said on its website that its public cloud services in the country are exclusively managed by Sinnet. Amazon’s cloud business in China already faced tougher rules due to China’s tight internet controls. In August, Sinnet told customers it would shut down VPNs and other services on its networks that allow users to circumvent China’s so-called Great Firewall system of censorship, citing direct instructions from the government. The move casts a shadow over similar foreign ventures in the country. Microsoft Corp, Oracle Corp and IBM Corp are also facing tough new regulatory challenges in localizing their data storage units. Global firms in China, including Apple Inc, have this year transferred data to Chinese ventures overseen by local authorities. Microsoft operates its Azure cloud services unit in partnership with China-based 21Vianet Group. 
“We expect other foreign players, such as Oracle and IBM, will also ensure regulatory compliance as long as they want to provide public cloud services in China,” said Dai. Microsoft, Oracle and IBM did not immediately respond to requests for comment on Tuesday. Cloud services have become a crowded and competitive field in China in recent years, with Alibaba Group Holding’s cloud unit opening over a dozen overseas data centers since 2016. Chinese firms account for roughly 80 percent of total cloud services revenue in China, and roughly half of the data center market in 2017, according to Synergy Research Group. ( Reporting by Cate Cadell; Editing by Stephen Coates and Christopher Cushing ) "
15,924
2,019
"Russia opens civil cases against Facebook and Twitter over failure to comply with local data laws | VentureBeat"
"https://venturebeat.com/2019/01/21/russia-opens-civil-cases-against-facebook-and-twitter-over-failure-to-comply-with-local-data-laws"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Russia opens civil cases against Facebook and Twitter over failure to comply with local data laws Share on Facebook Share on X Share on LinkedIn 3D-printed Facebook and Twitter logos are seen in this picture illustration made in Zenica, Bosnia and Herzegovina on January 26, 2016. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. ( Reuters ) — Russia’s communication watchdog said on Monday it was opening administrative proceedings against Twitter and Facebook for failing to explain how they plan to comply with local data laws, the Interfax news agency reported. Roskomnadzor, the watchdog, was quoted as saying that Twitter and Facebook had not explained how and when they would comply with legislation that requires all servers used to store Russians’ personal data to be located in Russia. The agency’s head, Alexander Zharov, was quoted as saying the companies have a month to provide information or else action would be taken against them. Russia has introduced tougher internet laws in the last five years , requiring search engines to delete some search results, messaging services to share encryption keys with security services and social networks to store Russian users’ personal data on servers within the country. At the moment, the only tools Russia has to enforce its data rules are fines that typically only come to a few thousand dollars or blocking the offending online services, which is an option fraught with technical difficulties. However, sources in November told Reuters that Moscow plans to impose stiffer fines on technology firms that fail to comply with Russian laws. ( Reporting by Maria Kiselyova; Writing by Tom Balmforth; Editing by Raissa Kasolowsky ) VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,925
2,021
"Starburst raises $100 million to take on data lake rivals | VentureBeat"
"https://venturebeat.com/2021/01/07/starburst-raises-100-million-to-take-on-data-lake-rivals"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Starburst raises $100 million to take on data lake rivals Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Starburst Data has raised $100 million as the data analytics company continues to ride the surge in data lakes. Andreessen Horowitz led the round , which included Index Partners, Coatue, and Salesforce’s venture capital arm. The funding comes just six months after Starburst raised $42 million , bringing its total to $164 million for a valuation of $1.2 billion. And the latest announcement came on the same day another data lake company, Dremio, announced it had raised $100 million. So what’s this arms race all about? As companies grapple with growing amounts of information, data lakes allow them to pool structured and unstructured data in one spot, which then facilitates the movement and processing of that data. “We believe we are solving the biggest problem that the big data era couldn’t: offering fast access to data, regardless of where it lives,” Starburst CEO Justin Borgman wrote in a blog post. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! In the case of Starburst, it’s built on Presto, an open source project developed at Facebook. Indeed, three of Starbursts’ cofounders are from Facebook, where they worked on the project. Starburst began life as Hadapt, a startup founded by Borgman. Teradata acquired Hadapt in 2014 but spun Starburst off in 2017. Along the way, Hadapt-Starburst shifted its focus from Hadoop to Presto. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,926
2,021
"Why data gravity won't stop the move to multicloud | VentureBeat"
"https://venturebeat.com/2021/03/06/why-data-gravity-wont-stop-the-move-to-multicloud"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Why data gravity won’t stop the move to multicloud Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. You’ve probably heard of “data gravity” and how it can inhibit a hybrid strategy. The idea is that, as you amass and store data in one particular cloud, this body of data exerts a gravitational pull on the apps and services that orbit around it, making it impossible for you to then move that data to another cloud. But data gravity doesn’t have to stymie an organization from adopting a multicloud or hybrid-cloud strategy. In fact, the opposite is true. In the oft-used analogy, if compute infrastructure is the machinery of today’s world, then data is the oil — meaning infrastructure is not productive without it. It does not make sense for applications and services to run where they do not have quick and easy access to data, which is why data exerts such a gravitational pull on them. When services and applications are closer to the data they need, users experience lower latency and applications experience higher throughput, leading to more useful and reliable applications. Simplistically, one could be tempted to locate all data, and the applications within its orbit, in a single location. But regulatory and latency concerns are two reasons why this is not realistic for most global enterprises. A single public cloud is a pipedream The idea that a single public cloud will solve all of your problems is a pipedream that no organization can realistically make work (nor would they want to). Sure, it may sound easier in theory to work with only one vendor, with only one bill to pay and the same underlying infrastructure for all of your applications. But between the demand for edge computing, the need to comply with data sovereignty regulations and the general need to be nimble and flexible, one cloud for all of your data is just not practical for a business to compete in today’s market. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! It’s true that some processing is best done in a global central location — model training for artificial intelligence and machine learning (AI/ML) applications, for example, thrive on having massive amounts of data to analyze because it increases model accuracy. However, the inference of AI/ML applications frequently can’t be done at the core and needs to be at the edge instead. 
For example, a manufacturer in Thailand relying on data generated from heat sensors on the factory floor needs to be able to analyze that data in real time in order for it to have value. That decision-making needs to be done close to where the data is generated to meet the business requirements and make an impact on operations.

The challenges of data gravity

One of the most obvious challenges of data gravity is vendor lock-in. As you amass more data in one location, and more of your apps and services rely on that data, it becomes increasingly difficult, not to mention costly, to move out of that original location. In our Thai factory edge example, some apps must move to where data is generated in order to meet latency requirements. Latency is essentially the time budget you have available to process the data for it to have an impact or be needed by the end user. Where the data is located must be within the latency budget for it to be useful. When apps and data are separated by too great of a latency time budget, the corresponding reduction in responsiveness can greatly hinder your organization. Take, for instance, a smart cities application such as license plate recognition for border control. Once a license plate is scanned, some apps must produce a near real-time response to fit within the latency time budget (Amber alerts, stolen vehicles, etc.). If the latency for this analysis exceeds the latency time budget, the data is much less meaningful. If the data and apps are too far away from each other, the scanning becomes useless. Beyond the business requirements and expectations for quick response times, data sovereignty laws also regulate where data can be stored and whether it can move across jurisdictional boundaries. In many countries it’s against the law to export certain types of data beyond the borders of the country. The average Global 2000 company operates in 13 countries. If that is the reality for your organization, you can’t easily move data while abiding by those laws. If you try to take the time to anonymize that data to meet sovereignty requirements, your latency budget goes out the window, making it a no-win situation.

Address data gravity with the hybrid cloud

Inevitably, wherever you store data, it will pull apps and services to it. But data sovereignty and latency time budgets for edge applications almost guarantee that multinational companies cannot operate with a simple single-cloud strategy. With a hybrid cloud infrastructure, organizations can spread out apps and services to where their data is, to be closer to where they need it, addressing any latency problems and data sovereignty requirements. The key to making this work is to use a common operating environment across these various clouds and data center locations, such as a Kubernetes platform. If your organization is maintaining applications for many different operating environments, the associated complexity and costs may kill you competitively. Organizations can use a mix of AWS, Azure, Google Cloud Platform, VMware, on-premises, and more — but there needs to be a way to make the apps and services portable between them. With a common operating system, you can write applications once, run them where it makes sense, and manage your whole environment from one console.
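To make that portability concrete, here is a minimal sketch, assuming the official Kubernetes Python client and a kubeconfig with one context per cloud; the context names, image, and namespace are hypothetical and only illustrate applying the same Deployment spec to clusters running in different clouds:

```python
# Minimal sketch: apply one Deployment spec to Kubernetes clusters in two clouds.
# Assumes `pip install kubernetes` and a kubeconfig with contexts named
# "aws-cluster" and "gcp-cluster" (hypothetical names).
from kubernetes import client, config

def deployment_spec() -> client.V1Deployment:
    container = client.V1Container(
        name="sensor-analytics",  # hypothetical edge-analytics app
        image="registry.example.com/sensor-analytics:1.4.2",
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "sensor-analytics"}),
        spec=client.V1PodSpec(containers=[container]),
    )
    return client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="sensor-analytics"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "sensor-analytics"}),
            template=template,
        ),
    )

for context in ("aws-cluster", "gcp-cluster"):
    # Load credentials for the cluster in this cloud, then create the Deployment.
    api_client = config.new_client_from_config(context=context)
    apps = client.AppsV1Api(api_client)
    apps.create_namespaced_deployment(namespace="default", body=deployment_spec())
    print(f"Deployed sensor-analytics to {context}")
```

The same loop could include an on-premises cluster as just another context, which is the portability argument being made here.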
Setting up for success now

In the coming years, datasets are only going to continue to grow (exponentially), especially as organizations depend increasingly on AI and machine learning applications. According to IDC, worldwide data creation will grow to an enormous 163 zettabytes by 2025. That’s ten times the amount of data produced in 2017. That means any challenges brought on by the gravitational pull of the data you use are only going to be exacerbated if you don’t set yourself up for success now. With a hybrid cloud infrastructure, you can set up a constellation of data masses to comply with increasing global data sovereignty laws and process and analyze data in edge locations that make sense for the business and end users.

Brent Compton is Senior Director of Data Services and Cloud Storage at Red Hat. "
15,927
2,021
"Open source can boost EU economy and digital autonomy, study finds | VentureBeat"
"https://venturebeat.com/2021/09/06/open-source-can-boost-eu-economy-and-digital-autonomy-study-finds"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Open source can boost EU economy and digital autonomy, study finds Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Let the OSS Enterprise newsletter guide your open source journey! Sign up here. A new report from the European Commission (EC) sheds light on the impact open source software (OSS) and open source hardware (OSH) could have on the European Union (EU) economy. The report, titled “The impact of Open Source Software and Hardware on technological independence, competitiveness, and innovation in the EU economy” ( PDF ), was carried out on behalf of the EC by Fraunhofer ISI and OpenForum Europe. It affirms that EU-based governments and private companies invested around €1 billion ($1.2 billion) in open source software in 2018, resulting in an EU economy benefit of between €65 billion and €95 billion. Crucially, the report predicted that a 10% increase in contributions to OSS projects would generate an extra 0.4% to 0.6% GDP and lead to an additional 600 technology startups across the region. However, Europe’s institutional OSS capacity is “disproportionately smaller” than the scale of value created by OSS, according to the report. To counter this, it offers a number of public policy recommendations aimed at helping both the OSS and OSH spheres flourish, including actively engaging in a “transition toward more openness in its [the EU’s] political and investment culture.” More specifically, the report concludes that an EC-funded network of open source program offices (OSPOs) would help bolster institutional capacity and accelerate the “consumption, creation, and application of open technologies.” OSPOs, in a nutshell, bring formality and order to open source programs and have become an integral part of businesses ranging from tech titans like Google to venture capital-backed startups. The report also recommends the EU develop an “open source industrial policy” and include it in its major policy frameworks, as it has done with the European Green Deal and the AI Act. Digital autonomy The benefits of open source software are well understood , as the technology intersects with everything from web servers and automobiles to air traffic control and medical devices. In fact, the web’s growth over the past 30 years has been fueled in large part by OSS, as it lowers the barrier to entry and reduces the cost of ownership compared to proprietary alternatives. 
As the report notes, OSS also helps avoid vendor lock-in, increasing an organization’s digital autonomy, or “technological independence,” as the report calls it. Digital autonomy essentially means an organization retains full control over its tech stack and data, even when it works with external companies and service providers. Countless open source platforms have gone to market with that exact promise to would-be enterprise customers — more flexibility to deploy software as they see fit while keeping all their data in-house by hosting it on their own infrastructure. This includes myriad open source Slack alternatives like Element, open source customer engagement platforms like Chatwoot, and open source feature management tools like Unleash. Put simply, Europe is promoting open source as a means to achieve sovereignty, free from the proprietary shackles of billion-dollar Silicon Valley companies. "
15,928
2,021
"Google taps T-Systems to offer a 'sovereign cloud' for German organizations | VentureBeat"
"https://venturebeat.com/2021/09/08/google-taps-t-systems-to-offer-a-sovereign-cloud-for-german-organizations"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google taps T-Systems to offer a ‘sovereign cloud’ for German organizations Share on Facebook Share on X Share on LinkedIn T-Systems headquarters in Frankfurt Main Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Google has announced the first major partnership to advance its previously stated mission of offering businesses more data sovereignty. This effort is aimed at organizations that require more flexibility and control over their data, don’t want to rely on a single cloud provider based in an entirely different country, or simply want to avoid vendor lock-in. The internet giant has revealed a new joint service with Deutsche Telekom’s IT services and consulting subsidiary T-Systems , which will “manage sovereignty controls and measures” such as encryption and identity management of the Google Cloud Platform for Germany-based customers who need it. Moreover, T-Systems — an existing Google Cloud technology partner — will oversee other integral parts of the Google Cloud infrastructure as part of the new T-Systems Sovereign Cloud. This includes supervising physical or virtual access to sensitive infrastructure, as in routine maintenance and upgrades. Control It’s all about enabling German enterprises and public sector bodies, including health care organizations, to assert more control over their sensitive data on an independently managed platform while still benefiting from the power of Google’s technology and expertise. Data sovereignty has emerged as a thorny issue in the world of cloud computing, with consumers and businesses alike increasingly concerned about where their data lives, who actually owns it, and who they might be sharing it with. Throw into the mix the growing array of regional regulations — like GDPR — that dictate where data must be stored and where it can or can’t be transferred, and it’s clear that while cloud computing spend might be going through the roof , the landscape has become trickier to navigate. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! As such, major cloud providers (including Google) continue to extend their datacenter reach to new regions , which not only helps improve data transfer speeds and reduce latency, but allows them to offer enterprises some data residency controls. But that still doesn’t go far enough for certain industries and markets that require a tighter control over how their data is handled, particularly as pertains to personally identifiable information (PII). 
The open source factor

This is a problem open source players have also set out to solve, as myriad commercial open source SaaS platforms emerge to offer more data control and autonomy than their proprietary rivals. Europe has also been pushing open source as a way to free itself from the shackles of Silicon Valley companies. Just this week, the European Commission (EC) published a report that sheds light on the impact open source technology has had on the European economy, noting that a 10% increase in contributions to open source projects could generate an extra 0.4% to 0.6% of GDP and lead to an additional 600 technology startups across the region. With that in mind, Google’s extended partnership with T-Systems has specific provisions for “openness and transparency,” including supporting easy integrations with existing IT environments and serving up access to Google’s “open source expertise that provides freedom of choice and prevents lock-in,” according to a press release. The first fruits of Google’s partnership with T-Systems are expected to land in mid-2022, with Google noting that this is the first of several such tie-ups with local technology providers in Europe. This is a similar model to what we’ve seen in other regions around the world, including in China, which introduced a law in 2017 stipulating that foreign companies must not only store data locally, but also work directly with local partners. Amazon, for example, had to partner with two local entities when it expanded its AWS cloud unit into China. By contrast, Google’s decision to partner with T-Systems isn’t a legal or regulatory requirement, but it goes some way toward meeting data sovereignty expectations, particularly in tightly regulated industries with strict data protection needs. “The sovereign cloud solution we are partnering with T-Systems to create will provide public and private-sector organizations with an additional layer of technical and operational measures and controls that ensure German customers can meet their data, operational, and software sovereignty requirements,” Google Cloud CEO Thomas Kurian said. "
15,929
2,021
"1Password hires its first CTO to scale in the enterprise and beyond | VentureBeat"
"https://venturebeat.com/2021/09/23/1password-hires-its-first-cto-to-scale-in-the-enterprise-and-beyond"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 1Password hires its first CTO to scale in the enterprise and beyond Share on Facebook Share on X Share on LinkedIn 1Password's new CTO Pedro Canahuati Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. 1Password has hired its first chief technology officer (CTO) as the password management and credentials security platform doubles down on the enterprise growth that has netted big-name customers like Slack, IBM, Shopify, and GitLab. The Canadian company has come a long way since it launched its first password manager for consumers some 15 years ago. Founded out of Toronto in 2005 ahead of its official release a year later, 1Password has increasingly chased the enterprise dollar, doubling its number of paying business customers to more than 90,000 in the past two years and hitting annual recovering revenue (ARR) of $120 million. Things appear to be going swimmingly for 1Password, so why hire a CTO now? “In a word? growth,” 1Password CEO Jeff Shiner told VentureBeat over email. Although 1Password had grown organically and been profitable since its inception, the company’s decision to accelerate its enterprise push was enabled in large part by its gargantuan $200 million series A round back in 2019 , its first institutional investment. Off the back of this raise, 1Password expanded into secrets management to help companies secure their infrastructure, launched a new API for security teams to funnel 1Password sign-in data directly into their cybersecurity applications, and introduced a new Linux desktop app for DevOps teams. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! A few months back, 1Password raised another $100 million at a $2 billion valuation. Above: 1Password for Linux On top of all that, 1Password recently announced its first chief financial officer (CFO), chief product officer (CPO), chief marketing officer — and now its first CTO. “We recently crossed 500 employees, and it became clear to the leadership that the company would benefit from a single leader in place to prioritize technology innovation and look around the corner at what the market needs next,” Shiner said. “There’s a lot we could do, but what should we do to advance our business and our mission?” That’s where Pedro Canahuati enters the fray, joining 1Password to head up its technology endeavors after nearly 12 years at Facebook, where he most recently spearheaded the social network’s security and privacy efforts. 
Hypergrowth

For context, Canahuati had been with Facebook since it had a measly 175 million users, all the way through its IPO and on to becoming one of the biggest companies in the world with more than 3 billion users across its properties. Behind the scenes, this translates into growing from a “single datacenter and a few dozen engineers managing thousands of servers to dozens of datacenters, millions of servers, and over a thousand production engineers,” Canahuati told VentureBeat. So Canahuati knows a thing or two about scaling engineering and security at hypergrowth organizations, including the inherent challenges. “Facebook is a structured environment built from the ground up — we had to build a lot of the underlying technologies and infrastructure ourselves,” Canahuati explained. “With that, the company also became a pretty big target as it became the platform for several billion users. One of the biggest hurdles, from a security perspective, was keeping up with the growth of the company, the user base, and the ever-changing threat landscape. The problems became more complex over time, and we had to build tools like static and dynamic analysis software that now finds over 50% of security bugs through automation. We built code-level abstractions that solved some of the OWASP top 10 industry problems so our software developers could focus more on rapid experimentation than on security. This isn’t even scratching the surface of what we built.” Canahuati said he could “probably write a book” about the lessons he learned during his Facebook tenure. “I learned a ton about building feature-rich, secure infrastructure and products with high availability,” he said. “I was part of building a world-class infrastructure leadership team — the best in the big tech world, in my opinion. I had to learn how to become a stronger leader, build strong leadership teams that helped us be resilient to new requirements and move sustainably fast on stable, secure infrastructure with an ever-increasing demand.” It’s probably fair to say Canahuati could have left Facebook for any number of big tech companies, but he was particularly interested in moving up to the CTO role. “I spent a lot of time thinking about what kind of role and company I wanted to join,” Canahuati said. “It was important to me to find the cross-section of solving meaningful problems for people, strong leadership, a loved brand, and where my skills and experiences could help the company become even stronger. I prefer companies that take a consumer-first approach to building products because they tend to build more user-friendly applications.” Above: 1Password: Managing family members

What’s next

It’s important to note that while 1Password does toot its own enterprise horn, it’s still very much a consumer service company. This presents an extra challenge, as 1Password has to deal with myriad expectations and requirements ranging from individual users and families to small businesses and enterprises. Such a product roadmap has the potential to get messy without due care. “1Password is at an inflection point in its transition — one that began a few years ago — from a pure consumer company to one that also offers solutions to businesses,” Canahuati said. “Our fan base is passionate and has strong opinions about our products, and we’ll need to balance that against our priorities. I’ll be taking a holistic view of the products we offer and the products that businesses and families want and will help thread the needle between the two.
It’s going to be a challenge for sure, but it’s one that I embrace.” It’s widely acknowledged that the vast majority of data breaches are due to compromised passwords, which is why 1Password has managed to infiltrate both the consumer and enterprise spheres with a platform that enables users to store passwords securely and access myriad online services with a single click, while it can also be used to store other private documents, such as software licenses, credit card details. More recently, 1Password has started to manage and safeguard infrastructure “secrets,” such as API tokens, keys, and certificates. Above: 1Password: Secrets automation The world has rapidly transitioned to remote work over the past 18 months, a trend that shows little sign of reversing. This has opened a can of worms for workplace security, in terms of employees signing into myriad cloud systems and applications on their own networks and devices. This is partly why the global password management market is gearing up to become a $3 billion industry in the next five years, up from $1.2 billion last year. To prepare for this boom, Canahuati said he will be focused on supporting all the technology teams across the company, including engineering, security, production environments, data, and IT. “As 1Password has grown tremendously over the past few years, I’ll be focused on ensuring that we can scale up the teams, our infrastructure, and capabilities to build more awesome technology,” he said. “This will help us be more nimble while building a diverse suite of products that help families and businesses.” More specifically, Canahuati hinted that more third-party integrations were in the pipeline, after having already unveiled a handful of partnerships in the past year. These include a tie-up with Privacy.com to let users create virtual payment cards and a duo of enterprise integrations with Slack and Rippling. “We’ll continue to go after similar opportunities that make it easier for businesses and families to stay safe,” Canahuati added. Data sovereignty A quick peek across the broader SaaS sphere reveals a growing array of software that embraces an open source model and is designed to attract industries that require full autonomy and sovereignty over their data. This is particularly true in highly regulated sectors such as finance, government, or health care that manage a lot of personally identifiable information (PII). Elsewhere, some companies or countries might even block access to online services like 1Password. Having the freedom and flexibility to deploy software on a company’s own infrastructure is clearly a selling point for some — so is this something 1Password might consider in the future? 1Password is in fact currently seeking feedback on this very question, though Canahuati wouldn’t confirm whether this idea would be greenlighted. “Currently, we believe that a 1Password membership is the best way to store, sync, and manage your passwords and other important information,” he said. “However, we’re constantly looking into new avenues to make sure we always offer what’s best for our customers. Right now, we’re in the exploratory phase of investigating a self-hosted 1Password. 
We’ll assess the demand for this as we gather results.” With a $2 billion valuation, most of the C-level bases now covered (CFO, CMO, CPO, and CTO) and a roster of high-profile investors that includes Accel, Slack, Ashton Kutcher’s Sound Ventures, and Atlassian’s founders, it seems fair to ask — is 1Password gearing up to become a public company anytime soon? “I can’t speak to our long-term business outcomes, but our mission is to help any company embrace security and privacy,” Shiner added. “We’ll pursue any product or business strategy that helps us achieve that goal.” "
15,930
2,021
"Starburst launches fully-managed cross-cloud analytics | VentureBeat"
"https://venturebeat.com/2021/11/29/starburst-launches-fully-managed-cross-cloud-analytics"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Starburst launches fully-managed cross-cloud analytics Share on Facebook Share on X Share on LinkedIn Galaxies and deep space dust Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Let the OSS Enterprise newsletter guide your open source journey! Sign up here. Starburst , the commercial entity behind the open source Presto -based SQL query engine Trino , has announced a new fully-managed, cross-cloud analytics product that allows companies to query data hosted on any of the “big three’s” infrastructure — without moving the data from its original location. While many of the big cloud data analytics vendors support the burgeoning multicloud movement by making their products available for each platform, problems remain in terms of making data stored in multiple environments easy to access. Companies still have to find a way to “pool” data from these different silos, be it through moving data to a single cloud or data warehouse, which is not only time-consuming but can also incur so-called “ egress ” fees for transferring data. And this is what Starburst is now addressing, by extending its fully-managed software-as-a-service (SaaS) product to allow its customers to analyze data across the major clouds with a single SQL query. From Presto to Trino Starburst has followed a rather circuitous route to where it is today. The company’s foundations can be traced back to 2012 when a group of Facebook engineers developed a distributed SQL query engine called Presto to help its in-house data scientists and data analysts run faster queries on huge data sets. Facebook open-sourced Presto the following year, but following an ongoing disagreement with the powers-that-be at Facebook, the Presto creators eventually departed the social network and launched a fork called PrestoSQL — which was rebranded as Trino last December. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! As with many similar open source projects , Trino now has a commercial counterpart known as Starburst, whose founders include the original Presto creators among other early Presto adopters. Initially, Starburst was offered in a single “enterprise” flavor that could be self-managed and hosted on-premises or any public cloud. Earlier this year, Starburst launched a new fully-managed SaaS offering called Starburst Galaxy, which features an integrated SQL editor out-of-the-box for querying data and connectors for integration with data sources. 
Above: Starburst Galaxy: Connecting a new data source

Starburst Galaxy was originally only available for AWS, but to support Starburst’s push into cross-cloud analytics, the company is now extending support to Microsoft’s Azure and Google Cloud Platform (GCP). It’s worth noting that Starburst had previously introduced a cross-cloud analytics product called Stargate for the self-managed incarnation. Now Starburst is bringing this same functionality to its fully-managed service, where it handles all the infrastructure and the customer doesn’t have to worry about what’s going on under the hood. “This allows us to extend cross-cloud analytics capabilities to anyone and any department without the help of central IT,” Starburst cofounder Matt Fuller told VentureBeat. “This allows domain experts to take ownership of the data they know best and deliver it as a product to the rest of the organization.”

Multicloud mayhem

So what is the big brouhaha over multicloud anyway? Isn’t it easier for companies to pick a public cloud and stick with it? In some cases, that might well be true, but companies will often pursue a multicloud approach for any number of reasons. Some clouds are better at certain things than others, in which case it might make sense to use GCP for one thing, and AWS for another. Moreover, cost and compliance considerations might also lead a company toward a multicloud or hybrid-cloud approach, mixing on-premises infrastructure with one or more public clouds. And sometimes, companies can find themselves in a multicloud world by happenstance, through acquiring companies that use different clouds or because different internal departments select the cloud that best suits their needs. Cross-cloud analytics goes some way toward helping these companies circumvent the data silos that all these various scenarios create. “By having data in these different clouds, it creates a further extension of the data silo problem where data not only exists in different data sources, but it is also now in very different locations,” Fuller said. “That is why cross-cloud analytics is needed — otherwise, data has to be moved to a single cloud. Much like the previous solution to the problem of attempting to move all data into a single data warehouse.” It’s also worth noting that even in situations where a company does use a single cloud provider, the company may have to store data in different cloud “regions” to satisfy local data residency requirements. In such cases, using solutions that involve transferring data between systems or locations isn’t an option — which is where Starburst’s latest solution could really shine. “Cross cloud analytics allow for processing to be pushed to the region where the data resides and only have aggregated insights leave,” Fuller explained. “If restricted data must leave, it can be masked to adhere to the requirements.” "
15,931
2,021
"Open source Calendly alternative Cal.com promises greater data control | VentureBeat"
"https://venturebeat.com/2021/12/21/open-source-calendly-alternative-cal-com-promises-greater-data-control"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Open source Calendly alternative Cal.com promises greater data control Share on Facebook Share on X Share on LinkedIn Cal.com: Open source scheduling infrastructure Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Let the OSS Enterprise newsletter guide your open source journey! Sign up here. It’s a scenario most people recognize — endless back-and-forth emails, Slack messages, and phone calls trying to figure out a suitable time and date to host that all-important client meeting. The more people that are involved, across locations and time zones, the more difficult it is to find that elusive slot where everyone can finally discuss project developments — be that in-person, or virtually. In truth, this is a problem that numerous companies have set out to solve, with the likes of automated meeting scheduling platform Chili Piper this year raising $33 million from notable backers including Google’s AI-focused Gradient Ventures, while the perennially popular Calendly secured $350 million at a chunky $3 billion valuation. The latest company to throw its hat into the proverbial scheduling ring is Cal.com , which is pitching its open source approach as a major selling point and differentiator — one that enables companies to retain full control over their data. Founded back in June initially as Calendso , Cal.com offers what it calls “scheduling infrastructure for everyone,” and can be used by anyone from yoga instructors and SMEs all the way through to enterprises. Three months on from its formal launch and rebrand , Cal.com recently announced that it has raised $7.4 million in seed funding from a slew of angel investors and institutional backers, including lead investor OSS Capital and YouTube’s cofounder and former CEO Chad Hurley. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! While it’s still early days for the startup, Cal.com claims to facilitate some 3,500 monthly bookings and 10,000 users, including from big VC-backed companies such as Klarna and Deel. And in the decade ahead, cofounders and co-CEOs Bailey Pumfleet and Peer Richelsen, who are based in the U.K. and Germany respectively, have major growth ambitions. “Our mission is to connect a billion people by 2031 through calendar scheduling, and this fresh funding ensures we have enough runway to pursue our goal,” Richelsen wrote in a blog post announcing the cash injection. 
Under the hood

Similar to Calendly, meeting organizers use Cal.com to share a scheduling link with invitees, who are then asked to choose from a set of time slots — the slot that everyone can make is then added to everyone’s calendar. Above: Cal.com in action Cal.com ships with a bunch of pre-built integrations, including Google Calendar, Outlook Calendar, and Apple Calendar, while it also offers support for Stripe, so service providers such as teachers can easily accept payments. Elsewhere, an open API enables users to integrate Cal.com with their own platform.

Data sovereignty

Because Cal.com is an open source product available via GitHub, companies can remain in full control of all their data through self-hosting, while also managing the entire look and feel of their Cal.com deployment via its white-label offering. If companies don’t want the hassle of self-hosting, Cal.com is available as a fully hosted service, too. The concept of data sovereignty has become increasingly pertinent across the technology spectrum, as companies face a growing array of privacy regulations in addition to increased expectations from customers that their data will be treated with kid gloves. Enabling organizations to choose where and how their data is hosted allows them to choose which region’s laws apply to its governance, while also ensuring that they don’t pass private data through unnecessary third-party SaaS apps and servers. This is particularly important in highly regulated industries such as health care, or even in nation-states. With that in mind, Cal.com also offers a specific hosting option for both AWS GovCloud and Google Cloud for Government, and says that it is fully HIPAA-compliant. “Transparency and control of companies’ data is what can make or break their choice in which software they use,” Pumfleet told VentureBeat. “We’ve spoken with many companies who simply cannot use any other solution out there — due to the inability to self-host, a lack of transparency, and other data protection related characteristics which Cal.com has. This is absolutely vital for industries like health care and government, but an increasing number of non-regulated industries are [also] looking at how their software products treat and use their data.” A slew of commercial open source software (COSS) startups have gone to market in recent times with the exact same data control promise. Chatwoot, for example, is an open source customer engagement platform that companies can host on their own infrastructure, while Element brings something similar to the team communication sphere, as does PostHog for product analytics. Cal.com’s three core pricing plans span basic scheduling on the free tier through to $39/month for the enterprise incarnation, which includes the white-label option, video conferencing, premium integrations, audit logs, and single sign-on (SSO) support. On top of that, Cal.com also offers a separate infrastructure pricing plan starting at $449/month, which allows companies to build their own scheduling products on top of Cal.com while leveraging all the premium features of the enterprise plan. With $7.4 million in the bank, Cal.com is well financed to build out new enterprise-focused features and launch what it calls an App Store for Time, which will allow any developer to build and monetize apps on top of Cal.com.
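As a rough illustration of what integrating a self-hosted scheduling instance through an open API might look like, here is a hypothetical sketch using Python's requests library. The base URL, endpoint path, payload fields, and auth header are placeholders, not Cal.com's documented API, so consult the project's actual API reference before building on this:

```python
# Hypothetical sketch of calling a self-hosted scheduling API to create a booking.
# The endpoint, fields, and auth header below are illustrative placeholders,
# not Cal.com's documented API.
import requests

BASE_URL = "https://scheduling.example-company.com/api"  # your self-hosted instance
API_KEY = "replace-with-a-real-key"

def create_booking(event_type_id: int, start_iso: str, attendee_email: str) -> dict:
    response = requests.post(
        f"{BASE_URL}/bookings",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "eventTypeId": event_type_id,
            "start": start_iso,                      # e.g. "2022-01-10T15:00:00Z"
            "attendee": {"email": attendee_email},
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    booking = create_booking(42, "2022-01-10T15:00:00Z", "client@example.com")
    print(booking)
```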
“Cal.com is designed to be a platform that you can build a whole business upon — you have the flexibility to white-label the whole experience, the ability to self-host, and the ability to extend Cal.com and modify it in any way you like,” Pumfleet added. "
15,932
2,021
"Elastic CEO reflects on Amazon spat, license switch, and the principles of open source | VentureBeat"
"https://venturebeat.com/2021/12/27/elastic-ceo-reflects-on-amazon-spat-license-switch-and-the-principles-of-open-source"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Elastic CEO reflects on Amazon spat, license switch, and the principles of open source Share on Facebook Share on X Share on LinkedIn Elastic's IPO day in October, 2018 Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Let the OSS Enterprise newsletter guide your open source journey! Sign up here. While the value of open source software (OSS) is clear for most to see , community-driven software often finds itself at the center of many heated debates, spanning everything from security deficiencies to license changes. Way back at the start of 2021, one of those big “hot potato” OSS talking points reared its head when Elastic revealed that it was transitioning its database search engine Elasticsearch , alongside the Kibana visualization dashboard, from an open source Apache 2.0 license to a duo of proprietary “source available” licenses. The move was a long time coming, and it followed a host of other formerly “open source” companies that made similar switches to protect their business interests — MongoDB in 2018 , and CockroachDB a year later , to name a couple. With the dust now (mostly) settled in the wake of Elastic’s relicensing brouhaha, VentureBeat caught up with cofounder and CEO Shay Banon to get his thoughts on everything that went on: why they made the license change; what impact — if any — it has had on business, and what being a “ free and open ” company (vs. “open source”) really means. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! License to search Companies typically use Elasticsearch for any application that relies on the access and retrieval of data or documents — it’s a search engine, one that’s used by some of the world’s biggest companies from Netflix to Slack. As the project’s core developer, Elastic — which is dual-headquartered in the Netherlands and U.S. — sells premium features and fully managed services on top of Elasticsearch. The “problem” with a pure, fully permissive open source license is that anyone — such as large cloud vendors — can take that software and more or less do what they like with it. This includes selling premium or hosted services on top of the open source project, cutting the core creator and project maintainer (e.g. Elastic) out altogether. This does make sense on many levels, as it helps the cloud vendor create a stickier platform and enables its customers to get all their computing services from a single provider. 
But for the core project maintainer, it means it’s putting the lion’s share of the spadework into the project, including security and feature upgrades, without getting any of the rewards. But when a third-party builds a commercial service on top of an open source project, it can also create a lot of confusion, with end-users often not clear on which version of a product they are actually using. And that gets to the crux of what Elastic’s move was all about — it was about avoiding confusion between Elastic’s own commercial Elasticsearch offering and Amazon’s. Amazon launched the Amazon Elasticsearch Service way back in 2015. At the time, Amazon’s chief technology officer announced — in a tweet that was later deleted — that it was in partnership with Elastic. This is something that Banon has previously taken great umbrage at, noting that there was no such partnership in place. In a blog post back in January, Banon wrote : Imagine our surprise when the Amazon CTO tweeted that the service was released in collaboration with us — it was not. And over the years, we have heard repeatedly that this confusion persists. Above: Amazon CTO tweet, which was later deleted What’s in a name? The road to Elastic’s big license change in January was a long one. In early 2019, AWS announced it was launching a new “open distro” for Elasticsearch, with participation from other notable companies including Netflix, which was pitched as a “value-added” distribution (i.e. not a fork) that was 100% open source and supported by AWS — it also came with the promise that it would continue to send any code contributions and security patches back upstream to the original Elasticsearch project. But why launch this distro when Elasticsearch was already open source? For that, we have to go further back. In 2018, Elastic announced that it was making the code from a proprietary product called X-Pack openly available for anyone to inspect and contribute to. This is generally known as “source available,” rather than “open source,” but it served to “muddy the waters” between open source and proprietary code, according to Amazon VP Adrian Cockcroft, who wrote in a 2019 blog post : Since June 2018, we have witnessed significant intermingling of proprietary code into the code base. While an Apache 2.0 licensed download is still available, there is an extreme lack of clarity as to what customers who care about open source are getting and what they can depend on. For example, neither release notes nor documentation make it clear what is open source and what is proprietary. Enterprise developers may inadvertently apply a fix or enhancement to the proprietary source code. This is hard to track and govern, could lead to breach of license, and could lead to immediate termination of rights. And it was the culmination of these shenanigans that, ultimately, led Elastic to change the license for Elasticsearch and Kibana in January — and it didn’t have to wait long for a response. Just one week later, Amazon revealed that it would begin work on an open source Elasticsearch fork, which would ship under a completely new name, OpenSearch , which eventually went to market in July. “Bringing clarity was a big part of why we made the [license] change — it was just painful,” Banon told VentureBeat. 
“The problem was that lots of customers don’t necessarily focus on the details and distinctions between Elastic’s Elasticsearch offering and other third-party ‘as-a-service’ offerings.” So while Elastic might push an update or patch out for its Elasticsearch, that wouldn’t necessarily find its way into the third-party offering immediately. By forcing a clearer distinction between the two versions of Elasticsearch, Elastic was safeguarding its brand and reputation. “Amazon used the Apache 2.0 (open source) license and provided a service, they are totally entitled to do it — it’s perfectly fine and legal,” Banon added. “There were a few things that we did have a problem with that extended beyond the usage of the software. Calling the service ‘Amazon Elasticsearch Service’ — you go to any trademark lawyer, they’ll tell you that’s trademark infringement, and that created confusion in the market. Especially as Amazon and AWS grew more, [the confusion] just became massive. And that’s problematic.” But if it was just a case of trademark infringement, couldn’t Elastic just tell Amazon not to use the Elasticsearch name in its own service? That was, in fact, a recourse the company was actively pursuing — Elastic filed a trademark infringement lawsuit against Amazon back in 2019. But the problem was, lawsuits take a long time to resolve, consuming significant resources in the process; changing the license was a way to speed things up, according to Banon, and get Amazon to stop using the Elasticsearch brand in its own product offering. “The legal process was dragging its feet — to be honest, I was really frustrated with the progress,” Banon said. “The wheels of justice will take their turn, and they’ll happen, but at the end of the day, we had users that were confused — users and customers sometimes that didn’t know which one [Amazon’s Elasticsearch or Elastic’s Elasticsearch] is which, which features are being used where. They’d go to the Amazon Elasticsearch Service and think that it was something that we back.” Believe it or not, this is a fairly common problem in the open source sphere — PrestoDB fork PrestoSQL was forced to change its name to Trino last year after Facebook asserted trademark ownership over the “Presto” name. And just last month, livestreaming software provider Streamlabs OBS had to drop “OBS” from its name after it was called out by the open source OBS project on which it is built. Ultimately, it was all about avoiding brand confusion, with the OBS project’s Twitter account revealing that some of its support volunteers had encountered “angry users” seeking refunds when it was actually the commercial Streamlabs product they had paid for. We’re often faced with confused users and even companies who do not understand the difference between the two apps. Support volunteers are sometimes met with angry users demanding refunds. We've had interactions with several companies who did not realize our apps were separate. — OBS (@OBSProject) November 17, 2021 The principles of open source There were few surprises when Amazon announced its plans for the Elasticsearch fork — “we totally expected it to happen,” Banon said — but Elastic had already bolstered its commercial offering to protect it against any future open source kerfuffles. This mostly involved investing in its proprietary IP, such as its security and application performance management (APM) capabilities, which it had already released under a “free and open” license, rather than an OSI-approved Apache license. 
Put simply, customers weren’t necessarily using Elastic because of its open source license, which is why its revenues have continued to grow in the 12 months since it made the switch. “I think that people were engaging with Elastic because of the quality of the products, and the quality of the community that we built around Elastic, not around one permissive license, to be completely honest,” Banon said. It is worth noting, however, that some companies other than AWS did decide to ditch Elasticsearch, including CrateDB developer Crate.io , which revealed in February that it was transitioning from Elasticsearch to a “fully open source fork of Elasticsearch.” Whenever any company switches its open source license, it nearly always riles at least some in the community. But Banon said that despite some of the naysaying, he’s noticed no real impact from a business or community perspective. “I think the vast majority of the open source community were perfectly fine with our change,” Banon said. “[And] from the metrics that we track, like number of downloads, meetups, community engagement, things along those lines, everything remains the same — and it’s actually going up.” This gets to the heart of what Banon said is the most important principle of open source — it’s not about the license. “As a company, we never treated open source as a business model — open source is not a business model,” he said. “The first principle of open source is around engaging on GitHub, for example — you use open source to engage with the community, you use open source as a way to create communities, you use open source to collaborate with people.” To the casual observer, it might appear that Elastic is against any third-party offering Elasticsearch “as-a-service,” but that isn’t the case. Other companies, including Google and Alibaba , already offer Elasticsearch-as-a-service in direct partnership with Elastic. “If you take the software and provide it as a service, then I think it’s healthy for both of us to have skin in the game,” Banon said. “That means that when we fix a vulnerability, which has huge implications if you’re providing it ‘as-a-service’, we’ll reach out to you and work together with a vendor to do that. That’s so easy to do, because SaaS is the tide that lifts all boats.” So does Banon care at all that its core product is no longer “open source” in the purest sense of the word? “I don’t think it matters — ‘free and open’, I’m fine with that,” he said. “These things can be so distracting, and then you end up losing the things that really matter. Are we still engaging with our community the same way? Are we still engaging with them on GitHub? If these things are still true, then I’m perfectly fine.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15933
2020
"How Hubilo pivoted from physical to virtual events in 20 days (and nabbed big-name backers) | VentureBeat"
"https://venturebeat.com/2020/10/26/how-hubilo-pivoted-to-virtual-events-during-the-pandemic-and-nabbed-big-name-investors"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Feature How Hubilo pivoted from physical to virtual events in 20 days (and nabbed big-name backers) Share on Facebook Share on X Share on LinkedIn Hubilo cofounder and CEO Vaibhav Jain (center), flanked by CTO Mayank Agarwal (right) and product manager John Peter (left) Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. While some industries have enjoyed a boom during global lockdowns, others have floundered. Ecommerce and telehealth , for example, were well-positioned to thrive during the pandemic, whereas brick-and-mortar stores and businesses built around conferences, concerts, and other real-world events have teetered on the brink of collapse , or even toppled over the edge. One event tech startup decided to pivot from offline to online events shortly after lockdowns kicked in, retooling itself from the ground up and reemerging with a minimal viable product (MVP) after a dizzying 20-day development period. Through salary deferments and mandatory weekend work, Hubilo not only survived, but went on to quadruple its headcount and hit its two-year revenue target in just six months. Off the back of its pivot, which Hubilo said has led to a nearly $10 million run rate for bookings, the company revealed it has raised $4.5 million in a seed round of funding. The round was led by Lightspeed Venture Partners, the Menlo Park-based venture capital firm behind a number of notable startups, including Snap, Grubhub, AppDynamics, and Mulesoft. Angel backers include existing investor Girish Mathrubootham, who is also the cofounder and CEO of Alphabet-backed customer service software giant Freshworks , and SlideShare cofounder Jonathan Boutelle. Founded in 2015 in Ahmedabad, the largest city in the Indian state of Gujarat, Hubilo launched as an event management software company with tools to help companies run events. It enabled customers to set up event websites and dedicated mobile apps for navigating, networking, and scheduling meetings, along with offering tools for managing registrations and ticketing. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Over its four-year history, Hubilo had become particularly adept at supporting large-scale events hosted by corporations and governments. But these functions were early casualties of the pandemic. 
“Those were the first ones to pull the plug for physical events — by February, we knew that all physical events globally were getting postponed or canceled,” Hubilo CEO Vaibhav Jain told VentureBeat. A Hail Mary pivot Hubilo initially agreed to extend its events contracts by six months at no extra cost in hopes of weathering the storm — but this wasn’t enough to retain the majority of its clients, who canceled their contracts or suspended them indefinitely. In February, Hubilo netted a grand total of zero dollars in revenue, and with cash reserves to last just three months and 30 employees on its books, the company had to make some tough decisions. “We had to either shut down the entire business or come up with a Hail Mary,” Jain said. “It was not an easy decision at all, but we have always loved being in the events business and saw this as an opportunity to reinvent ourselves and go for broke.” Any pivot carries risks, but Hubilo didn’t have much choice. And it wasn’t exiting the events business, just changing the way it helped companies deliver events. It was also in a strong position compared to newer entrants, as it had an existing customer base it could garner feedback from, and hopefully transition to its new product. “At that time, there was nothing much on the internet that allowed organizers to host events virtually, except a few webinar-based platforms,” Jain continued. “To test our idea, we sent out a mailer to our blog subscription list and got a very healthy reply rate, stating that they would be interested in hosting virtual events with us.” To see it through its pivot, Hubilo had to reduce costs by 60%, with employees taking a 30% salary cut, a figure that rose to 70% for senior leadership. But then came the serious spadework: how to turn an offline events platform into one capable of hosting events remotely? Timing was crucial, as many companies were looking to move events online and a number of newer players were emerging. “I gave my team a very short window of 20 days to come up with an MVP virtual event platform, as we did not want to enter the market very late,” Jain said. The original Hubilo platform had a web-based networking tool for event attendees. According to Jain, customers had rarely used this feature, but it proved useful for Hubilo’s pivot. “We used this as our base and started with a simple Zoom integration, wherein an event with multiple sessions could use us instead of sharing multiple Zoom links with attendees,” Jain added. Hubilo hosted its first virtual event on March 16, 2020. And while the conference didn’t reap huge financial rewards, it served as the bedrock for Hubilo’s April launch. “Post-beta, we got a lot of feature requests, and we just kept building them at supersonic speed,” Jain said. Turning around a product so quickly required sacrifices beyond salary cuts. For the first 90 days, the entire company worked every day, according to Jain. This helped reduce Hubilo’s typical sprint cycle from two weeks to just five days as feature requests came in. Eventually, the company managed to give the whole team back pay, according to Jain. “We came up with blogs, landing pages, sales collateral, product decks and videos, and support articles while the technology team was building the product,” he said. “We were ranking high on our SEO keywords, as we were quite early in this space compared to other competitors.” Hubilo was well-positioned because it acted swiftly in the early days of the pandemic. 
It hadn’t had to make huge layoffs, which meant it had a workforce that could make the transition from its original offline-focused platform to the new virtual incarnation. Once the virtual events business began to gain momentum, the company had to actually grow its engineering team to build the platform out. At its pivot in March, Hubilo had around 30 employees, a figure that grew to 40 by the time it signed its term sheet with Lightspeed in mid-August. In the two months since, Hubilo has added another 80 people to its global headcount. So, could this forced pivot have been a blessing in disguise for Hubilo? “Yes, we were able to achieve our two-year revenue target in six months,” Jain said. “Though the revenue ramp-up has been exciting, what drives us the most is this opportunity to redefine and architect the marketing landscape that will emerge around virtual events and unlock massive value for key stakeholders, such as CMOs, event organizers, and sponsors.” Engagement Hubilo features several tools designed to replicate real-world event scenarios, including live sessions, breakout rooms, and virtual exhibitor booths. Above: Hubilo: Virtual rooms But the company hopes to stand out by employing gamification, making events more engaging for attendees who may be tuning in from their kitchen or garage. Its offering includes live polls and short quizzes, as well as a leaderboard to encourage competition. According to Jain, the leaderboard is Hubilo’s most-used feature. Attendees can gain points by completing various “engagement” actions within the platform, such as watching a session, visiting a virtual booth, or messaging a fellow delegate. Attendees are given the rules in advance, and the most-engaged attendees can win goodie bags. “We have seen organizers giving away free memberships, MacBooks, iPhones, and other stuff as part of the gifts for top engaged people,” Jain said. Above: Hubilo: A leaderboard mockup What could potentially make Hubilo and its kind indispensable for events organizers, marketers, and sales professionals is that it generates a wealth of measurable data compared to its offline counterpart. “One of the holy grails in event marketing has been to track how attendees experience the event and what exact trigger leads to a preferred outcome, such as a sale, a media mention, a new connection made, and so on,” Lightspeed partner Hemant Mohapatra told VentureBeat. “Just like how offline media spend was digitized over the web and social media, we believe that much of the event marketing, branding, and entertainment budget will be brought online in the next 10 years once people see how much more [powerfully] and easily the return-on-investment for these activities can be tracked, measured, and attributed online.” Hubilo said it has already attracted some notable names across a range of industries. Clients include the United Nations, Roche, Fortune, and Dubai-based consumer tech trade show Gitex. Competitive landscape At least three new virtual event startups have come to fruition over the past year or so, including Mountain View, California-based Run The World, which raised a $10.8 million series A round in the midst of the pandemic; London-based Hopin, which announced a $40 million series A round in June; and India’s Airmeet, which closed a $12 million series A round last month. 
Between them, they managed to attract some of the biggest names in the VC world, including Andreessen Horowitz, Founders Fund, Will Smith’s Dreamers Fund, Salesforce Ventures, Accel, IVP, Slack Fund, Sequoia, and Northzone. Virtual events might look like a temporary response to lockdowns, but it’s clear that some of the biggest movers and shakers in the technology and investment sphere disagree. According to Mohapatra, Lightspeed’s position as a global franchise puts it in a position to observe other markets, which played a big part in its decision to invest in Hubilo. “Lightspeed had the unique advantage of being able to peek into the future by observing what was happening in China post-COVID-lockdown through our Lightspeed China team,” Mohapatra said. “[It] turns out that even after many lockdowns have lifted, over 70-80% of corporate events stayed online. Not only that, once companies and organizers realized just how convenient and profitable online events are, many started to have a lot more events online or running both offline and online events in parallel — a new format called ‘ hybrid events. '” While it’s difficult for event organizers to charge as much for online events as they do for their offline counterparts, there’s no ceiling on the number of attendees for virtual events, so any shortfalls can be recouped. “This additional volume of attendees is far more important to organizers, due to sponsorship and brand opportunities rather than ticket sales,” Mohapatra added. London-based VC firm Northzone has invested in Hubilo rival Hopin twice in the past year, both at its series A round four months ago and at the $6.5 million seed round announced in February. Although that seed round closed just as the pandemic was taking hold, Northzone general partner Paul Murphy told VentureBeat that the deal was already pretty much concluded in November 2019. “There has long been a need for a sustainable and efficient solution to attending events and — on a bigger scale — running a global workforce more flexibly,” Murphy said. “A remote event solution not only presents obvious benefits from a time and carbon perspective, but it also democratizes access to the content and networking one receives from these events. For these reasons, Northzone is extremely bullish on this space and has been looking for a bet to make here for some time.” As more companies commit to remote working , virtual event platforms can also be leveraged to connect employees. Platforms like Hopin and Hubilo can be used for just about any virtual meetup, and companies and investors are waking up to this reality. Zoom recently launched an integrated platform for online classes and events, while Los Angeles-based Wave secured $30 million in funding to help artists stage live concerts virtually. And Tel Aviv-based Strigo raised $8 million for a platform that helps companies deliver software training to their clients remotely. “Hopin can be the answer for a distributed workforce that needs to plan a company offsite; the event organizer that needs to coordinate a conference with stages, workshops, and networking; or a subject matter expert that wants to generate revenue by hosting a high-quality, paid experience for those interested in learning,” Murphy said. “Every organizer we spoke to while doing research in this space recognized the future was digital. So there is a very big prize for whoever wins.” All of this suggests virtual events are here to stay. But that doesn’t mean physical events won’t return at some point. 
Before this turbulent year, the global business events industry was pegged at around $1.5 trillion , while recent figures from Grand View Research predict the virtual events industry will grow from $78 billion to nearly $780 billion over the next decade. How these dollars will be split between the physical and virtual worlds 10 years from now isn’t clear. But Hubilo is preparing for a hybrid world where data plays an integral part in bridging the offline and online divide. “Virtual events are a new marketing stack wherein marketers will be able to engage their customers or audiences in a far more targeted way,” Jain said. “We have seen the power it has added to organizations in terms of collecting intelligent data, which wasn’t possible in an offline engagement (e.g., who spoke to whom at an event leading to what outcome in sales or brand outreach, and so on). When the world returns to normality, we believe the online world will live alongside the offline. Hubilo will build integrations that allow event managers to collect the offline data, pair it with online engagement data, and see it within a single pane of glass to make the best marketing decisions using Hubilo.” Remote control For Hubilo, another challenge in building a platform during lockdown was the fact that employees had to get used to working virtually too. “We built, sold, marketed, and supported the product remotely since we started working on the product,” Jain said. Hubilo no longer has an official headquarters, instead operating remotely across the U.S. and India. But when restrictions start to ease, the company plans to transition its formal HQ to San Francisco, where it is already making hires across leadership, design, and product departments. The company will also retain a significant team in India, spanning engineering, support, account management, and sales. While Hubilo has for now elected to keep its old name in order to reap the SEO benefits, the company confirmed that it will eventually showcase its pivot by way of a total rebrand, complete with a new name. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15934
2021
"Hubilo raises $23.5 million to power virtual events with real-time data and analytics | VentureBeat"
"https://venturebeat.com/2021/02/24/hubilo-raises-23-5-million-to-power-virtual-events-with-real-time-data-and-analytics"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Hubilo raises $23.5 million to power virtual events with real-time data and analytics Share on Facebook Share on X Share on LinkedIn Hubilo founders: Mayank Agarwal (CTO) and Vaibhav Jain (CEO) Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Hubilo , a platform that helps businesses of all sizes host virtual and hybrid events and gain access to real-time data and analytics, has raised $23.5 million in a series A round led by Lightspeed Venture Partners. The round saw participation from the U.K.’s Balderton Capital and Microsoft chair John W. Thompson, among other angel investors. The investment comes as people around the world prepare for a semblance of normality after a year of pandemic-induced social distancing. But while physical events will likely return in some form in the next year, online events are widely expected to stay — either exclusively or as part of a hybrid format. Digital conferences, meetups, and events can be scaled far more easily and with fewer resources than their brick-and-mortar counterparts, and they also generate a lot of data that can prove valuable for tracking and correlating business objectives. A slew of virtual events platforms raised substantial sums of cash from big-name investors last year. One of those was Hubilo — an Indian startup that was founded back in 2015 as an offline event management software company that offered tools to set up event websites and manage tickets and registration, along with dedicated mobile apps for navigating, networking, and scheduling meetings. Hit by the pandemic’s impact on the physical events industry, in March Hubilo embarked on a dizzying 20-day pivot to online events that saved it from oblivion. Over the rest of the year, Hubilo grew from 30 employees to more than 100, hit its two-year revenue target in months, and secured $4.5 million in a seed round of funding led by the Indian investment arm of Lightspeed Venture Partners. Today Hubilo says 40% of its clients are U.S.-based enterprises, including Amazon Web Services (AWS), with big-name clients from elsewhere including Siemens, Roche, and the United Nations. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Above: Hubilo: Breakout room The main event Hubilo offers many of the tools you would expect from an online events platform, including video livestreaming and breakout rooms. But the platform’s core promise to companies considering virtual or hybrid events is enhanced data and measurability. 
For example, organizers can access engagement data around visitors, including the number of logins and new users versus active users. And sponsors can determine whether a visitor is likely to purchase from them based on engagement with their virtual booth. Data includes the number of business cards received, profile views, file downloads, and more. Hubilo can also track visitors’ activities, such as attending a booth or participating in a video demonstration, and then recommend similar activities. This tracking is where things get interesting from a business perspective, as sponsors or sales personnel can access potential prospects through a feature Hubilo calls “potential leads.” “The potential leads feature in Hubilo provides the sponsors and exhibitors with hot leads based on the activities performed by attendees,” Hubilo cofounder and CEO Vaibhav Jain told VentureBeat. “This enables the key stakeholders to generate maximum return on investment from the event and also match with the right people. They can then connect with these attendees via one-to-one video meetings and exchange their contact information.” Elsewhere, speakers can glean insights into the quality of their sessions in terms of views, interactions, and ratings and dig down into the popularity of each breakout room, including average session lengths, number of participants, raised hands, and more. Above: Hubilo: Data on speakers Above: Hubilo: Data on breakout rooms Online events also allow businesses and conference organizers to encourage engagement through gamification, something Hubilo enables by giving out points to attendees based on sessions watched or visits to a virtual booth, with the most highly engaged participants able to win prizes. Above: Hubilo: Gamification can help engagement Hubilo can integrate with an array of enterprise tools spanning CRM, marketing, video, and event tech. These include Salesforce, Zapier, Hubspot, Mailchimp, Marketo, Zoom, Cisco’s Webex, and YouTube. And the company has bigger plans on the analytics and data integration front, including building out native integrations with CRMs and other tools, introducing multi-event analytics, and bringing sponsored ads to the mix. “Native integrations with CRMs and other applications will enable seamless transfer of data between Hubilo and other enterprise systems, [while] multi-event analytics will help enterprises efficiently leverage event data from multiple events and retrieve holistic insights from them,” Jain said. “Sponsored ads will provide new avenues for event organizers to monetize their event and will help sponsors and exhibitors get more visibility and generate additional leads.” Hubilo was founded six years ago out of Ahmedabad, the largest city in the Indian state of Gujarat, but in the wake of the global pandemic the company ditched its HQ, embracing a remote workforce distributed across India and the U.S. However, the company said it’s gearing up to launch a new HQ in San Francisco this spring to support its burgeoning U.S. growth. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
"
15935
2021
"FanGraphs' advanced baseball analytics has a new cloud home: MariaDB | VentureBeat"
"https://venturebeat.com/2021/04/01/fangraphs-advanced-baseball-analytics-has-a-new-cloud-home-mariadb"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages FanGraphs’ advanced baseball analytics has a new cloud home: MariaDB Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. With the 2021 Major League Baseball season opening today, fans will be filling out their scorecards as they return to stadiums for the first time since the COVID-19 pandemic took hold last spring. Of course, the data that is now regularly made available by the MLB goes well beyond the hits, runs, and errors fans typically record in a scorecard they purchase at a game. MLB has made the Statcast tool available since 2015. It analyzes player movements and athletic abilities. The Hawk-Eye service uses cameras installed at ballparks to provide access to instant video replays. Fans now regularly consult a raft of online sites that uses this data to analyze almost every aspect of baseball: top pitching prospects, players who hit the most consistently in a particular ballpark during a specific time of day, and so on. One of those sites is FanGraphs , which has transitioned the SQL relational database platform it relies on to process and analyze structured data to a curated instance of the open source MariaDB database that has been deployed on the Google Cloud Platform (GCP) as part of a MariaDB Sky cloud service. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! MariaDB provides IT organizations with an alternative to the open source MySQL database Oracle gained control over when it acquired Sun Microsystems in 2009. MariaDB is a fork of the MySQL database that is now managed under the auspices of a MariaDB Foundation that counts Microsoft, Alibaba, Tencent, ServiceNow, and IBM among its sponsors, alongside MariaDB itself. FanGraphs uses the data it collects to enable its editorial teams to deliver articles and podcasts that project, for example, playoff odds for a team based on the results of the SQL queries the company crafts. These insights might be of particular interest to a baseball fan participating in a fantasy league, someone who wants to place a more informed wager on a game at a venue where gambling is, hopefully, legalized, or those making baseball video games. The decision to move from MySQL to MariaDB running on GCP was made after a few false starts involving attempts to lift and shift the company’s MySQL database instance into the cloud, FanGraphs CEO David Appelman said. 
Two things attracted FanGraphs to MariaDB, Appelman said: the level of performance it could attain on a database-as-a-service (DBaaS) platform built around MariaDB, and access to a columnstore storage engine that might one day be used to drive additional analytics. In addition, MariaDB now manages the underlying database FanGraphs uses. Appelman said he previously handled most of the IT functions for FanGraphs, including the crafting of SQL queries; now he will have more time to write queries and monitor the impact they have on the overall performance of the database. “I like to see where the bottlenecks created by a SQL query are,” he added. FanGraphs plans to eventually take advantage of the data warehouse service provided by MariaDB, Appelman noted. It’s unlikely the analytics capabilities provided by FanGraphs and similar sites will ever be able to predict which baseball team will win on any given day. However, the insights they surface do make the current generation of baseball fans far more informed about the nuances of the game than Abner Doubleday probably could have imagined. "
15936
2021
"EA acquires Super Mega Baseball dev Metalhead | VentureBeat"
"https://venturebeat.com/2021/05/05/ea-acquires-super-mega-baseball-dev-metalhead"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages EA acquires Super Mega Baseball dev Metalhead Share on Facebook Share on X Share on LinkedIn Super Mega Baseball. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Electronic Arts announced today that it has acquired Metalhead Software, the studio behind the Super Mega Baseball series. “EA Sports and Metalhead are teaming up to grow the Super Mega Baseball franchise as well as develop new gaming and sports experiences for players worldwide,” EA notes in a press release. EA Sports has giant hits with the Madden and FIFA franchises, but it hasn’t released an MLB game since its MVP Baseball series ended in 2005. The first Super Mega Baseball debuted in 2014. The latest entry, Super Mega Baseball 3, came out in 2020. The series has received praise from fans and critics, but it does not have an MLB license, so it can’t use MLB teams or players. That could change now that the series has EA’s backing. Right now, Sony’s MLB The Show franchise dominates baseball gaming. Its latest entry, MLB The Show, came out on April 20 and is the first game in the series to also launch on Xbox. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. Join the GamesBeat community! Enjoy access to special events, private newsletters and more. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15937
2021
"Graphs as a foundational technology stack: Analytics, AI, and hardware | VentureBeat"
"https://venturebeat.com/2021/05/28/graphs-as-a-foundational-technology-stack-analytics-ai-and-hardware/'"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Graphs as a foundational technology stack: Analytics, AI, and hardware Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. How would you feel if you saw demand for your favorite topic — which also happens to be your line of business — grow 1,000% in just two years’ time? Vindicated, overjoyed, and a bit overstretched in trying to keep up with demand, probably. Although Emil Eifrem never used those exact words when we discussed the past, present, and future of graphs, that’s a reasonable projection to make. Eifrem is chief executive officer and cofounder of Neo4j , a graph database company that claims to have popularized the term “ graph database ” and to be the leader in the graph database category. Eifrem and Neo4j’s story and insights are interesting because through them we can trace what is shaping up to be a foundational technology stack for the 2020s and beyond: graphs. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Graph analytics and graph databases Eifrem cofounded Neo4j in 2007 after he stumbled upon the applicability of graphs in applications with highly interconnected data. His initiation came by working as a software architect on an enterprise content management solution. Trying to model and apply connections between items, actors, and groups using a relational database ended up taking half of the team’s time. That was when Eifrem realized that they were trying to fit a square peg in a round hole. He thought there’s got to be a better way, and set out to make it happen. When we spoke for the first time in 2017 , Eifrem had been singing the “graphs are foundational, graphs are everywhere” tune for a while. He still is, but things are different today. What was then an early adopter game has snowballed to the mainstream today, and it’s still growing. “Graph Relates Everything” is how Gartner put it when including graphs in its top 10 data and analytics technology trends for 2021. At Gartner’s recent Data & Analytics Summit 2021, graph also was front and center. Interest is expanding as graph data takes on a role in master data management, tracking laundered money, connecting Facebook friends, and powering the search page ranker in a dominant search engine. Panama Papers researchers, NASA engineers, and Fortune 500 leaders: They all use graphs. According to Eifrem, Gartner analysts are seeing explosive growth in demand for graph. 
Back in 2018, about 5% of Gartner’s inquiries on AI and machine learning were about graphs. In 2019, that jumped to 20%. From 2020 until today, 50% of inquiries are about graphs. AI and machine learning are in extremely high demand, and graph is among the hottest topics in this domain. But the concept dates back to the 18th century, when Leonhard Euler laid the foundation of graph theory. Euler was a Swiss scientist and engineer whose solution to the Seven Bridges of Königsberg problem essentially invented graph theory. What Euler did was to model the bridges and the paths connecting them as nodes and edges in a graph. That formed the basis for many graph algorithms that can tackle real-world problems. Google’s PageRank is probably the best-known graph algorithm, helping score web page authority. Other graph algorithms are applied to use cases including recommendations, fraud detection, network analysis, and natural language processing, constituting the domain of graph analytics. Graph databases also serve a variety of use cases, both operational and analytical. A key advantage they have over other databases is their ability to model intuitively and execute quickly data models and queries for highly interconnected domains. That’s pretty important in an increasingly interconnected world, Eifrem argues: When we first went to market, supply chain was not a use case for us. The average manufacturing company would have a supply chain two to three levels deep. You can store that in a relational database; it’s doable with a few hops [or degrees of separation]. Fast-forward to today, and any company that ships stuff taps into this global fine-grained mesh, spanning continent to continent. All of a sudden, a ship blocks the Suez Canal, and then you have to figure out how that affects your business. The only way you can do that is by digitizing it, and then you can reason about it and do cascading effects. In 2021, you’re no longer talking about two to three hops. You’re talking about supply chains that are 20, 30 levels deep. That requires using a graph database — it’s an example of this wind behind our back. Knowledge graphs, graph data science, and machine learning The graph database category is actually a fragmented one. Although they did not always go by that name, graph databases have existed for a long time. An early branch of graph databases are RDF databases, based on Semantic Web technology and dating back about 20 years. Crawling and categorizing content on the web is a very hard problem to solve without semantics and metadata. This is why Google adopted the technology in 2010, by acquiring MetaWeb. What we get by connecting data, and adding semantics to information, is an interconnected network that is more than the sum of its parts. This graph-shaped amalgamation of data points, relationships, metadata, and meaning is what we call a knowledge graph. Google introduced the term in 2012, and it’s now used far and wide. Knowledge graph use cases are booming. Reaching peak attention in Gartner’s hype cycle for AI in 2020, applications are trickling down from the Googles and Facebooks of the world to mid-market companies and beyond. Typical use cases include data integration and virtualization, data mesh, catalogs, metadata, and knowledge management, as well as discovery and exploration. But there’s another use of graphs that is blossoming: graph data science and machine learning. 
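Eifrem’s supply chain example can be made concrete before turning to data science. The sketch below uses the official Neo4j Python driver against a hypothetical schema (Company nodes linked by SUPPLIES relationships, an assumption made purely for illustration) to pull every supplier that feeds a given manufacturer, up to 20 levels deep:

```python
# Minimal sketch: walk a supply chain of arbitrary depth with one Cypher query.
# The Company/SUPPLIES schema and the connection details are illustrative only.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"))

query = """
MATCH (s:Company)-[:SUPPLIES*1..20]->(m:Company {name: $name})
RETURN DISTINCT s.name AS supplier
"""

with driver.session() as session:
    for record in session.run(query, name="Acme Manufacturing"):
        print(record["supplier"])

driver.close()
```

Expressing the same traversal against a relational schema would mean a self-join or recursive query per level of depth, which is the square-peg problem Eifrem describes.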
We have connected data, and we want to store it in a graph, so graph data science and graph analytics is the natural next step, said Alicia Frame, Neo4j graph data science director. “Once you’ve got your data in the database, you can start looking for what you know is there, so that’s your knowledge graph use case,” Frame said. “I can start writing queries to find what I know is in there, to find the patterns that I’m looking for. That’s where data scientists get started — I’ve got connected data, I want to store it in the right shape. “But then the natural progression from there is I can’t possibly write every query under the sun. I don’t know what I don’t know. I don’t necessarily know what I’m looking for, and I can’t manually sift through billions of nodes. So, you want to start applying machine learning to find patterns, anomalies, and trends.” As Frame pointed out, graph machine learning is a booming subdomain of AI, with cutting edge research and applications. Graph neural networks operate on graph structures, as opposed to other types of neural networks that operate on vectors. What this means in practice is that they can leverage additional information. Neo4j was among the first graph databases to expand its offering to data scientists, and Eifrem went as far as to predict that by 2030, every machine learning model will use relationships as a signal. Google started doing this a few years ago , and it’s proven that relationships are strong predictors of behavior. What will naturally happen, Eifrem went on to add, is that machine learning models that use relationships via graphs will outcompete those that don’t. And organizations that use better models will outcompete everyone else — a case of Adam Smith’s “invisible hand.” The four pillars of graph adoption This confluence of graph analytics, graph databases, graph data science, machine learning, and knowledge graphs is what makes graph a foundational technology. It’s what’s driving use cases and adoption across the board, as well as the evolution from databases to platforms that Neo4j also exemplifies. Taking a decade-long view, Eifrem noted, there are four pillars on which this transition is based. The first pillar is the move to the cloud. Though it’s probably never going to be a cloud-only world, we are quickly going from on-premises first to cloud-first to database-as-a-service (DBaaS). Neo4j was among the first graph databases to feature a DBaaS offering, being in the cohort of open source vendors Google partnered with in 2019. It’s going well, and AWS and Azure are next in line, Eifrem said. Other vendors are pursuing similar strategies. The second pillar is the emphasis on developers. This is another well established trend in the industry, and it goes hand-in-hand with open source and cloud. It all comes down to removing friction in trying out and adopting software. Having a version of the software that is free to use means adoption can happen in a bottom-up way, with open source having the added benefit of community. DBaaS means going from test cases to production can happen organically. The third pillar is graph data science. As Frame noted, graph really fills the fundamental requirement of representing data in a faithful way. The real world isn’t rows and columns — it’s connected concepts, and it’s really complex. There’s this extended network topology that data scientists want to reason about, and graph can capture this complexity. So it’s all about removing friction, and the rest will follow. 
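As a toy illustration of relationships becoming a signal, the sketch below derives graph features (degree and PageRank) with the open source networkx library and feeds them into a scikit-learn classifier alongside a conventional attribute. The libraries, the synthetic data, and the labels are all assumptions chosen for illustration; nothing here is prescribed by Neo4j or Eifrem.

```python
# Toy sketch: graph-derived features (degree, PageRank) as extra ML signals.
# The graph, the "account age" attribute, and the labels are all synthetic.
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
G = nx.gnp_random_graph(100, 0.05, seed=42)  # random interaction graph

pagerank = nx.pagerank(G)                    # relationship-derived features
degree = dict(G.degree())
account_age = rng.uniform(0, 10, size=100)   # conventional per-node attribute

# Synthetic target driven by the graph structure, so the graph-derived
# columns carry most of the signal.
labels = (np.array([degree[n] for n in G.nodes]) > 5).astype(int)

X = np.column_stack([
    account_age,
    [pagerank[n] for n in G.nodes],
    [degree[n] for n in G.nodes],
])
model = LogisticRegression().fit(X, labels)
print("training accuracy:", model.score(X, labels))
```

In a production setting, learned graph embeddings or a graph neural network would typically replace these hand-picked features, which is the direction the graph data science tooling described above is heading.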
The fourth pillar is the evolution of the graph model itself. The commercial depth of adoption today, although rapidly growing, is not on par with the benefits that graph can bring in terms of performance and scalability, as well as intuitiveness, flexibility, and agility, Eifrem said. User experience for developers and data scientists alike needs to improve even further, and then graph can be the No. 1 choice for new applications going forward. There are actually many steps being taken in that direction. Some of them may come in the form of acronyms such as GraphQL and GQL. They may seem cryptic, but they’re actually a big deal. GraphQL is a way for front-end and back-end developer teams to meet in the middle, unifying access to databases. GQL is a cross-industry effort to standardize graph query languages , the first one that the ISO adopted in the 30-plus years since SQL was formally standardized. But there’s more — the graph effect actually goes beyond software. In another booming category, AI chips, graph plays an increasingly important role. This is a topic in and of its own, but it’s worth noting how, from ambitious upstarts like Blaize , GraphCore and NeuReality to incumbents like Intel, there is emphasis on leveraging graph structure and properties in hardware, too. For Eifrem, this is a fascinating line of innovation, but like SSDs before it, one that Neo4j will not rush to support until it sees mainstream adoption in datacenters. This may happen sooner rather than later , but Eifrem sees the end game as a generational change in databases. After a long period of stagnation in terms of database innovation, NoSQL opened the gates around a decade ago. Today we have NewSQL and time-series databases. What’s going to happen over the next three to five years, Eifrem predicts, is that a few generational database companies are going to be crowned. There may be two, or five, or seven more per category, but not 20, so we’re due for consolidation. Whether you subscribe to that view, or which vendors to place your bets on, is open for discussion. What seems like a safe bet, however, is the emergence of graph as a foundational technology stack for the 2020s and beyond. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15938
2021
"AI Weekly: Announcing our 'AI and the future of health care' special issue | VentureBeat"
"https://venturebeat.com/2021/01/29/ai-weekly-announcing-our-ai-and-the-future-of-health-care-special-issue"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: Announcing our ‘AI and the future of health care’ special issue Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Artificial intelligence and health care both deal heavily with issues of complexity, efficacy, and societal impact. All of that is multiplied when the two intersect. As health care providers and vendors work to use AI and data to improve patient care, health outcomes, medical research, and more, they face what are now standard AI challenges. Data is difficult and messy. Machine learning models struggle with bias and accuracy. And ethical challenges abound. But there’s a heightened need to solve these problems when they’re couched within the daily life-and-death context of health care. Then, in the midst of the AI’s growth in health care, the pandemic hit, challenging old ways of doing things and pushing systems to their breaking points. In our upcoming special issue, “ AI and the future of health care ,” we examine how providers and vendors are tackling the challenges of this extraordinary time. The biggest hurdle has to do with data. Health care produces massive amounts of data, from electronic health records (EHR) to imaging to information on hospital bed capacity. There’s enormous promise in using that data to create AI models that can improve care and even help cure diseases, but there are barriers to that progress. Privacy concerns top the list, but worldwide health care data also needs standardization. There are still too many errors in this data, and the medical community must address persisting biases before they become even more entrenched. When humans rely on AI to help them make clinical decisions like injury or disease diagnoses, they also have to be aware of their own biases. Because bias exists in the data AI models are built upon, practitioners have to be careful not to fall into the trap of automation bias, relying too much on model output to make decisions. It’s a delicate balance with profound impacts on human health and life. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The pandemic has also challenged the practical day-to-day functions of health care systems. As COVID-19 cases threaten to overwhelm hospitals and patients and doctors risk infection during in-person visits, providers are figuring out how to deliver patient care remotely. 
With more doctors shifting to telemedicine, chatbots and other tools are helping relieve some of the burden and allowing patients to access care from the safety of their own homes. For particularly vulnerable populations, like senior citizens, remote care may be necessary, especially if they’re in locked-down residential facilities or can’t easily get to their doctor. The technologies involved in monitoring such patients include wearables that track vitals and even special wireless tech that offers no-touch, personalized biometric tracking. These are sea changes in health care, and because of the pandemic, they’re coming faster than anyone expected. But a certain optimism persists — a sense that despite unprecedented challenges to the medical field, careful and responsible use of AI can enable permanent, positive changes in the health care system. The astonishing speed with which researchers developed a working COVID-19 vaccine offers ample evidence of the way necessity spurs medical innovation. The best of the technologies, tools, and techniques that health care providers are employing now could soon become standard and lead to more democratized, less expensive, and overall better health care. You can get this special issue delivered straight to your inbox next week by signing up here. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,939
2,021
"Applied AI takes the spotlight at Build 2021 | VentureBeat"
"https://venturebeat.com/2021/05/27/applied-ai-takes-the-spotlight-at-build-2021"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Applied AI takes the spotlight at Build 2021 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Among the 100 updates announced at Microsoft’s developer event Build this week , an area that stood out above the noise was artificial intelligence (AI) and, in particular, Microsoft’s growing push into higher-level services for applied AI and business scenarios. Microsoft has been gradually embedding business logic into more of its AI in an attempt to help enterprises get out of the “pilot purgatory” that has so often characterized AI projects over the past few years. But at Build this year, it took a big leap forward. Let’s take a closer look at the major moves that came out of Build and what they mean for the market. The need for more business-centric AI According to a survey my team conducted , more than 80% of companies are now trialling or putting AI into production in their organizations, up from 55% in 2019, and we’ve seen adoption accelerate dramatically in several narrow areas such as contact centres, chat bots, and fraud detection. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! However, fewer than 20% deploying AI have fully put it into company-wide processes. And at most one in five AI solutions currently becomes operational. In effect, enterprises can’t scale AI, held back by many challenges, including the time it takes for an AI project to bear fruit. Over the past few years, cloud providers have celebrated advancements in what I call general-purpose AI, areas such as speech, language, or image classification, for example. These have no doubt propelled the technology into the limelight, but the progress has been in many ways rather meaningless to the average company, which is often saddled with the complexity of customizing the technology for their particular business purpose. Microsoft responds This is why there’s a growing need for more business-centric AI that clusters general-purpose AI to solve specific business problems. Along with the aim of speeding up business scenarios for developers, this was the prime motive for some of Microsoft’s major AI updates at Build 2021. Microsoft made a wave of announcements focused on Cosmos DB, Azure Machine Learning, and Power Platform, but an area that stood front and center in applied AI was Cognitive Services, Microsoft’s suite of machine learning algorithms and API services that help developers embed AI into their apps. 
Microsoft released into general availability Azure Metrics Advisor, which was announced at Ignite in March 2021. Metrics Advisor ingests time-series data and uses machine learning to proactively monitor metrics to detect anomalies and diagnose issues in business operations in sales or manufacturing processes. It also released Azure Video Analyzer, which brings Live Video Analytics and Video Indexer into a single service to help developers quickly build AI-powered video analytics from both stored and streaming videos. According to Microsoft, the new service will enable business uses in workplace safety, in-store retail, or digital asset management. The new capabilities also complement its Spatial Analysis AI service, launched in 2020. This aggregates information from multiple cameras to assess how many people are in a room and how close together they are to help with social distancing measures. The missing piece of the puzzle Along with its Cognitive Search, Form Recognizer, and Immersive Reader AI services, Microsoft is clearly positioning its AI as a set of turnkey services that offer more built-in business logic and address common business scenarios such as document processing, customer service, and workplace safety. This emphasis has been a missing piece in Microsoft’s AI puzzle, which in the past has largely focused on platform and horizontal AI technologies for developers. Together with its recent Nuance acquisition , the moves form a pattern of concentrating more on business applications for AI and helping developers with limited machine learning experience get over the hurdles of AI and with more task-specific solutions. A warning shot across rivals’ bows What also stands out is that these moves are a strong competitive attack on Amazon Web Services (AWS) and Google Cloud, which have been similarly investing in the area for several years now. After re:Invent in December 2020, I highlighted the steps AWS was taking to expand up the stack into higher-level services and solutions for businesses and vertical markets, including in the fields of business operations, business intelligence, and contact centers. AWS’ headline launches over the past 12 months have targeted precisely this area, most notably solutions for industrial sectors aimed at improving assembly line production, quality management, and remote operations in factories and warehouses. Similarly, Google Cloud has released several business solutions in its AI portfolio over the past few years targeted at contact centers, document understanding, demand forecasting, and product recommendations and search, among others. These have become critical spearheads in a committed vertical strategy that has emerged under CEO Thomas Kurian. Accelerating along the path to business value For machine learning to reach its potential in the enterprise market, it needs to be far more pervasive among businesses and users who have little to no expertise with the technology. It’s this gap in applied and business-centric AI that growing competition between Microsoft, AWS, and Google Cloud is starting to bridge. One of the hallmarks of the pandemic is that AI is now no longer viewed as an experimental, longer-term source of innovation for companies; rather, it’s a technology that can deliver quick transformational and business value. But companies can no longer afford to have investments tied up in longer-term AI projects and proofs of concept that yield limited business value as many did in 2019. 
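To ground the kind of monitoring a service like the Metrics Advisor described above automates, here is a deliberately small sketch. It is not Microsoft's implementation or SDK, just a rolling z-score check over an invented daily sales series; managed services apply far more sophisticated models, across many metrics, and add diagnosis and alerting on top.

```python
# Hypothetical illustration only: flag a day whose value deviates sharply from
# the recent baseline, the basic idea behind metric anomaly detection services.
import pandas as pd

daily_sales = pd.Series(
    [102, 98, 105, 101, 99, 103, 97, 100, 104, 55, 101, 99],  # the 10th day dips sharply
    index=pd.date_range("2021-05-01", periods=12, freq="D"),
)

window = 7
rolling_mean = daily_sales.rolling(window).mean().shift(1)  # baseline from prior days only
rolling_std = daily_sales.rolling(window).std().shift(1)
z_scores = (daily_sales - rolling_mean) / rolling_std

anomalies = daily_sales[z_scores.abs() > 3]
print(anomalies)  # flags 2021-05-10 for a human (or an alerting workflow) to investigate
```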
Microsoft is tapping into this trend and facing the competition head on as it continues to answer the challenges of the developer community with its portfolio of AI developer tools. It will be fascinating to see how the market responds to the announcements from Build 2021 in the coming months. Nick McQuire is Chief of Enterprise Research at CCS Insight. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,940
2,021
"How SambaNova Systems is tackling dataflow-as-a-service | VentureBeat"
"https://venturebeat.com/2021/08/11/how-sambanova-systems-is-tackling-dataflow-as-a-service"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How SambaNova Systems is tackling dataflow-as-a-service Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. SambaNova Systems , winner of VentureBeat’s 2021 Innovation in Edge award , is a significant contender in the global edge computing market. The startup raised $676 million in April 2021 and is moving from its origins as an AI-specific chip company to one that provides comprehensive dataflow-as-a-service to its clients. Research firm MarketsandMarkets forecasts an impressive 34% compounded annual growth rate for the market and anticipates its value reaching $15.7 billion by 2025. Genesis in software-driven hardware Traditional central processing units (CPUs) and graphics processing units (GPUs) are based on transactional processing, which needs accuracies to the nth degree for computations. As a result, related chip substrates are made for endless caches and stores of data. AI is more about training models with datasets and handling neural networks. Traditional chip substrates are insufficient for AI, SambaNova VP Marshall Choy told VentureBeat. “AI is really probabilistic computing rather than deterministic. What you actually need is greater flexibility and performance to run these workloads,” Choy said. “Traditional chip companies start with a chip and hope that software will automatically be generated by the ecosystem,” Choy added. “That’s a very hopeful but unrealistic approach.” SambaNova’s software-driven chip design and solutions development turns that theory on its head, with a focus on AI computing needs and eliminating redundancies. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Stanford University professors Kunle Olukotun and Chris Ré, who are also cofounders of SambaNova, helped shape the startup’s initial focus on developing a software-defined chip led by a software stack. The main attraction is the Cardinal SN10 reconfigurable dataflow unit (RDU), which facilitates continuous learning at the edge, saving days, if not weeks, spent reconfiguring and retraining models when new datasets are introduced. Eight Cardinal chips, packaged with an AMD processor and 12 terabytes of DDR4 memory constitute SambaNova’s DataScale SN10-8R, the startup’s primary hardware offering. Dataflow-as-a-service SambaNova’s move into dataflow-as-a-service stemmed from customer demands, although the startup does not share its client list. 
“Customers told us they wanted to focus on business outcomes and objectives, not on integrating infrastructure and building large data science teams to deal with AI model selection, optimization, tuning, and maintenance,” Choy said. And so SambaNova has reinvented itself as a company that also offers machine learning services. It offloads the complexity of machine learning by augmenting customer expertise and attends to model selection, training (with customer datasets), maintenance, and other data services so customers can focus on outcomes and business value, Choy said. One implementation for the manufacturing industry involves defect detection. SambaNova has trained AI models on the highest-resolution images at scale to deliver overall higher model quality, Choy said. “We provide best-in-class accuracy while eliminating the need for additional, labor-intensive hand-labeling of images without downsizing image resolution,” he explained. The end result is a model that is low-effort for users and offers high image fidelity so it can recognize defects easily, he said. The company prepared early for the global chip shortage that has adversely affected the hardware industry, Choy said. SambaNova continues to rely on internal supply chain experts to navigate the crisis. SambaNova is currently focused on amping up its dataflow-as-a-service offering, as well as the DataScale system. “We’re at a transformation in computing with AI,” Choy said. “This is not a hardware problem. This is not a software problem. It’s a complete technology stack problem to be solved for. We’re building complete technology stacks to deliver services and products to solve customers’ pressing needs.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,941
2,021
"SambaNova Systems releases enterprise-grade GPT AI-powered language model | VentureBeat"
"https://venturebeat.com/2021/10/19/sambanova-systems-releases-enterprise-grade-gpt-ai-powered-language-model"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages SambaNova Systems releases enterprise-grade GPT AI-powered language model Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. SambaNova Systems , a company that builds advanced software, hardware, and services to run AI applications, announced the addition of the Generative Pre-trained Transformer (GPT) language model to its Dataflow-as-a-Service™ offering. This will enable greater enterprise adoption of AI, allowing organizations to launch their customized language model in much less time — less than one month, compared to nine months or a year. “Customers face many challenges with implementing large language models, including the complexity and cost,” said R “Ray” Wang, founder and principal analyst of Constellation Research. “Leading companies seek to make AI more accessible by bringing unique large language model capabilities and automating out the need for expertise in ML models and infrastructure.” Natural language processing The addition of GPT to SambaNova’s Dataflow-as-a-Service increases its Natural Language Processing (NLP) capabilities for the production and deployment of language models. This model uses deep learning to produce human-like text for leveraging large amounts of data. The extensible AI services platform is powered by DataScale®, an integrated software, and hardware system using Reconfigurable Dataflow Architecture™, as well as open standards and user interfaces. OpenAI’s GPT-3 language model also uses deep learning to produce human-like text, much like a more advanced autocomplete program. However, its long waitlist limits the availability of this technology to a few organizations. SambaNova’s model is the first enterprise-grade AI language model designed for use in most business and text- and document-based use cases. Enterprises can use its low-code API interface to quickly, easily, and cost-effectively deploy NLP solutions at scale. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Enterprises are insistent about exploring AI usage for text and language purposes, but up until now it hasn’t been accessible or easy to deploy at scale,” said Rodrigo Liang, CEO, and cofounder of SambaNova. “By offering GPT models as a subscription service, we are simplifying the process and broadening accessibility to the industry’s most advanced language models in a fraction of the time. 
We are arming businesses to compete with the early adopters of AI.” GPT use cases There are several business use cases for Dataflow-as-a-Service equipped with GPT, including sentiment analysis, such as customer support and feedback , brand monitoring, and reputation management. This technology can also be used for document classification, such as sorting articles or texts and routing them to relevant teams, named entity recognition and relation extraction in invoice automation, identification of patient information and prescriptions, and extraction of information from financial documents. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,942
2,021
"LinkedIn and Intel tech leaders on the state of AI | VentureBeat"
"https://venturebeat.com/2021/11/30/linkedin-and-intel-tech-leaders-on-the-state-of-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages LinkedIn and Intel tech leaders on the state of AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Disclosure: The author is the managing director of Connected Data World. AI is on a roll. Adoption is increasing across the board, and organizations are already seeing tangible benefits. However, the definition of what AI is and what it can do is up for grabs, and the investment required to make it work isn’t always easy to justify. Despite AI’s newfound practicality, there’s still a long way to go. Let’s take a tour through the past, present, and future of AI, and learn from leaders and innovators from LinkedIn, Intel Labs, and cutting-edge research institutes. Connecting data with duct tape at LinkedIn Mike Dillinger is the technical lead for Taxonomies and Ontologies at LinkedIn’s AI Division. He has a diverse background, ranging from academic research to consulting on translation technologies for Fortune 500 companies. For the last several years, he has been working with taxonomies at LinkedIn. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! LinkedIn relies heavily on taxonomies. As the de facto social network for professionals, launching a skill-building platform is a central piece in its strategy. Following CEO Ryan Roslanski’s statement, LinkedIn Learning Hub was recently announced , powered by the LinkedIn Skills Graph, dubbed “the world’s most comprehensive skills taxonomy. ” The Skills Graph includes more than 36,000 skills, more than 14 million job postings, and the largest professional network with more than 740 million members. It empowers LinkedIn users with richer skill development insights, personalized content, and community-based learning. For Dillinger, however, taxonomies may be overrated. In his upcoming keynote in Connected Data World 2021, Dillinger is expected to refer to taxonomies as the duct tape of connecting data. This alludes to Perl, the programming language that was often referred to as the duct tape of the internet. “Duct tape is good because it’s flexible and easy to use, but it tends to hide problems rather than fix them,” Dillinger said. A lot of effort goes into building taxonomies, making them correct and coherent, then getting sign-off from key stakeholders. But this is when problems start appearing. Key stakeholders such as product managers, taxonomists, users, and managers take turns punching holes in what was carefully constructed. 
They point out issues of coverage, accuracy, scalability, and communication. And they’re all right from their own point of view, Dillinger concedes. So the question is — what gives? Dillinger’s key thesis is that taxonomies are simply not very good as a tool for knowledge organization. That may sound surprising at first, but coming from someone like Dillinger, it carries significant weight. Dillinger goes a long way to elaborate on the issues with taxonomies, but perhaps more interestingly, he also provides hints for a way to alleviate those issues: “The good news is that we can do much better than taxonomies. In fact, we have to do much better. We’re building the foundations for a new generation of semantic technologies and artificial intelligence. We have to get it right,” says Dillinger. Dillinger goes on to talk about more reliable building blocks than taxonomies for AI. He cites concept catalogs, concept models, explicit relation concepts, more realistic epistemological assumptions, and next-generation knowledge graphs. It’s the next generation, Dillinger says, because today’s knowledge graphs do not always use concepts with explicit human-readable semantics. These have many advantages over taxonomies, and we need to work at the level of people, processes, and tools to be able to get there. Thrill-K: Rethinking higher machine cognition The issue of knowledge organization is a central one for Gadi Singer as well. Singer is VP and director of Emergent AI at Intel Labs. With one technology after another, he has been pushing the leading edge of computing for the past four decades and has made key contributions to Intel’s computer architectures, hardware and software development, AI technologies, and more. Singer said he believes that the last decade has been phenomenal for AI, mostly because of deep learning, but there’s a next wave coming: a “third wave” of AI that is more cognitive, has a better understanding of the world, and displays higher intelligence. This is going to come about through a combination of components: “It’s going to have neural networks in it. It’s going to have symbolic representation and symbolic reasoning in it. And, of course, it’s going to be based on deep knowledge. And when we have it, the value that is provided to individuals and businesses will be redefined and much enhanced compared to even the great things that we can do today,” Singer says. In his upcoming keynote for Connected Data World 2021, Singer will elaborate on Thrill-K, his architecture for rethinking knowledge layering and construction for higher machine cognition. Singer distinguishes recognition, the kind of pattern-matching operation using shallow data and deep compute at which neural networks excel, from cognition. Cognition, Singer argues, requires understanding the very deep structure of knowledge. Processing even seemingly simple questions requires organizing an internal view of the world, comprehending the meaning of words in context, and reasoning on knowledge. And that’s precisely why even the most elaborate deep learning models we have today, namely language models, are not a good match for deep knowledge. Language models contain statistical information, factual knowledge, and even some common-sense knowledge. However, they were never designed to serve as a tool for knowledge organization. Singer believes there are some basic limitations in language models that make them good, but not great, for the task.
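What explicit, human-readable semantics buy, and what a bare one-parent taxonomy struggles to express, can be seen in a deliberately tiny sketch. The snippet below is illustrative Python with invented skill names; it is not LinkedIn's Skills Graph, Intel's work, or any production schema.

```python
# Toy illustration: a strict taxonomy forces each concept under a single parent,
# while graph-style triples can carry several explicitly named relations
# between the same concepts. All names are made up.

# Taxonomy: every skill gets exactly one parent category.
taxonomy = {
    "machine learning": "data science",
    "statistics": "data science",
    "sql": "databases",  # yet SQL is also a data science skill; the tree cannot say so
}
print(taxonomy["sql"])  # the tree can only ever answer "databases"

# Knowledge-graph style: (subject, relation, object) triples with readable semantics.
triples = [
    ("machine learning", "subfield_of", "data science"),
    ("sql", "used_in", "data science"),
    ("sql", "used_in", "databases"),
    ("statistics", "prerequisite_of", "machine learning"),
]

def related_concepts(concept, triples):
    """Return every concept linked to `concept`, with the relation that links them."""
    related = []
    for subject, relation, obj in triples:
        if subject == concept:
            related.append((relation, obj))
        elif obj == concept:
            related.append((f"inverse of {relation}", subject))
    return related

print(related_concepts("data science", triples))
# [('inverse of subfield_of', 'machine learning'), ('inverse of used_in', 'sql')]
```

Keeping the relation names explicit and readable is one way to get the kind of structure Dillinger describes, and it is the sort of knowledge that, per Singer, a language model stores only implicitly.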
What makes for a great knowledge model, Singer said, is the ability to scale well across five areas: scalability, fidelity, adaptability, richness, and explainability. He adds that sometimes there is so much information learned in language models that we can extract it and use it to enhance dedicated knowledge models. To translate the principles of a great knowledge model into an actual architecture that can support the next wave of AI, Singer proposes an architecture in which knowledge and information are organized at three levels, which he calls Thrill-K. The first level is for the most immediate knowledge, which Singer calls the Giga scale, and which he believes should sit in a neural network. The next level of knowledge is the deep knowledge base, such as a knowledge graph. This is where intelligible, structured, explicit knowledge is stored at the Tera scale, available on demand to the neural network. And, finally, there is the world information and world knowledge level, where data is stored at the Zetta scale. Knowledge, Singer argues, is the basis for making reasoned, intelligent decisions. It can adapt to new circumstances and new tasks, because the data and the knowledge are not structured for a particular task but are there with all their richness and expressivity. It will take concerted effort to get there, and Intel Labs for its part is looking into aspects of NLP, multi-modality, common-sense reasoning, and neuromorphic computing. Systems that learn and reason If knowledge organization is something that both Dillinger and Singer value as a key component in an overarching framework for AI, for Frank van Harmelen it has been the centerpiece of his entire career. Van Harmelen leads the Knowledge Representation & Reasoning Group in the Computer Science Department of the VU University Amsterdam. He is also principal investigator of the Hybrid Intelligence Centre, a $22.7 million (€20 million), ten-year collaboration between researchers at six Dutch universities into AI that collaborates with people instead of replacing them. Van Harmelen notes that after the breakthroughs of machine learning (deep learning or otherwise) in the past decade, the shortcomings of machine learning are also becoming increasingly clear: unexplainable results, data hunger, and limited generalizability are all becoming bottlenecks. In his upcoming keynote at Connected Data World 2021, Van Harmelen will look at how the combination with symbolic AI, in the form of very large knowledge graphs, can give us a way forward: toward machine learning systems that can explain their results, need less data, and generalize better outside their training set. The emphasis in modern AI is less on replacing people with AI systems and more on AI systems that collaborate with people and support them. For Van Harmelen, however, it’s clear that current AI systems lack background knowledge, contextual knowledge, and the capability to explain themselves, which makes them not very human-centered: “They can’t support people and they can’t be competent partners. So what’s holding AI back? Why are we in this situation? For a long time, AI researchers have locked themselves into one of two towers. In the case of AI, we could call these the symbolic AI tower and the statistical AI tower.” If you’re in the statistical AI camp, you build your neural networks and machine learning programs. If you’re in the symbolic AI camp, you build knowledge bases and knowledge graphs and you do inference over them.
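The two towers can be caricatured in a few lines of code. The sketch below is a toy illustration with invented facts and documents, not a description of any production system; the difflib matcher merely stands in for a trained statistical model.

```python
# Toy caricature of the two camps: the same question answered the statistical way
# (similarity over raw text) and the symbolic way (lookup over explicit triples).
from difflib import SequenceMatcher

documents = [
    "Amsterdam is the capital of the Netherlands.",
    "Paris is the capital of France and sits on the Seine.",
]
triples = {
    ("Amsterdam", "capital_of", "Netherlands"),
    ("Paris", "capital_of", "France"),
}

def statistical_answer(question: str) -> str:
    """Statistical camp: return the most similar document (a stand-in for a neural model)."""
    return max(documents, key=lambda d: SequenceMatcher(None, question.lower(), d.lower()).ratio())

def symbolic_answer(relation: str, obj: str) -> str:
    """Symbolic camp: answer by querying explicit, human-readable facts."""
    for subject, rel, o in triples:
        if rel == relation and o == obj:
            return subject
    return "unknown"

question = "What is the capital of the Netherlands?"
print(statistical_answer(question))                  # fuzzy, and hungry for text and data
print(symbolic_answer("capital_of", "Netherlands"))  # exact, but someone curated the triples
```

Each side answers the question entirely on its own terms, which is precisely the division the rest of the passage takes issue with.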
Either way, the attitude goes, you don’t need to talk to people in the other camp, because they’re wrong anyway. What’s actually wrong, argues Van Harmelen, is this division. Our brains work both ways, so there’s no reason why approximating them with AI should rely exclusively on either approach. In fact, the two approaches complement each other very well in terms of strengths and weaknesses. Symbolic AI, most famously knowledge graphs, is expensive to build and maintain because it requires manual effort. Statistical AI, most famously deep learning, requires lots of data, and often a lot of effort as well. Both suffer from the “performance cliff” issue (i.e., their performance drops under certain circumstances, but the circumstances and the manner differ). Van Harmelen provides many examples of practical ways in which symbolic and statistical AI can complement each other. Machine learning can help build and maintain knowledge graphs, and knowledge graphs can provide context to improve machine learning: “It is no longer true that symbolic knowledge is expensive and we cannot obtain it all. Very large knowledge graphs are witness to the fact that this symbolic knowledge is very well available, so it is no longer necessary to learn what we already know. We can inject what we already know into our machine learning systems, and by combining these two types of systems produce more robust, more efficient, and more explainable systems,” says Van Harmelen. The pendulum has been swinging back and forth between symbolic and statistical AI for decades now. Perhaps it’s a good time for the two camps to reconcile and start a conversation. To build AI for the real world, we’ll have to connect more than data. We’ll also have to connect people and ideas. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,943
2,021
"FTC sues to block Nvidia's $75B bid to buy Arm | VentureBeat"
"https://venturebeat.com/2021/12/02/ftc-sues-to-block-nvidias-75b-bid-to-buy-arm"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages FTC sues to block Nvidia’s $75B bid to buy Arm Share on Facebook Share on X Share on LinkedIn The Nvidia Selene is a top 10 supercomputer. Nvidia said it plans to make a new supercomputer with Arm. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. The Federal Trade Commission on Thursday sued to block Nvidia ‘s $75 billion takeover of Arm Holdings for antitrust reasons. The lawsuit signals the start of more aggressive FTC antitrust enforcement since President Joe Biden appointed Lina Khan as chair of the federal agency. It also follows a historic period of consolidation where big chip companies gobbled up smaller ones. Nvidia’s rival Advanced Micro Devices has a pending $35 billion deal to buy chip design firm Xilinx. The cash and stock deal for Arm was originally valued at $40 billion, and now the stock price has soared and boosted the value to $75 billion. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! The FTC said the deal would give Nvidia unlawful control over computing technology and designs that rivals need to develop their own competing chips. It fears the combined entity could stifle next-generation technologies used to run datacenters and self-driving cars. “As we move into this next step in the FTC process, we will continue to work to demonstrate that this transaction will benefit the industry and promote competition,” Nvidia said in a statement. “Nvidia will invest in Arm’s R&D, accelerate its roadmaps, and expand its offerings in ways that boost competition, create more opportunities for all Arm licensees and expand the Arm ecosystem. Nvidia is committed to preserving Arm’s open licensing model and ensuring that its IP is available to all interested licensees, current and future.” The commission voted 4-0 to sue, which means that both the two Democrats and the two Republicans on the commission agreed on the lawsuit. Nvidia and Arm stock are down slightly in after-hours trading. A trial is set for August 2022. Acquisition and strife In 2018, China’s regulators blocked Qualcomm’s pending $44 billion acquisition of NXP, and the U.S. blocked Broadcom’s $117 billion takeover of Qualcomm. Last week, Nvidia CEO Jensen Huang — who was honored with the chip industry’s highest award — joked that Qualcomm’s CEO had been visiting lawmakers objecting to the Nvidia-Arm deal. Cambridge, England-based Arm had been acquired by SoftBank five years ago for $32 billion. 
Arm designs and licenses architectures and chip designs that are used in most smartphones as well as other kinds of chips, while Nvidia makes both graphics chips and AI processors for a wide range of products. One of Arm’s rivals, the open source RISC-V architecture, has been benefiting from fast growth thanks to the worry about Arm being acquired. In Nvidia’s most recent earnings call, Nvidia said it had been talking with regulators about their concerns. In an interview with VentureBeat after the earnings call, Huang said, “We’ve been working with regulators, with the FTC. They expressed concern. We’re in discussions with them about potential remedies. China is pending activation of their discussion. The EU and the United Kingdom have not approved the first phase. They’d like to review some more in the second phase. We’re now entering the second phase of review. The regulatories around the world have taken a fair amount of interest in this transaction. That’s the status.” The United Kingdom’s antitrust regulator also began an investigation of the transaction last month, as did the European Union. Nvidia was hoping to close the deal by April 2022. That’s not going to happen now. One of the ironies is that Intel, once the world’s biggest chip maker, now has a much smaller valuation than Nvidia’s. And that means Intel would likely testify against Nvidia. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,944
2,021
"Top 12 AI and machine learning announcements at AWS re:Invent 2021 | VentureBeat"
"https://venturebeat.com/2021/12/03/top-12-ai-and-machine-learning-announcements-at-aws-reinvent-2021"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Top 12 AI and machine learning announcements at AWS re:Invent 2021 Share on Facebook Share on X Share on LinkedIn The logo of Amazon is seen at the company logistics center in Boves, France, September 18, 2019. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This week during its re:Invent 2021 conference in Las Vegas, Amazon announced a slew of new AI and machine learning products and updates across its Amazon Web Services (AWS) portfolio. Touching on DevOps, big data, and analytics, among the highlights were a call summarization feature for Amazon Lex and a capability in CodeGuru that helps detect secrets in source code. Amazon’s continued embrace of AI comes as enterprises express a willingness to pilot automation technologies in transitioning their businesses online. Fifty-two percent of companies accelerated their AI adoption plans because of the COVID pandemic, according to a PricewaterhouseCoopers study. Meanwhile, Harris Poll found that 55% of companies accelerated their AI strategy in 2020 and 67% expect to further accelerate their strategy in 2021. “The initiatives we are announcing … are designed to open up educational opportunities in machine learning to make it more widely accessible to anyone who is interested in the technology,” AWS VP of machine learning Swami Sivasubramanian said in a statement. “Machine learning will be one of the most transformational technologies of this generation. If we are going to unlock the full potential of this technology to tackle some of the world’s most challenging problems, we need the best minds entering the field from all backgrounds and walks of life.” DevOps Roughly a year after launching CodeGuru , an AI-powered developer tool that provides recommendations for improving code quality, Amazon this week unveiled the new CodeGuru Reviewer Secrets Detector. An automated tool that helps developers detect secrets in source code or configuration files such as passwords, API keys, SSH keys, and access tokens, Secrets Detector leverages AI to identify hard-coded secrets as part of the code review process. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The goal is to help ensure that all-new code doesn’t contain secrets before being merged and deployed, according to Amazon. In addition to detecting secrets, Secrets Detector can suggest remediation steps to secure secrets with AWS Secrets Manager, Amazon’s managed service that lets customers store and retrieve secrets. 
Secrets Detector is included as part of CodeGuru Reviewer, a component of CodeGuru, at no additional cost and supports most of the APIs from providers including AWS, Atlassian, Datadog, Databricks, GitHub, HubSpot, Mailchimp, Salesforce, Shopify, Slack, Stripe, Tableau, Telegram, and Twilio. Enterprise Contact Lens , a virtual call center product for Amazon Connect that transcribes calls while simultaneously assessing them, now features call summarization. Enabled by default, Contact Lens provides a transcript of all calls made via Connect, Amazon’s cloud contact center service. In a related development, Amazon has launched an automated chatbot designer in Lex , the company’s service for building conversational voice and text interfaces. The designer uses machine learning to provide an initial chatbot design that developers can then refine to create conversational experiences for customers. And Textract , Amazon’s machine learning service that automatically extracts text, handwriting, and data from scanned documents, now supports identification documents including licenses and passports. Without the need for templates or configuration, users can automatically extract specific as well as implied information from IDs, such as date of expiration, date of birth, name, and address. SageMaker SageMaker, Amazon’s cloud machine learning development platform, gained several enhancements this week including a visual, no-code tool called SageMaker Canvas. Canvas allows business analysts to build machine learning models and generate predictions by browsing disparate data sources in the cloud or on-premises, combining datasets, and training models once updated data is available. Also new is SageMaker Ground Truth Plus, a turnkey service that employs an “expert” workforce to deliver high-quality training datasets while eliminating the need for companies to manage their own labeling applications. Ground Truth Plus complements improvements to SageMaker Studio , including a novel way to configure and provision compute clusters for workload needs with support from DevOps practitioners. Within SageMaker Studio, SageMaker Inference Recommender — another new feature — automates load testing and optimizes model performance across machine learning instances. The idea is to allow MLOps engineers to run a load test against their model in a simulated environment, reducing the time it takes to get machine learning models from development into production. Developers can gain free access to SageMaker Studio through the new Studio Lab, which doesn’t require an AWS account or billing details. Users can simply sign up with their email address through a web browser and can start building and training machine learning models with no financial obligation or long-term commitment. SageMaker Training Compiler, another new SageMaker capability, aims to accelerate the training of deep learning models by automatically compiling developers’ Python programming code and generating GPU kernels specifically for their model. The training code will use less memory and compute and therefore train faster, Amazon says, cutting costs and saving time. Last on the SageMaker front is Serverless Inference, a new inference option that enables users to deploy machine learning models for inference without having to configure or manage the underlying infrastructure. With Serverless Inference, SageMaker automatically provisions, scales, and turns off compute capacity based on the volume of inference requests. 
Customers only pay for the duration of running the inference code and the amount of data processed, not for idle time. Compute Amazon also announced Graviton3 , the next generation of its custom ARM-based chip for AI inferencing applications. Soon to be available in AWS C7g instances, the processors are optimized for workloads including high-performance compute, batch processing, media encoding, scientific modeling, ad serving, and distributed analytics, the company says. Alongside Graviton3, Amazon debuted Trn1, a new instance for training deep learning models in the cloud — including models for apps like image recognition , natural language processing , fraud detection, and forecasting. It’s powered by Trainium , an Amazon-designed chip that the company last year claimed would offer the most teraflops of any machine learning instance in the cloud. (A teraflop translates to a chip being able to process 1 trillion calculations per second.) VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,945
2,021
"Elementary raises $30M for AI that automates physical product inspections | VentureBeat"
"https://venturebeat.com/2021/12/16/elementary-raises-30m-for-ai-that-automates-physical-product-inspections"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Elementary raises $30M for AI that automates physical product inspections Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. While it might not be the first example that comes to mind when thinking of AI applications, AI systems are increasingly being used in the manufacturing sector. In industrial factories and warehouses, AI has the potential to improve equipment efficiency and production yields as well as uptime and consistency. According to a 2021 survey from The Manufacturer , 65% of leaders in the manufacturing sector are working to pilot AI. Implementation in warehouses alone is expected to hit a 57.2% compound annual growth rate over the next five years. Many barriers stand in the way of successful AI manufacturing deployments, however. Both hiring and retaining AI technologists remain difficult for businesses, in addition to addressing the technological issues associated with AI systems. For example, in a recent report , 82% of data executives told Precisely that poor data quality was jeopardizing data-driven projects in the enterprise — including AI projects. In recent years, platforms designed to abstract away the complexity of AI applied to manufacturing have emerged as awareness of the technology grows. One of these is Elementary , which uses AI to enable customers to inspect manufactured goods down to the individual parts and assemblies. The company claims that interest in its solution in particular has climbed at an accelerated pace as labor shortages worsen. As many as 2.1 million manufacturing jobs could go unfilled through 2030, according to a study published by Deloitte and The Manufacturing Institute. Elementary AI Elementary was founded in 2017 by Arye Barnehama, who previously launched and sold wearable technology company Melon to Daqri, an industrial augmented reality startup. Elementary’s no-code platform and hardware allows customers to create routines and train AI models to inspect products for quality assurance by labeling data through a dashboard. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Elementary describes its product as a “full stack” technology solution, with everything from motor controls to an API that keeps human inspectors in the loop to trace and train the models over time. 
The company’s computer vision platform for quality and inspection in manufacturing can learn to perform monotonous tasks and leverage RGB cameras, depth sensors, and AI to perceive the world, allowing them to learn from processes they observe. Elementary partners with companies like Rapid Robotics, a startup providing out-of-the-box automation products for manufacturers, to deliver turnkey automation solutions to manufacturers. Barnehama asserts that the combination of Elementary’s and Rapid’s products lets customers achieve greater levels of autonomy without sacrificing quality. “Elementary performs use cases from cosmetic inspections (making sure finished goods are acceptable for the end consumer) to defect detection (making sure no critical issues are present in a product) to foreign material detection (making sure no foreign material or objects are present) to label verification (making sure the right label is on the right product),” Barnehama explained to VentureBeat via email. “[M]anufacturers can use the platform for a global view at their production yields, their most common defects, and full reporting to drive insights and improvements to the production line.” Growth in automation Elementary’s success — the company today raised $30 million in a series B funding round led by Tiger Global — reflects the surging demand for AI technologies in physical industries. Barnehama estimates that more than 10% of all open roles in manufacturing are quality- or inspection-related, making it among the hardest kinds of positions to fill. Among other startups, Landing AI is developing computer vision-based technologies for various types of manufacturing automation. Cogniac and Seebo are other recent entrants in the field, as well as tech giants like Google, which offers a visual inspection product that spots — and aims to correct — defects before products ship. The no-code nature of Elementary’s platform dovetails with another trend: the growth of tools that allow non-developers to create software through visual dashboards instead of traditional programming. An OutSystems report shows that 41% of organizations were using a low- or no-code tool in 2019/2020, up from 34% in 2018/2019. And if the current trend holds, the market for low- and no-code could climb from between $13.3 billion and $17.7 billion in 2021 to between $58.8 billion and $125.4 billion in 2027. “During the pandemic, manufacturing and logistics have undergone major labor shortages … As companies look to continue to automate without having to rely on expensive and hard-to-find engineering talent, our business has scaled because we’re able to provide them with no-code AI solutions,” Barnehama said. “Not only do we enable them to automate a task that they cannot find enough labor for — quality assurance — but we make our system easy to use, removing the need for machine vision experts that are even harder to find today.” Fika Ventures, Fathom Capital, Riot VC, and Toyota Ventures also participated in 50-person Elementary’s series B. It brings the company’s total raised to over $47.5 million. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
"
15,946
2,020
"AI Weekly: Workplace surveillance tech promises safety, but not worker rights | VentureBeat"
"https://venturebeat.com/2020/07/06/ai-weekly-workplace-surveillance-tech-promises-safety-but-not-worker-rights"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: Workplace surveillance tech promises safety, but not worker rights Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. All of the issues around the pandemic-driven rash of surveillance and tracking that emerged for society at large are coalescing in the workplace , where people may have little to no choice about whether to show up to work or what sort of surveillance to accept from their employer. Our inboxes have simmered with pitches about AI-powered workplace tracing and safety tools and applications, often from smaller or newer companies. Some are snake oil, and some seem more legitimate, but now we’re seeing larger tech companies unveil more about their workplace surveillance offerings. Though presumably the solutions coming from large and well-established tech companies reliably perform the functions they promise and offer critical safety tools, they don’t inspire confidence for workers’ rights or privacy. Recently, IBM announced Watson Works, which it described in an email as “a curated set of products that embeds Watson artificial intelligence (AI) models and applications to help companies navigate many aspects of the return-to-workplace challenge following lockdowns put in place to slow the spread of COVID-19.” There were curiously few details in the initial release about the constituent parts of Watson Works. It mainly articulated boiled-down workplace priorities — prioritizing employee health; communicating quickly; maximizing the effectiveness of contact tracing; and managing facilities, optimizing space allocation, and helping ensure safety compliance. IBM accomplishes the whole of the above by collecting and monitoring external and internal data sources to track, produce information, and make decisions. Those data sources include public health information as well as “WiFi, cameras, Bluetooth beacons and mobile phones” within the workplace. Though there’s a disclaimer in the release that Watson Works follows IBM’s Principles for Trust and Transparency and preserves employees’ privacy in its data collection, serious questions remain. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! After VentureBeat reached out to IBM via email, an IBM representative replied with some answers and more details on Watson Works (and at this point, there’s a lot of information on the Watson Works site ). 
The suite of tools within Watson Works includes Watson Assistant, Watson Discovery, IBM Tririga, Watson Machine Learning, Watson Care Manager, and IBM Maximo Worker Insights — which vacuums and processes real-time data from the aforementioned sources. Judging by its comments to VentureBeat, IBM’s approach to how its clients use Watson Works is rather hands-off. On the question of who bears liability if an employee gets sick or has their rights violated, IBM punted to the courts and lawmakers. The representative clarified that the client collects data and stores it however and for whatever length of time the client chooses. IBM processes the data but does not receive any raw data, like heart rate information or a person’s location. The data is stored on IBM’s cloud, but the client owns and manages the data. In other words, IBM facilitates and provides the means for data collection, tracking, analysis, and subsequent actions, but everything else is up to the client. This approach to responsibility is what Microsoft’s Tim O’Brien would classify as a level one. In a Build 2019 session about ethics, he laid out four schools of thought about a company’s responsibility for the technology it makes: We’re a platform provider, and we bear no responsibility (for what buyers do with the technology we sell them) We’re going to self-regulate our business processes and do the right things We’re going to do the right things, but the government needs to get involved, in partnership with us, to build a regulatory framework This technology should be eradicated IBM is not alone in its “level one” position. A recent report from VentureBeat’s Kyle Wiggers found that drone companies are largely taking a similar approach in selling technology to law enforcement. (Notably, drone maker Parrot declined comment for that story, but a couple of weeks later, the company’s CEO explained in an interview with Protocol why he’s comfortable having the U.S. military and law enforcement as customers.) When HPE announced its own spate of get-back-to-work technology , it followed IBM’s playbook: It put out a press release with tidy summaries of workplace problems and HPE’s solutions without many details (though you can click through to learn more about its extensive offerings). Yet in those summaries are a couple of items worthy of a raised eyebrow, like the use of facial recognition for contactless building entry. As for guidance for clients about privacy, security, and compliance, the company wrote in part: “HPE works closely with customers across the globe to help them understand the capabilities of the new return-to-work solutions, including how data is captured, transmitted, analyzed, and stored. Customers can then determine how they will handle their data based on relevant legal, regulatory, and company policies that govern privacy.” Amazon’s Distance Assistant appears to be a fairly useful and harmless application of computer vision in the workplace. It scans walkways and overlays green or red highlights to let people know if they’re maintaining proper social distancing as they move around the workplace. On the other hand, the company is under legal scrutiny and dealing with worker objections over a lack of coronavirus safety in its own facilities. In a chipper fireside chat keynote at the conference on Computer Vision and Pattern Recognition ( CVPR ), Microsoft CEO Satya Nadella espoused the capabilities of the company’s “4D Understanding” in the name of worker safety. 
But in a video demo , you can see that it’s just more worker surveillance — tracking people’s bodies in space relative to one another and tracking the objects on their workstations to ensure they’re performing their work correctly and in the right order. From the employer perspective, this sort of oversight equates to improved safety and efficiency. But what worker wants to have literally every move they make the subject of AI-powered scrutiny? To be fair to IBM, it’s out of the facial recognition business entirely — ostensibly on moral grounds — and the computer vision in Watson Works, the company representative said, is for object detection only and isn’t designed to identify people. And most workplaces that would use this technology are not as fraught as the military or law enforcement. But when a tech provider like IBM cedes responsibility for ethical practices in workplace surveillance, that puts all the power in the hands of employers and thus disempowers workers. Meanwhile, the tech providers profit. We do need technologies that help us get back to work safely, and it’s good that there are numerous options available. But it’s worrisome that the tone around so many of the solutions we’re seeing — including those from larger tech companies — is morally agnostic and that the solutions themselves appear to give no power to workers. We can’t forget that technology can be a tool just as easily as it can be a weapon and that, devoid of cultural and historical contexts (like people desperate to hang onto their jobs amid historically poor unemployment), we can’t understand the potential harms (or benefits) of technology. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,947
2,020
"You can't eliminate bias from machine learning, but you can pick your bias | VentureBeat"
"https://venturebeat.com/2020/11/14/you-cant-eliminate-bias-from-machine-learning-but-you-can-pick-your-bias"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest You can’t eliminate bias from machine learning, but you can pick your bias Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Bias is a major topic of concern in mainstream society, which has embraced the concept that certain characteristics — race, gender , age, or zip code, for example — should not matter when making decisions about things such as credit or insurance. But while an absence of bias makes sense on a human level, in the world of machine learning, it’s a bit different. In machine learning theory, if you can mathematically prove you don’t have any bias and if you find the optimal model, the value of the model actually diminishes because you will not be able to make generalizations. What this tells us is that, as unfortunate as it may sound, without any bias built into the model, you cannot learn. The oxymoron of discrimination-free discriminators Modern businesses want to use machine learning and data mining to make decisions based on what their data tells them, but the very nature of that inquiry is discriminatory. Yet, it is perhaps not discriminatory in the way that we typically define the word. The purpose of data mining is to, as Merriam-Webster puts it, “distinguish by discerning or exposing differences: to recognize or identify as separate and distinct,” rather than “to make a difference in treatment or favor on a basis other than individual merit.” It is a subtle but important distinction. Society clearly passes judgments on people and treats them differently based on many different categories. Well-intentioned organizations try to rectify or overcompensate for this by eliminating bias in machine learning models. What they don’t realize is that in doing so, it can mess things up further. Why is this? Once you get into removing data categories, other components, characteristics, or traits sneak in. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Suppose, for example, you uncover that income is biasing your model, but there is also a correlation between income and where someone comes from (wages vary by geography). The moment you add income into the model, you need to de-discriminate that by putting origin in as well. It’s extremely hard to make sure that you have nothing discriminatory in the model. 
If you take out where someone comes from, how much they earn, where they live, and maybe what their education is, there’s not much left to allow you to determine the difference between one person to another. And still, there could be some remaining bias you haven’t thought about. David Hand has described how the United Kingdom once mandated that car insurance policies couldn’t discriminate against young or old drivers, nor could they set different premiums by gender. On the surface, this sounds nice, how very equal. The problem is that people within these groupings generally have different accident rates. When age and gender are included in the data model, it shows young males have much higher accident rates, and the accidents are more serious; therefore, they should theoretically pay higher premiums. By removing the gender and age categories, however, policy rates go down for young men, enabling more to afford insurance. In the UK model, this factor — more young men with insurance — ultimately drove up the number of overall accidents. The changed model also introduced a new type of bias: Women were paying a disproportionate amount for insurance compared to their accident ratio because they were sponsoring the increased number of accidents by young males. The example shows that you sometimes get undesired side effects by removing categories from the model. The moment you take something out, you haven’t necessarily eliminated bias. It’s still present in the data, only in a different way. When you get rid of a category, you start messing with the whole system. We find a reverse of the above example in Germany. There, health insurers are not allowed to charge differently based on gender, even though men and women clearly experience different conditions and risk factors throughout their lives. For example, women generate significant costs to the health system around pregnancy and giving birth, but no one argues about it because the outcome is viewed as positive — vs. the negative association with car accidents in the UK — therefore, it is perceived as fair that those costs are distributed evenly. The danger of omission The omission of data is quite common, and it doesn’t just occur when you remove a category. Suppose you’re trying to decide who is qualified for a loan. Even the best models will have a certain margin of error because you’re not looking at all of the people that didn’t end up getting a loan. Some people who wanted loans may have never come into the bank in the first place, or maybe they walked in and didn’t make it to your desk; they were scared away based on the environment or got nervous that they would not be successful. As such, your model may not contain the comprehensive set of data points it needs to make a decision. Similarly, companies that rely very heavily on machine learning models often fail to realize that they are using data from way too many “good” customers and that they simply don’t have enough data points to recognize the “bad” ones. This can really mess with your data. You can see this kind of selection bias at work in academia, life sciences in particular. The “publish or perish mantra” has long ruled. Even so, how many journal articles do you remember seeing that document failed studies? No one puts forth papers that say, “I tried this, and it really didn’t work.” Not only does it take an incredible amount of time to prepare a study for publication, the author gains nothing from pushing out results of a failed study. 
If I did that, my university might look at my work and say, “Michael, 90% of your papers have had poor results. What are you doing?” That is why you only see positive or promising results in journals. At a time when we’re trying to learn as much as we can about COVID-19 treatments and potential vaccines, the data from failures is really important, but we are not likely to learn much about them because of how the system works, because of what data was selected for sharing. So what does this all mean? What does all of this mean in the practical sense? In a nutshell, data science is hard, machine learning is messy, and there is no such thing as completely eliminating bias or finding a perfect model. There are many, many more facets and angles we could delve into as machine learning hits its mainstream stride, but the bottom line is that we’re foolish if we assume that data science is some sort of a be-all and end-all when it comes to making good decisions. Does that mean machine learning has less value than we thought or were promised? No, that is not the case at all. Rather, there simply needs to be more awareness of how bias functions — not just in society but also in the very different world of data science. When we bring awareness to data science and model creation, we can make informed decisions about what to include or exclude, understanding that there will be certain consequences — and sometimes accepting that some consequences will be worth it. Michael Berthold is CEO and co-founder at KNIME , an open source data analytics company. He has more than 25 years of experience in data science, working in academia, most recently as a full professor at Konstanz University in Germany and previously at University of California, Berkeley and Carnegie Mellon, and in industry at Intel’s Neural Network Group, Utopy, and Tripos. Michael has published extensively on data analytics, machine learning, and artificial intelligence. Follow him on Twitter , LinkedIn and the KNIME blog. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,948
2,020
"AI Weekly: Cutting-edge language models can produce convincing misinformation if we don't stop them | VentureBeat"
"https://venturebeat.com/2020/09/18/ai-weekly-cutting-edge-language-models-can-produce-convincing-misinformation-if-we-dont-stop-them"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: Cutting-edge language models can produce convincing misinformation if we don’t stop them Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. It’s been three months since OpenAI launched an API underpinned by cutting-edge language model GPT-3 , and it continues to be the subject of fascination within the AI community and beyond. Portland State University computer science professor Melanie Mitchell found evidence that GPT-3 can make primitive analogies , and Columbia University’s Raphaël Millière asked GPT-3 to compose a response to the philosophical essays written about it. But as the U.S. presidential election nears, there’s growing concern among academics that tools like GPT-3 could be co-opted by malicious actors to foment discord by spreading misinformation, disinformation, and outright lies. In a paper published by the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC), the coauthors find that GPT-3’s strength in generating “informational,” “influential” text could be leveraged to “radicalize individuals into violent far-right extremist ideologies and behaviors.” Bots are increasingly being used around the world to sow the seeds of unrest, either through the spread of misinformation or the amplification of controversial points of view. An Oxford Internet Institute report published in 2019 found evidence of bots disseminating propaganda in 50 countries, including Cuba, Egypt, India, Iran, Italy, South Korea, and Vietnam. In the U.K., researchers estimate that half a million tweets about the country’s proposal to leave the European Union sent between June 5 and June 12 came from bots. And in the Middle East, bots generated thousands of tweets in support of Saudi Arabia’s crown prince Mohammed bin Salman following the 2018 murder of Washington Post opinion columnist Jamal Khashoggi. Bot activity perhaps most relevant to the upcoming U.S. elections occurred last November, when cyborg bots spread misinformation during the local Kentucky elections. VineSight, a company that tracks social media misinformation, uncovered small networks of bots retweeting and liking messages casting doubt on the gubernatorial results before and after the polls closed. But bots historically haven’t been sophisticated; most simply retweet, upvote, or favorite posts likely to prompt toxic (or violent) debate. 
GPT-3-powered bots or “cyborgs” — accounts that attempt to evade spam detection tools by fielding tweets from human operators — could prove to be far more harmful given how convincing their output tends to be. “Producing ideologically consistent fake text no longer requires a large corpus of source materials and hours of [training]. It is as simple as prompting GPT-3; the model will pick up on the patterns and intent without any other training,” the coauthors of the Middlebury Institute study wrote. “This is … exacerbated by GPT-3’s impressively deep knowledge of extremist communities, from QAnon to the Atomwaffen Division to the Wagner Group, and those communities’ particular nuances and quirks.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Above: A question-answer thread generated by GPT-3. In their study, the CTEC researchers sought to determine whether people could color GPT-3’s knowledge with ideological bias. (GPT-3 was trained on trillions of words from the internet, and its architectural design enables fine-tuning through longer, representative prompts like tweets, paragraphs, forum threads, and emails.) They discovered that it only took a few seconds to produce a system able to answer questions about the world consistent with a conspiracy theory, in one case falsehoods originating from the QAnon and Iron March communities. “GPT-3 can complete a single post with convincing responses from multiple viewpoints, bringing in various different themes and philosophical threads within far-right extremism,” the coauthors wrote. “It can also generate new topics and opening posts from scratch, all of which fall within the bounds of [the communities’] ideologies.” CTEC’s analysis also found GPT-3 is “surprisingly robust” with respect to multilingual language understanding, demonstrating an aptitude for producing Russian-language text in response to English prompts that show examples of right-wing bias, xenophobia, and conspiracism. The model also proved “highly effective” at creating extremist manifestos that were coherent, understandable, and ideologically consistent, communicating how to justify violence and instructing on anything from weapons creation to philosophical radicalization. Above: GPT-3 writing extremist manifestos. “No specialized technical knowledge is required to enable the model to produce text that aligns with and expands upon right-wing extremist prompts. With very little experimentation, short prompts produce compelling and consistent text that would believably appear in far-right extremist communities online,” the researchers wrote. “GPT-3’s ability to emulate the ideologically consistent, interactive, normalizing environment of online extremist communities poses the risk of amplifying extremist movements that seek to radicalize and recruit individuals. Extremists could easily produce synthetic text that they lightly alter and then employ automation to speed the spread of this heavily ideological and emotionally stirring content into online forums where such content would be difficult to distinguish from human-generated content.” OpenAI says it’s experimenting with safeguards at the API level including “toxicity filters” to limit harmful language generation from GPT-3. For instance, it hopes to deploy filters that pick up antisemitic content while still letting through neutral content talking about Judaism. 
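OpenAI hasn't published how its filters work, but the general pattern is straightforward to sketch. The snippet below is an illustrative gate, not OpenAI's implementation: candidate generations are scored for toxicity and held back above a threshold. The keyword heuristic stands in for what would, in practice, be a trained classifier.

```python
# Generic sketch of a post-generation toxicity gate (not OpenAI's filter).
# score_toxicity is a stand-in heuristic; a real deployment would call a
# trained classifier instead.
BLOCKLIST = {"hate", "kill", "violence"}   # illustrative terms only

def score_toxicity(text: str) -> float:
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in BLOCKLIST)
    return min(1.0, hits / len(words) * 10)

def gate(candidates, threshold=0.5):
    released, blocked = [], []
    for text in candidates:
        (blocked if score_toxicity(text) >= threshold else released).append(text)
    return released, blocked

samples = [
    "The weather in Portland is mild this week.",
    "They deserve violence and hate.",
]
ok, held = gate(samples)
print("released:", ok)
print("blocked :", held)
```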
Another solution might lie in a technique proposed by Salesforce researchers including former Salesforce chief scientist Richard Socher. In a recent paper, they describe GeDi (short for “generative discriminator”), a machine learning algorithm capable of “detoxifying” text generation by language models like GPT-3’s predecessor, GPT-2. During one experiment, the researchers trained GeDi as a toxicity classifier on an open source data set released by Jigsaw, Alphabet’s technology incubator. They claim that GeDi-guided generation resulted in significantly less toxic text than baseline models while achieving the highest linguistic acceptability. But technical mitigation can only achieve so much. CTEC researchers recommend partnerships between industry, government, and civil society to effectively manage and set the standards for use and abuse of emerging technologies like GPT-3. “The originators and distributors of generative language models have unique motivations to serve potential clients and users. Online service providers and existing platforms will need to accommodate for the impact of the output from such language models being utilized with the use of their services,” the researchers wrote. “Citizens and the government officials who serve them may empower themselves with information about how and in what manner creation and distribution of synthetic text supports healthy norms and constructive online communities.” It’s unclear the extent to which this will be possible ahead of the U.S. presidential election, but CTEC’s findings make apparent the urgency. GPT-3 and like models have destructive potential if not properly curtailed, and it will require stakeholders from across the political and ideological spectrum to figure out how they might be deployed both safely and responsibly. For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel. Thanks for reading, Kyle Wiggers AI Staff Writer VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,949
2,021
"Researchers release dataset to expose racial, religious, and gender biases in language models | VentureBeat"
"https://venturebeat.com/2021/02/03/researchers-release-dataset-to-expose-racial-religious-and-gender-biases-in-language-models"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Researchers release dataset to expose racial, religious, and gender biases in language models Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Natural language models are the building blocks of apps including machine translators, text summarizers, chatbots, and writing assistants. But there’s growing evidence showing that these models risk reinforcing undesirable stereotypes, mostly because a portion of the training data is commonly sourced from communities with gender, race, and religious prejudices. For example, OpenAI’s GPT-3 places words like “naughty” or “sucked” near female pronouns and “Islam” near words like “terrorism.” A new study from researchers affiliated with Amazon and the University of California, Santa Barbara aims to shed light specifically on biases in open-ended English natural language generation. (In this context, “bias” refers to the tendency of a language model to generate text perceived as being negative, unfair, prejudiced, or stereotypical against an idea or a group of people with common characteristics.) The researchers created what they claim is the largest benchmark dataset of its kind containing 23,679 prompts, 5 domains, and 43 subgroups extracted from Wikipedia articles. Beyond this, to measure biases from multiple angles, they introduce new metrics with which to measure bias including “psycholinguistic norms,” “toxicity,” and “gender polarity.” In experiments, the researchers tested three common language models including GPT-2 (GPT-3’s predecessor), Google’s BERT, and Salesforce’s CTRL. The results show that, in general, these models exhibit larger social biases than the baseline Wikipedia text, especially toward historically disadvantaged groups of people. For example, the three language models strongly associated the profession of “nursing” with women and generated a higher proportion of texts with negative conceptions about men. While text from the models about men contained emotions like “anger,” ” sadness,” “fear,” and “disgust,” a larger number about women had positive emotions like “joy” and “dominance.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! With regard to religion, GPT-2, BERT, and CTRL expressed the most negative sentiments about atheism followed by Islam. 
A higher percentage of texts generated with Islam prompts were labeled as conveying negative emotions, while on the other hand, Christianity prompts tended to be more cheerful in sentiment. In terms of toxicity, only prompts with Islam, Christianity, and atheism resulted in toxic texts, among which atheism had the largest proportion. Across ethnicities and races, toxicity from the models was outsize for African Americans. In fact, the share of texts with negative regard for African American groups was at least marginally larger in five out of six models, indicating a consistent bias against African Americans in multiple key metrics. The coauthors say that the results highlight the importance of studying the behavior of language generation models before they’re deployed into a production environment. Failure to do so, they warn, could at the least propagate negative outcomes and experiences for end users. “Our intuition is that while carefully handpicked language model triggers and choices of language model generations can show some interesting results, they could misrepresent the level of bias that an language model produces when presented with more natural prompts. Furthermore, language model generations in such a contrived setting could reinforce the type of biases that it was triggered to generate while failing to uncover other critical biases that need to be exposed,” the researchers wrote. “Given that a large number of state-of-the-art models on natural language processing tasks are powered by these language generation models, it’s of critical importance to properly discover and quantify any existing biases in these models and prevent them from propagating as unfair outcomes and negative experiences to the end users of the downstream applications.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,950
2,021
"How bias creeps into the AI designed to detect toxicity | VentureBeat"
"https://venturebeat.com/2021/12/09/how-bias-creeps-into-the-ai-designed-to-detect-toxicity"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How bias creeps into the AI designed to detect toxicity Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In 2017, Google’s Counter Abuse Technology team and Jigsaw, the organization working under Google parent company Alphabet to tackle cyberbullying and disinformation, released an AI-powered API for content moderation called Perspective. Perspective’s goal is to “identify toxic comments that can undermine a civil exchange of ideas,” offering a score from zero to 100 on how similar new comments are to others previously identified as toxic, defined as how likely a comment is to make someone leave a conversation. Jigsaw claims its AI can immediately generate an assessment of a phrase’s toxicity more accurately than any keyword blacklist and faster than any human moderator. But studies show that technologies similar to Jigsaw’s still struggle to overcome major challenges, including biases against specific subsets of users. For example, a team at Penn State recently found that posts on social media about people with disabilities could be flagged as more negative or toxic by commonly used public sentiment and toxicity detection models. After training several of these models to complete an open benchmark from Jigsaw, the team observed that the models learned to associate “negative-sentiment” words like “drugs,” “homelessness,” “addiction,” and “gun violence” with disability — and the words “blind,” “autistic,” “deaf,” and “mentally handicapped” with a negative sentiment. “The biggest issue is that they are public models that are easily used to classify texts based on sentiment,” Pranav Narayanan Venkit and Shomir Wilson, the coauthors of the paper, told VentureBeat via email. Narayanan Venkit is a Ph.D. student in informatics at Penn State and Wilson is an assistant professor in Penn State’s College of Information Sciences. “The results are important as they show how machine learning solutions are not perfect and how we need to be more responsible for the technology we create. Such outright discrimination is both wrong and detrimental to the community as it does not represent such communities or languages accurately.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Emergent biases Studies show that language models amplify the biases in data on which they were trained. 
For instance, Codex, a code-generating model developed by OpenAI, can be prompted to write “terrorist” when fed the word “Islam.” Another large language model from Cohere tends to associate men and women with stereotypically “male” and “female” occupations, like “male scientist” and “female housekeeper.” That’s because language models are essentially a probability distribution over words or sequences of words. In practice, a model gives the probability of a word sequence being “valid” — i.e., resembling how people write. Some language models are trained on hundreds of gigabytes of text from occasionally toxic websites and so learn to correlate certain genders, races, ethnicities , and religions with “negative” words, because the negative words are overrepresented in the texts. The model powering Perspective was built to classify rather than generate text. But it learns the same associations — and therefore biases — as generative models. In a study published by researchers at the University of Oxford, the Alan Turing Institute, Utrecht University, and the University of Sheffield, an older version of Perspective struggled to recognize hate speech that used “reclaimed” slurs like “queer” and spelling variations like missing characters. An earlier University of Washington paper published in 2019 found that Perspective was more likely to label tweets from Black users offensive versus tweets from white users. The problem extends beyond Jigsaw and Perspective. In 2019, engineers at Meta (formerly Facebook) reportedly discovered that a moderation algorithm at Meta-owned owned Instagram was 50% more likely to ban Black users than white users. More recent reporting revealed that, at one point, the hate speech detection systems Meta used on Facebook aggressively detected comments denigrating white people more than attacks on other demographic groups. For its part, Jigsaw acknowledges that Perspective doesn’t always work well in certain areas. But the company stresses that it’s intent on reducing false positives for “the high toxicity thresholds that most Perspective users employ.” Perspective users can adjust the confidence level that the model must reach to deem a comment “toxic.” Jigsaw says that most users — which include The New York Times — set the threshold at .7-.9 (70% to 90% confident, where .5 [50%] equates to a coin flip). “Perspective scores already represent probabilities, so there is some confidence information included in the score (low or high toxicity scores can be viewed as high confidence, while mid-range score indicate low confidence),” Jigsaw conversation AI software engineer Lucy Vasserman told VentureBeat via email. “We’re working on clarifying this concept of uncertainty further.” Measuring uncertainty and detecting toxicity A silver bullet to the problem of biased toxicity detection models remains predictably elusive. But the coauthors of a new study, which has yet to be peer-reviewed, explore a technique they claim could make it easier to detect — and remove — problematic word associations that models pick up from data. In the study, researchers from Meta and the University of North Carolina at Chapel Hill propose what they call a “belief graph,” an interface with language models that shows the relationships between a model’s “beliefs” (e.g., “A viper is a vertebrate,” “A viper has a brain”). The graph is editable, allowing users to “delete” individual beliefs they determine to be toxic, for example, or untrue. 
“You could have a model that gives a toxic answer to a question, or rates a toxic statement as ‘true,’ and you don’t want it to do that. So you go in and make a targeted update to that model’s belief and change what it says in response to that input,” Peter Hase, a Ph.D. student at UNC Chapel Hill and a coauthor of the paper, told VentureBeat via email. “If there are other beliefs that are logically entailed by the new model prediction (as opposed to the toxic prediction), the model is consistent and believes those things too. You don’t accidentally change other independent beliefs about the things that the toxic belief involves, whether they’re people groups, moral attitudes, or just a specific topic.” Hase believes that this technique could be applied to the kinds of models that are already a part of widely used products like GPT-3, but that it might not be practical because the methods aren’t perfect yet. He points to complementary work coming out of Stanford that focuses on scaling the update methods to work with models of bigger sizes, which are likely to be used in more natural language applications in the future. “Part of our vision in the paper is to make an interface with language models that shows people what they believe and why,” Hase added. “We wanted to visualize model beliefs and the connections between them, so people could see when changing one belief changes others, [plus] other interesting things like whether the connections between beliefs are logically consistent.” Context-sensitive models Another new work from researchers at Imperial College London pitches the idea of “contextual toxicity detection.” Compared with models that don’t take certain semantics into account, the researchers argue that their models can achieve a lower miss rate (i.e., miss fewer comments that seem harmless on their own but taken in context should be considered toxic) as well as lower false alarm rate (i.e., flag comments that may contain toxic words but aren’t toxic if interpreted in context). “We are proposing prediction models that take previous comments and the root post into account in a structured way, representing them separately from the comment to moderate, [and] preserving the order in which the comments were written,” coauthor and Imperial College London natural language professor Lucia Specia told VentureBeat via email. “This is particularly important when the comments are sarcastic, ironic, or vague.” Above: The Imperial College London researchers’ context-sensitive models are more nuanced in some cases than many production systems. Specia says that context sensitivity can help address existing problems in toxicity detection, like oversensitivity to African American Vernacular English. For example, comments containing the word “ass” are often flagged toxic by tools like Perspective. But the researchers’ models can understand from the context when it’s a friendly, harmless conversation. “These models can certainly be implemented in production. Our results in the paper are based on training on a small dataset of tweets in context, but if the models are trained with sufficient contextual data — which social media platforms have access to — I am confident they can achieve much higher accuracies than context-unaware models,” Specia added. Uncertainty Jigsaw says it’s investigating a different concept — uncertainty — that incorporates the confidence in the toxicity rating into the model. 
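As a rough illustration of how a score and its implied confidence might be used (a sketch, not Jigsaw's implementation), the snippet below auto-flags comments above a high threshold in the 0.7 to 0.9 range mentioned earlier and sorts the rest so that the most uncertain, mid-range scores reach a human moderator first. The comment IDs and scores are hypothetical.

```python
# Rough sketch: use the toxicity score both as a decision threshold and as a
# way to prioritize uncertain comments for human review.
comments = [
    ("c1", 0.95),   # (comment id, toxicity probability)
    ("c2", 0.12),
    ("c3", 0.55),
    ("c4", 0.48),
    ("c5", 0.81),
]

THRESHOLD = 0.8   # typical deployments sit in the 0.7-0.9 range

auto_flagged = [(cid, s) for cid, s in comments if s >= THRESHOLD]
kept = [(cid, s) for cid, s in comments if s < THRESHOLD]

# Mid-range scores are the least certain; review those first.
review_queue = sorted(kept, key=lambda item: abs(item[1] - 0.5))

print("auto-flagged:", auto_flagged)
print("human review order (most uncertain first):", review_queue)
```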
The company claims that uncertainty can help moderators prioritize their time where it’s most needed, especially in areas where the risk for bias or model errors is significant — such as community-specific language, context-dependent content, and content outside of the online conversation domain. “We’re currently working to improve how we handle uncertainty and ensure that content that may be difficult for the models to handle properly receives low confidence scores. We’re [also] exploring how users understand the model’s confidence, weighing possibilities like adding more documentation for how to interpret the scores,” Vasserman said. “The latest technology of large language models and new serving infrastructure to serve them has improved our modeling abilities in several ways. These models have allowed us to serve many more languages than previously possible and they have reduced bias … In addition, the specific large language model we have chosen to use is also more resilient to misspellings and intentional misspellings, because it breaks words down into characters and pieces rather than evaluating the word as a whole.” Toward this end, Perspective today rolled out support for 10 new languages: Arabic, Chinese, Czech, Dutch, Indonesian, Japanese, Korean, Polish, Hindi, and Hinglish — a mix of English and Hindi transliterated using Latin characters. Previously, Perspective was available in English, French, German, Italian, Portuguese, Russian, and Spanish. Detecting biases through annotation Research shows that attempting to “debias” biased toxicity detection models is less effective than addressing the root cause of the problem: the training datasets. In a study from the Allen Institute for AI, Carnegie Mellon, and the University of Washington, researchers investigated how differences in dialect can lead to racial biases in automatic hate speech detection models. They found that annotators — the people responsible for adding labels to the training datasets that serve as examples for the models — are more likely to label phrases in the African American English (AAE) dialect more toxic than general American English equivalents, despite their being understood as non-toxic by AAE speakers. Toxicity detectors are trained on input data — text — annotated for a particular output — “toxic” or “nontoxic” — until they can detect the underlying relationships between the inputs and output results. During the training phase, the detector is fed with labeled datasets, which tell it which output is related to each specific input value. The learning process progresses by constantly measuring the outputs and fine-tuning the system to get closer to the target accuracy. Beyond language, the computer vision domain is rife with examples of prejudice arising from biased annotations. Research has found that ImageNet and Open Images — two large, publicly available image datasets — are U.S.- and Euro-centric. Models trained on these datasets perform worse on images from Global South countries. For example, images of grooms are classified with lower accuracy when they come from Ethiopia and Pakistan, compared to images of grooms from the United States. Cognizant of issues that can arise during the dataset labeling process, Jigsaw says that it has conducted experiments to determine how annotators from different backgrounds and experiences classify things according to toxicity. 
In a study expected to be published in early 2022, researchers at the company found differences in the annotations between labelers who self-identified as African Americans and members of LGBTQ+ community versus annotators who didn’t identify as either of those two groups. “This research is in its early stages, and we’re working on expanding into additional identities too,” Vasserman said. “We’re [also] working on how best to integrate those annotations into Perspective models, as this is non-trivial. For example, we could always average annotations from different groups or we could choose to use annotations from only one group on specific content that might be related to that group. We’re still exploring the options here and the different impact of each potential choice.” Acceptable tradeoffs Jigsaw’s Perspective API is processing over 500 million requests daily for media organizations including Vox Media, OpenWeb, and Disqus. Facebook has applied its automatic hate speech detection models to content from the billions of people that use its social networks. And they aren’t the only ones. But as the technology stands today, even models developed with the best of intentions are likely to make mistakes that disproportionately impact disadvantaged groups. How often they make those mistakes — and the heavy-handedness of the moderation that those mistakes inform — is ultimately up to the platforms. Perspective claims to err on the side of transparency, allowing publishers to show readers feedback on the predicted toxicity of their comments and filter conversations based on the level of predicted toxicity. Other platforms, like Facebook, are more opaque about the predictions that their algorithms make. Hase argues that explainability is increasingly “critical” as language models become more capable — and are are entrusted with more complex tasks. During testimony before the U.S. Congress in April 2018, Facebook CEO Mark Zuckerberg infamously predicted that AI could take a primary role in automatically detecting hate speech on Facebook in the next 5 to 10 years. Leaked documents suggest that the company is far from achieving that goal — but that it hasn’t communicated this to its users. “Work on making language models explainable is an important part of checking whether models deserve this trust at all,” Hase said. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,951
2,020
"Why asking an AI to explain itself can make things worse | MIT Technology Review"
"https://www.technologyreview.com/2020/01/29/304857/why-asking-an-ai-to-explain-itself-can-make-things-worse"
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Why asking an AI to explain itself can make things worse By Will Douglas Heaven archive page Frogger about to speak MS Tech / Getty Upol Ehsan once took a test ride in an Uber self-driving car. Instead of fretting about the empty driver’s seat, anxious passengers were encouraged to watch a “pacifier” screen that showed a car’s-eye view of the road: hazards picked out in orange and red, safe zones in cool blue. For Ehsan , who studies the way humans interact with AI at the Georgia Institute of Technology in Atlanta, the intended message was clear: “Don’t get freaked out—this is why the car is doing what it’s doing.” But something about the alien-looking street scene highlighted the strangeness of the experience rather than reassured. It got Ehsan thinking: what if the self-driving car could really explain itself? The success of deep learning is due to tinkering: the best neural networks are tweaked and adapted to make better ones, and practical results have outpaced theoretical understanding. As a result, the details of how a trained model works are typically unknown. We have come to think of them as black boxes. A lot of the time we’re okay with that when it comes to things like playing Go or translating text or picking the next Netflix show to binge on. But if AI is to be used to help make decisions in law enforcement, medical diagnosis, and driverless cars, then we need to understand how it reaches those decisions—and know when they are wrong. People need the power to disagree with or reject an automated decision, says Iris Howley , a computer scientist at Williams College in Williamstown, Massachusetts. Without this, people will push back against the technology. “You can see this playing out right now with the public response to facial recognition systems,” she says. Ehsan is part of a small but growing group of researchers trying to make AIs better at explaining themselves, to help us look inside the black box. The aim of so-called interpretable or explainable AI (XAI) is to help people understand what features in the data a neural network is actually learning—and thus whether the resulting model is accurate and unbiased. One solution is to build machine-learning systems that show their workings: so-called glassbox—as opposed to black-box—AI. Glassbox models are typically much-simplified versions of a neural network in which it is easier to track how different pieces of data affect the model. “There are people in the community who advocate for the use of glassbox models in any high-stakes setting,” says Jennifer Wortman Vaughan , a computer scientist at Microsoft Research. “I largely agree.” Simple glassbox models can perform as well as more complicated neural networks on certain types of structured data, such as tables of statistics. For some applications that's all you need. But it depends on the domain. If we want to learn from messy data like images or text, we’re stuck with deep—and thus opaque—neural networks. The ability of these networks to draw meaningful connections between very large numbers of disparate features is bound up with their complexity. Even here, glassbox machine learning could help. One solution is to take two passes at the data, training an imperfect glassbox model as a debugging step to uncover potential errors that you might want to correct. Once the data has been cleaned up, a more accurate black-box model can be trained. It's a tricky balance, however. 
Too much transparency can lead to information overload. In a 2018 stud y looking at how non-expert users interact with machine-learning tools, Vaughan found that transparent models can actually make it harder to detect and correct the model’s mistakes. Another approach is to include visualizations that show a few key properties of the model and its underlying data. The idea is that you can see serious problems at a glance. For example, the model could be relying too much on certain features, which could signal bias. These visualization tools have proved incredibly popular in the short time they’ve been around. But do they really help? In the first study of its kind , Vaughan and her team have tried to find out—and exposed some serious issues. The team took two popular interpretability tools that give an overview of a model via charts and data plots, highlighting things that the machine-learning model picked up on most in training. Eleven AI professionals were recruited from within Microsoft, all different in education, job roles, and experience. They took part in a mock interaction with a machine-learning model trained on a national income data set taken from the 1994 US census. The experiment was designed specifically to mimic the way data scientists use interpretability tools in the kinds of tasks they face routinely. What the team found was striking. Sure, the tools sometimes helped people spot missing values in the data. But this usefulness was overshadowed by a tendency to over-trust and misread the visualizations. In some cases, users couldn’t even describe what the visualizations were showing. This led to incorrect assumptions about the data set, the models, and the interpretability tools themselves. And it instilled a false confidence about the tools that made participants more gung-ho about deploying the models, even when they felt something wasn’t quite right. Worryingly, this was true even when the output had been manipulated to show explanations that made no sense. To back up the findings from their small user study, the researchers then conducted an online survey of around 200 machine-learning professionals recruited via mailing lists and social media. They found similar confusion and misplaced confidence. Worse, many participants were happy to use the visualizations to make decisions about deploying the model despite admitting that they did not understand the math behind them. “It was particularly surprising to see people justify oddities in the data by creating narratives that explained them,” says Harmanpreet Kaur at the University of Michigan, a coauthor on the study. “The automation bias was a very important factor that we had not considered.” Ah, the automation bias. In other words, people are primed to trust computers. It’s not a new phenomenon. When it comes to automated systems from aircraft autopilots to spell checkers, studies have shown that humans often accept the choices they make even when they are obviously wrong. But when this happens with tools designed to help us avoid this very phenomenon, we have an even bigger problem. What can we do about it? For some, part of the trouble with the first wave of XAI is that it is dominated by machine-learning researchers, most of whom are expert users of AI systems. Says Tim Miller of the University of Melbourne, who studies how humans use AI systems: “The inmates are running the asylum.” This is what Ehsan realized sitting in the back of the driverless Uber. 
It is easier to understand what an automated system is doing—and see when it is making a mistake—if it gives reasons for its actions the way a human would. Ehsan and his colleague Mark Riedl are developing a machine-learning system that automatically generates such rationales in natural language. In an early prototype, the pair took a neural network that had learned how to play the classic 1980s video game Frogger and trained it to provide a reason every time it made a move. To do this, they showed the system many examples of humans playing the game while talking out loud about what they were doing. They then took a neural network for translating between two natural languages and adapted it to translate instead between actions in the game and natural-language rationales for those actions. Now, when the neural network sees an action in the game, it “translates” it into an explanation. The result is a Frogger-playing AI that says things like “I’m moving left to stay behind the blue truck” every time it moves. Ehsan and Riedl’s work is just a start. For one thing, it is not clear whether a machine-learning system will always be able to provide a natural-language rationale for its actions. Take DeepMind’s board-game-playing AI AlphaZero. One of the most striking features of the software is its ability to make winning moves that most human players would not think to try at that point in a game. If AlphaZero were able to explain its moves, would they always make sense? Reasons help whether we understand them or not, says Ehsan: “The goal of human-centered XAI is not just to make the user agree to what the AI is saying—it is also to provoke reflection.” Riedl recalls watching the livestream of the tournament match between DeepMind's AI and Korean Go champion Lee Sedol. The commentators were talking about what AlphaGo was seeing and thinking. "That wasn’t how AlphaGo worked," says Riedl. "But I felt that the commentary was essential to understanding what was happening." What this new wave of XAI researchers agree on is that if AI systems are to be used by more people, those people must be part of the design from the start—and different people need different kinds of explanations. (This is backed up by a new study from Howley and her colleagues, in which they show that people’s ability to understand an interactive or static visualization depends on their education levels.) Think of a cancer-diagnosing AI, says Ehsan. You’d want the explanation it gives to an oncologist to be very different from the explanation it gives to the patient. Ultimately, we want AIs to explain themselves not only to data scientists and doctors but to police officers using face recognition technology, teachers using analytics software in their classrooms, students trying to make sense of their social-media feeds—and anyone sitting in the backseat of a self-driving car. “We’ve always known that people over-trust technology, and that’s especially true with AI systems,” says Riedl. “The more you say it’s smart, the more people are convinced that it’s smarter than they are.” Explanations that anyone can understand should help pop that bubble. 
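To make the Frogger example a little more concrete, here is a toy sketch of the "actions to rationales as translation" framing. The state encoding and the example pairs are invented for illustration; they are not the representation Ehsan and Riedl actually used.

```python
# Think-aloud gameplay becomes a parallel corpus, exactly as in machine
# translation: source = state and action tokens, target = rationale words.
# The encoding and examples are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Step:
    state: str      # a flat, token-like description of what the agent sees
    action: str     # the move it took
    rationale: str  # what the human said out loud at that moment

corpus = [
    Step("lane2 truck_left gap_ahead", "MOVE_LEFT",
         "I'm moving left to stay behind the blue truck"),
    Step("lane4 log_right water_ahead", "WAIT",
         "I'm waiting for the log to drift closer before jumping"),
]

def to_translation_pair(step: Step) -> tuple[list[str], list[str]]:
    source = step.state.split() + ["<action>", step.action]
    target = step.rationale.lower().split()
    return source, target

for src, tgt in (to_translation_pair(s) for s in corpus):
    print(" ".join(src), "=>", " ".join(tgt))
# A standard encoder-decoder, the same kind of network used to translate
# between natural languages, can then be trained on pairs like these.
```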
"
15,952
2,020
"How to make sure your 'AI for good' project actually does good | VentureBeat"
"https://venturebeat.com/2020/10/31/how-to-make-sure-your-ai-for-good-project-actually-does-good"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How to make sure your ‘AI for good’ project actually does good Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Artificial intelligence has been front and center in recent months. The global pandemic has pushed governments and private companies worldwide to propose AI solutions for everything from analyzing cough sounds to deploying disinfecting robots in hospitals. These efforts are part of a wider trend that has been picking up momentum: the deployment of projects by companies, governments, universities, and research institutes aiming to use AI for societal good. The goal of most of these programs is to deploy cutting-edge AI technologies to solve critical issues such as poverty, hunger, crime, and climate change, under the “AI for good” umbrella. But what makes an AI project good ? Is it the “goodness” of the domain of application, be it health, education, or environment? Is it the problem being solved (e.g. predicting natural disasters or detecting cancer earlier)? Is it the potential positive impact on society, and if so, how is that quantified? Or is it simply the good intentions of the person behind the project? The lack of a clear definition of AI for good opens the door to misunderstandings and misinterpretations, along with great chaos. AI has the potential to help us address some of humanity’s biggest challenges like poverty and climate change. However, as any technological tool, it is agnostic to the context of application, the intended end-user, and the specificity of the data. And for that reason, it can ultimately end up having both beneficial and detrimental consequences. In this post, I’ll outline what can go right and what can go wrong in AI for good projects and will suggest some best practices for designing and deploying AI for good projects. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Success stories AI has been used to generate lasting positive impact in a variety of applications in recent years. For example, Statistics for Social Good out of Stanford University has been a beacon of interdisciplinary work at the nexus of data science and social good. In the last few years, it has piloted a variety of projects in different domains, from matching nonprofits with donors and volunteers to investigating inequities in palliative care. 
Its bottom-up approach, which connects potential problem partners with data analysts, helps these organizations find solutions to their most pressing problems. The Statistics for Social Good team covers a lot of ground with limited manpower. It documents all of its findings on its website, curates datasets, and runs outreach initiatives both locally and abroad. Another positive example is the Computational Sustainability Network, a research group applying computational techniques to sustainability challenges such as conservation, poverty mitigation, and renewable energy. This group adopts a complementary approach for matching computational problem classes like optimization and spatiotemporal prediction with sustainability challenges such as bird preservation, electricity usage disaggregation and marine disease monitoring. This top-down approach works well given that members of the network are experts in these techniques and so are well-suited to deploy and fine-tune solutions to the specific problems at hand. For over a decade, members of CompSustNet have been creating connections between the world of sustainability and that of computing, facilitating data sharing and building trust. Their interdisciplinary approach to sustainability exemplifies the kind of positive impacts AI techniques can have when applied mindfully and coherently to specific real-world problems. Even more recent examples include the use of AI in the fight against COVID-19. In fact, a plethora of AI approaches have emerged to address various aspects of the pandemic, from molecular modeling of potential vaccines to tracking misinformation on social media — I helped write a survey article about these in recent months. Some of these tools, while built with good intentions, had inadvertent consequences. However, others produced positive lasting impacts, especially several solutions created in partnership with hospitals and health providers. For instance, a group of researchers at the University of Cambridge developed the COVID-19 Capacity Planning and Analysis System tool to help hospitals with resource and critical care capacity planning. The system, whose deployment across hospitals was coordinated with the U.K.’s National Health Service, can analyze information gathered in hospitals about patients to determine which of them require ventilation and intensive care. The collected data was percolated up to the regional level, enabling cross-referencing and resource allocation between the different hospitals and health centers. Since the system is used at all levels of care, the compiled patient information could not only help save lives but also influence policy-making and government decisions. Unintended consequences Despite the best intentions of the project instigators, applications of AI towards social good can sometimes have unexpected (and sometimes dire) repercussions. A prime example is the now-infamous COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) project, which various justice systems in the United States deployed. The aim of the system was to help judges assess risk of inmate recidivism and to lighten the load on the overflowing incarceration system. Yet, the tool’s risk of recidivism score was calculated along with factors not necessarily tied to criminal behaviour, such as substance abuse and stability. After an in-depth ProPublica investigation of the tool in 2016 revealed the software’s undeniable bias against blacks, usage of the system was stonewalled. 
COMPAS’s shortcomings should serve as a cautionary tale for black-box algorithmic decision-making in the criminal justice system and other areas of government, and efforts must be made to not repeat these mistakes in the future. More recently, another well-intentioned AI tool for predictive scoring spurred much debate with regard to the U.K. A-level exams. Students must complete these exams in their final year of school in order to be accepted to universities, but they were cancelled this year due to the ongoing COVID-19 pandemic. The government therefore endeavored to use machine learning to predict how the students would have done on their exams had they taken them, and these estimates were then going to be used to make university admission decisions. Two inputs were used for this prediction: any given student’s grades during the 2020 year, and the historical record of grades in the school the student attended. This meant that a high-achieving student in a top-tier school would have an excellent prediction score, whereas a high-achieving student in a more average institution would get a lower score, despite both students having equivalent grades. As a result, two times as many students from private schools received top grades compared to public schools, and over 39% of students were downgraded from the cumulative average they had achieved in the months of the school year before the automatic assessment. After weeks of protests and threats of legal action by parents of students across the country, the government backed down and announced that it would use the average grade proposed by teachers instead. Nonetheless, this automatic assessment serves as a stern reminder of the existing inequalities within the education system, which were amplified through algorithmic decision-making. While the the goals of COMPAS and the UK government were not ill-intentioned, they highlight the fact that AI projects do not always have the intended outcome. In the best case, these misfires can still validate our perception of AI as a tool for positive impact even if they haven’t solved any concrete problems. In the worst case, they experiment on vulnerable populations and result in harm. Improving AI for good Best practices in AI for good fall into two general categories — asking the right questions and including the right people. 1. Asking the right questions Before jumping head-first into a project intending to apply AI for good, there are a few questions you should ask. The first one is: What is the problem, exactly? It is impossible to solve the real problem at hand, whether it be poverty, climate change, or overcrowded correctional facilities. So projects inevitably involve solving what is, in fact, a proxy problem: detecting poverty from satellite imagery, identifying extreme weather events, producing a recidivism risk score. There is also often a lack of adequate data for the proxy problem, so you rely on surrogate data, such as average GDP per census block, extreme climate events over the last decade, or historical data regarding inmates committing crimes when on parole. But what happens when the GDP does not tell the whole story about income, when climate events are progressively becoming more extreme and unpredictable, or when police data is biased? You end up with AI solutions that optimize the wrong metric, make erroneous assumptions, and have unintended negative consequences. It is also crucial to reflect upon whether AI is the appropriate solution. 
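To see that mechanism at work, consider a deliberately simplified calculation. The real grading model was considerably more elaborate, and the blending weight below is invented; the toy only illustrates why anchoring a prediction to a school's history penalizes strong students at historically weaker schools.

```python
# Toy illustration: blend a student's own 2020 performance with the
# school's historical average. The 0.5 weight is invented; the real
# model was far more complicated.
def predicted_grade(student_score: float, school_history_avg: float,
                    history_weight: float = 0.5) -> float:
    """Weight the school's past results against the student's own score."""
    return (1 - history_weight) * student_score + history_weight * school_history_avg

top_student = 90.0                    # identical coursework score for both students
historically_strong_school = 85.0
historically_average_school = 60.0

print(predicted_grade(top_student, historically_strong_school))   # 87.5
print(predicted_grade(top_student, historically_average_school))  # 75.0
# Identical students, a 12.5-point gap, purely from where they went to school.
```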
More often than not, AI solutions are too complex, too expensive, and too technologically demanding to be deployed in many environments. It is therefore of paramount importance to take into account the context and constraints of deployment, the intended audience, and even more straightforward things like whether or not there is a reliable energy grid present at the time of deployment. Things that we take for granted in our own lives and surroundings can be very challenging in other regions and geographies. Finally, given the current ubiquity and accessibility of machine learning and deep learning approaches, you may take for granted that they are the best solution for any problem, no matter its nature and complexity. While deep neural networks are undoubtedly powerful in certain use cases and given a large amount of high-quality data relevant to the task, these factors are rarely the norm in AI-for-good projects. Instead, teams should prioritize simpler and more straightforward approaches, such as random forests or Bayesian networks, before jumping to a neural network with millions of parameters. Simpler approaches also have the added value of being more easily interpretable than deep learning, which is a useful characteristic in real-world contexts where the end users are often not AI specialists. Generally speaking, here are some questions you should answer before developing an AI-for-good project: Who will define the problem to be solved? Is AI the right solution for the problem? Where will the data come from? What metrics will be used for measuring progress? Who will use the solution? Who will maintain the technology? Who will make the ultimate decision based on the model’s predictions? Who or what will be held accountable if the AI has unintended consequences? While there is no guaranteed right answer to any of the questions above, they are a good sanity check before deploying such a complex and impactful technology as AI when vulnerable people and precarious situations are involved. In addition, AI researchers must be transparent about the nature and limitations of the data they are using. AI requires large amounts of data, and ingrained in that data are the inherent inequities and imperfections that exist within our society and social structures. These can disproportionately impact any system trained on the data leading to applications that amplify existing biases and marginalization. It is therefore critical to analyze all aspects of the data and ask the questions listed above, from the very start of your research. When you are promoting a project, be clear about its scope and limitations; don’t just focus on the potential benefits it can deliver. As with any AI project, it is important to be transparent about the approach you are using, the reasoning behind this approach, and the advantages and disadvantages of the final model. External assessments should be carried out at different stages of the project to identify potential issues before they percolate through the project. These should cover aspects such as ethics and bias, but also potential human rights violations, and the feasibility of the proposed solution. 2. Including the right people AI solutions are not deployed in a vacuum or in a research laboratory but involve real people who should be given a voice and ownership of the AI that is being deployed to “help'” them — and not just at the deployment phase of the project. 
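The "simpler approaches first" advice above is cheap to act on in practice: a few lines are enough to fit a small, inspectable baseline and check what it relies on before anyone reaches for a deep network. In the sketch below the dataset is synthetic and the satellite-style feature names are invented, loosely echoing the poverty-mapping proxy problem mentioned earlier.

```python
# A "simple model first" baseline: a random forest on tabular data, with
# feature importances that can be read directly. Data and feature names
# are synthetic and invented for illustration.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 2_000
X = pd.DataFrame({
    "rainfall_mm": rng.gamma(2.0, 30.0, n),
    "night_lights": rng.uniform(0, 1, n),    # satellite-derived proxy
    "road_density": rng.uniform(0, 5, n),
})
# Toy target: a "poverty index" driven mostly by the night-lights proxy.
y = 1.0 - 0.8 * X["night_lights"] + 0.05 * rng.normal(size=n)

baseline = RandomForestRegressor(n_estimators=200, random_state=0)
print("baseline R^2:", cross_val_score(baseline, X, y, cv=5, scoring="r2").mean().round(3))

baseline.fit(X, y)
for name, imp in sorted(zip(X.columns, baseline.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:>13}: {imp:.2f}")
# If this inspectable model is already good enough, and its reliance on each
# feature makes sense to domain experts, a network with millions of
# parameters may not be worth the added opacity.
```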
In fact, it is vital to include non-governmental organizations (NGOs) and charities, since they have the real-world knowledge of the problem at different levels and a clear idea of the solutions they require. They can also help deploy AI solutions so they have the biggest impact — populations trust organizations such as the Red Cross, sometimes more than local governments. NGOs can also give precious feedback about how the AI is performing and propose improvements. This is essential, as AI-for-good solutions should include and empower local stakeholders who are close to the problem and to the populations affected by it. This should be done at all stages of the research and development process, from problem scoping to deployment. The two examples of successful AI-for-good initiatives I cited above (CompSusNet and Stats for Social Good) do just that, by including people from diverse, interdisciplinary backgrounds and engaging them in a meaningful way around impactful projects. In order to have inclusive and global AI, we need to engage new voices, cultures, and ideas. Traditionally, the dominant discourse of AI is rooted in Western hubs like Silicon Valley and continental Europe. However, AI-for-good projects are often deployed in other geographical areas and target populations in developing countries. Limiting the creation of AI projects to outside perspectives does not provide a clear picture about the problems and challenges faced in these regions. So it is important to engage with local actors and stakeholders. Also, AI-for-good projects are rarely a one-shot deal; you will need domain knowledge to ensure they are functioning properly in the long term. You will also need to commit time and effort toward the regular maintenance and upkeep of technology supporting your AI-for-good project. Projects aiming to use AI to make a positive impact on the world are often received with enthusiasm, but they should also be subject to extra scrutiny. The strategies I’ve presented in this post merely serve as a guiding framework. Much work still needs to be done as we move forward with AI-for-good projects, but we have reached a point in AI innovation where we are increasingly having these discussions and reflecting on the relationship between AI and societal needs and benefits. If these discussions turn into actionable results, AI will finally live up to its potential to be a positive force in our society. Thank you to Brigitte Tousignant for her help in editing this article. Sasha Luccioni is a postdoctoral researcher at MILA , a Montreal-based research institute focused on artificial intelligence for social good. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,953
2,021
"How to build a unicorn AI team without unicorns | VentureBeat"
"https://venturebeat.com/2021/06/19/how-to-build-a-unicorn-ai-team-without-unicorns"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How to build a unicorn AI team without unicorns Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. How do you start assembling an AI team? Well, hire unicorns who can understand the business problem, can translate it into the “right” AI building blocks, and can deliver on the implementation and production deployment. Sounds easy! Except that sightings of such unicorns are extremely rare. Even if you find a unicorn, chances are you won’t be able to afford it! In my experience leading Data+AI products and platforms over the past two decades, a more effective strategy is to focus on recruiting solid performers who cumulatively support seven specific skill personas in the team. The 7 skill personas of a unicorn AI team Above: Seven skills personas of a unicorn AI team (Image by Author) Datasets interpreter persona The lifeblood of an AI project is data. Finding the right datasets, preparing the data, and ensuring high quality on an ongoing basis is a key skill. There is a lot of tribal knowledge about datasets, so you require someone who can specialize in tracking the meaning of data attributes and the origins of different datasets. A related challenge with data is tackling multiple definitions within the organization for business metrics. In one of my projects, we were dealing with eight definitions of “monthly new customers” across sales, finance, and marketing. A good starting point for this skill persona is a traditional data warehouse engineer who has strong data modeling skills and an inherent curiosity to correlate the meaning of data attributes with application and business operations. Pipeline builder persona Getting data from multiple sources to AI models requires data pipelines. Within the pipeline, data is cleaned, prepared, transformed, and converted into ML features. These data pipelines (known as Extract-Transform-Load or ETL in traditional data warehousing) can get quite complicated. Organizations typically have pipeline jungles with thousands of pipelines built using heterogeneous big data technologies such as Spark , Hive , and Presto. The pipeline builder persona focuses on building and running pipelines at scale with the right robustness and performance. The best place to find this persona is data engineers with years of experience developing batch as well as real-time event pipelines. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 
AI full-stack persona AI is inherently iterative from design, training, deployment, and re-training. Building ML models require hundreds of experiments for different permutations of code, features, datasets, and model configurations. This persona is a combination of AI domain knowledge and strong system-building skills. They specialize in existing AI platforms, such as Tensorflow , Pytorch , or cloud-based solutions such as AWS , Google , and Azure. With the democratization of these AI platforms and widespread online courses, this persona is no longer a scarcity. In my experience, a strong background in software engineering combined with their curiosity to gain mastery in AI is an extremely effective combination. In hiring for this persona, it is easy to run into geniuses who like to fly solo instead of being a team player – be on the lookout and weed them out early. AI algorithms persona Most AI projects seldom need to start from scratch or implement new algorithms. The role of this persona is to guide the team on the search space of AI algorithms and techniques within the context of the problem. They help reduce dead-ends with course correction and help balance solution accuracy and complexity. This persona is not easy to get given the high demand at places focusing on AI algorithmic innovations. If you cannot afford to get someone full time for this skill, consider getting an expert as a consultant or a startup advisor. Another option is to invest in training the full-stack team by giving them time to learn research advancements and algorithmic internals. Data+AI operations persona After the AI solution is deployed in production, it needs to be continuously monitored to ensure it is working correctly. A lot of things can go wrong in production: data pipelines failing, bad quality data, under-provisioned model inference endpoint, drift in the correctness of model predictions, uncoordinated changes in business metric definitions, and so on. This persona focuses on building the right monitoring and automation to ensure seamless operations. In comparison to traditional DevOps for software products, Data+AI Ops is significantly complex given the number of moving pieces. Google researchers summarized this complexity correctly as the CACE principle: Change Anything Change Everything. A good starting point to find this persona is experienced DataOps engineers aspiring to learn the Data+AI space. Hypothesis planner persona AI projects are full of surprises! The journey from raw data to usable AI intelligence is not a straight line. You need flexible project planning – adapting based on proving or disproving hypotheses about datasets, features, model accuracy, customer experience. A good place to find this skill persona is in traditional data analysts with experience working on multiple concurrent projects with tight deadlines. They can act as excellent project managers given their instincts to track and parallelize hypotheses. Impact owner persona An impact owner is intimately familiar with the details of how the AI offering will be deployed to deliver value. For instance, when solving a problem related to improving customer retention using AI, this persona will have a complete understanding of the journey map associated with customer acquisition, retention, and attrition. They will be responsible for defining how the customer attrition predictions from the AI solution will be implemented by the support team specialist to reduce churn. 
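To give a flavor of the Data+AI operations persona's day-to-day work, here is a hedged sketch of one common drift check, the population stability index (PSI), comparing a feature's training distribution with what the model is currently seeing in production. The data is synthetic, and the 0.2 alert threshold is a common rule of thumb rather than a universal standard.

```python
# Population stability index (PSI): bin a feature using the training data,
# then compare production traffic against those bins. Synthetic data; the
# 0.2 threshold is a rule of thumb, not a universal standard.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e_frac = np.histogram(expected, cuts)[0] / len(expected)
    a_frac = np.histogram(np.clip(actual, cuts[0], cuts[-1]), cuts)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)   # avoid log of zero
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(7)
training_ages = rng.normal(45, 10, 50_000)    # what the model saw in training
production_ages = rng.normal(52, 14, 5_000)   # what it is seeing this week

score = psi(training_ages, production_ages)
print(f"PSI = {score:.3f}")
if score > 0.2:
    print("Significant drift: kick off the investigation / retraining workflow.")
```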
The best place to find this persona is within the existing business team — ideally, an engineer with strong product instincts and pragmatism. Without this persona, teams end up building what is technically feasible rather than what the end-to-end workflow actually requires to generate value. To summarize, these seven skill personas are a must-have for every AI team. How much each one matters depends on the maturity of the data, the type of AI problems, and the skill sets available in the broader data and application teams. For instance, the data interpreter persona is far more critical in organizations whose data is spread across a large number of small tables than in those with a small number of large tables. These factors should guide the right seniority and headcount for each skill persona within the AI team. Hopefully, you can now start building your AI team instead of waiting for unicorns to show up! Sandeep Uttamchandani is Chief Data Officer and VP of Product Engineering at Unravel Data Systems. He is an entrepreneur with more than two decades of experience building Data+AI products and author of the book The Self-Service Data Roadmap: Democratize Data and Reduce Time to Insight (O'Reilly, 2020). "
15,954
2,021
"Is your AI project doomed to fail before it begins?  | VentureBeat"
"https://venturebeat.com/2021/11/28/is-your-ai-project-doomed-to-fail-before-it-begins"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Is your AI project doomed to fail before it begins? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Artificial intelligence (AI), machine learning (ML) and other emerging technologies have potential to solve complex problems for organizations. Yet despite increased adoption over the past two years, only a small percentage of companies feel they are gaining significant value from their AI initiatives. Where are their efforts going wrong? Simple missteps can derail any AI initiative, but there are ways to avoid these missteps and achieve success. Following are four mistakes that can lead to a failed AI implementation and what you should do to avoid or resolve these issues for a successful AI rollout. Don’t solve the wrong problem When determining where to apply AI to solve problems, look at the situation through the right lens and engage both sides of your organization in design thinking sessions, as neither business nor IT have all the answers. Business leaders know which levers can be pulled to achieve a competitive advantage, while technology leaders know how to use technology to achieve those objectives. Design thinking can help create a complete picture of the problem, requirements and desired outcome, and can prioritize which changes will have the biggest operational and financial impact. One consumer product retail company with a 36-hour invoice processing schedule recently experienced this issue when it requested help speeding up its process. A proof of concept revealed that applying an AI/ML solution could decrease processing time to 30 minutes, a 720% speed increase. On paper the improvement looked great. But the company’s weekly settlement process meant the improved processing time didn’t matter. The solution never moved into production. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! When looking at the problem to be solved, it’s important to relate it back to one of three critical bottom-line business drivers: increasing revenue, increasing profitability, or reducing risk. Saving time doesn’t necessarily translate to increased revenue or reduced cost. What business impact will the change bring? Data quality is critical to success Data can have a make-or-break impact on AI programs. Clean, dependable, accessible data is critical to achieving accurate results. 
The algorithm may be good and the model effective, but if the data is poor quality or not easy and feasible to collect, there will be no clear answer. Organizations must determine what data they need to collect, whether they can actually collect it, how difficult or costly it will be to collect, and if it will provide the information needed. A financial institution wanted to use AI/ML to automate loan processing, but missing data elements in source records were creating a high error rate, causing the solution to fail. A second ML model was created to review each record. Those that met the required confidence interval were moved forward in the automated process; those that did not were pulled for human intervention to solve data-quality problems. This multistage process greatly reduced the human interaction required and enabled the institution to achieve an 85% increase in efficiency. Without the additional ML model to address data quality, the automation solution never would have enabled the organization to achieve meaningful results. In-house or third-party? Each has its own challenges Each type of AI solution brings its own challenges. Solutions built in-house provide more control because you are developing the algorithm, cleaning the data, and testing and validating the model. But building your own AI solution is complicated, and unless you’re using open source, you’ll face costs around licensing the tools being used and costs associated with upfront solution development and maintenance. Third-party solutions bring their own challenges, including: No access to the model or how it works Inability to know if the model is doing what it’s supposed to do No access to the data if the solution is SaaS based Inability to do regression testing or know false acceptance or error rates. In highly regulated industries, these issues become more challenging since regulators will be asking questions on these topics. A financial services company was looking to validate a SaaS solution that used AI to identify suspicious activity. The company had no access to the underlying model or the data and no details on how the model determined what activity was suspicious. How could the company perform due diligence and verify the tool was effective? In this instance, the company found its only option was to perform simulations of suspicious or nefarious activity it was trying to detect. Even this method of validation had challenges, such as ensuring the testing would not have a negative impact, create denial-of-service conditions, or impact service availability. The company decided to run simulations in a test environment to minimize risk of production impact. If companies choose to leverage this validation method, they should review service agreements to verify they have authority to conduct this type of testing and should consider the need to obtain permission from other potentially impacted third parties. Invite all of the right people to the party When considering developing an AI solution, it’s important to include all relevant decision makers upfront, including business stakeholders, IT, compliance, and internal audit. This ensures all critical information on requirements is gathered before planning and work begins. A hospitality company wanted to automate its process for responding to data subject access requests (DSARs) as required by the General Data Protection Regulation (GDPR), Europe’s strict data-protection law. 
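The multistage pattern in the loan-processing example is straightforward to express in code: a model scores each record, confident predictions flow through automatically, and the rest land in a human review queue. The sketch below uses synthetic data, and the 0.9 confidence threshold is an invented example rather than the institution's actual setting.

```python
# Route records by model confidence: confident predictions are processed
# automatically, low-confidence ones go to a human review queue.
# Synthetic data; the 0.9 threshold is an illustrative assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 1_000
X = rng.normal(size=(n, 4))     # stand-in for loan application features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=n) > 0).astype(int)

model = LogisticRegression().fit(X[:800], y[:800])

incoming = X[800:]                                  # today's applications
confidence = model.predict_proba(incoming).max(axis=1)
CONFIDENCE_THRESHOLD = 0.9

auto_processed = incoming[confidence >= CONFIDENCE_THRESHOLD]
needs_review = incoming[confidence < CONFIDENCE_THRESHOLD]
print(f"auto-processed: {len(auto_processed)}, sent to human review: {len(needs_review)}")
# In the article's example a second model flagged records with data-quality
# gaps; the same thresholding idea decides which records a person must see.
```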
A DSAR requires organizations to provide, on request, a copy of any personal data the company is holding for the requestor and the purpose for which it is being used. The company engaged an outside provider to develop an AI solution to automate DSAR process elements but did not involve IT in the process. The resulting requirements definition failed to align with the company’s supported technology solutions. While the proof of concept verified the solution would result in more than a 200% increase in speed and efficiency, the solution did not move to production because IT was concerned that the long-term cost of maintaining this new solution would exceed the savings. In a similar example, a financial services organization didn’t involve its compliance team in developing requirements definitions. The AI solution being developed did not meet the organization’s compliance standards, the provability process hadn’t been documented, and the solution wasn’t using the same identity and access management (IAM) standards the company required. Compliance blocked the solution when it was only partially through the proof-of-concept stage. It’s important that all relevant voices are at the table early when developing or implementing an AI/ML solution. This will ensure the requirements definition is correct and complete and that the solution meets required standards as well as achieves the desired business objectives. When considering AI or other emerging technologies, organizations need to take the right actions early in the process to ensure success. Above all, they must make sure that 1) the solution they are pursuing meets one of the three key objectives — increasing revenue, improving profitability, or reducing risk, 2) they have processes in place to get the necessary data, 3) their build vs. buy decision is well-founded, and 4) they have all of the right stakeholders involved early on. Scott Laliberte is Managing Director and Global Leader of the Emerging Technology Group at Protiviti. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,955
2,021
"Why most AI implementations fail, and what enterprises can do to beat the odds | VentureBeat"
"https://venturebeat.com/2021/06/28/why-most-ai-implementations-fail-and-what-enterprises-can-do-to-beat-the-odds"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Why most AI implementations fail, and what enterprises can do to beat the odds Share on Facebook Share on X Share on LinkedIn Presented by BeyondMinds In recent years, AI has gained strong market traction. Enterprises across all industries began examining ways in which AI solutions can be deployed on top of their legacy IT infrastructure, addressing a host of pain points and business needs, from boosting productivity to reducing production defects, streamlining cumbersome processes, identifying fraudulent transactions, optimizing pricing, reducing operational costs, and delivering hyper-customized service. The reason that AI can power such a rich variety of use cases is that AI in itself is a very broad term. It covers diverse domains such as natural language processing (NLP), computer vision, speech recognition, and time series analysis. Each of these domains can serve as the base for developing AI solutions tailored to a specific use case of one company, utilizing its particular datasets, environment, and desired outputs. Despite AI’s immense potential to transform any business, oftentimes this potential is not realized. The somber reality is that most AI projects fail. According to a Gartner research, only 15% of AI solutions deployed by 2022 will be successful , let alone create ROI positive value. This disparity between the big promise of revolutionizing businesses and the high failure rate in reality , should be of interest to any enterprise embarking on the digital transformation journey. These enterprises should ask themselves two key questions: “Why do most AI projects fail?” and “Is there any methodology that can overcome this failure rate, paving the way to successful deployments of AI in production?” The answer to the first question starts with data. Specifically, the challenge of processing data in a real-life production environment, as opposed to a controlled lab environment. AI solutions are based on feeding algorithms with input data, which is processed into outputs in the desired form that serves the business case, such as data classification, predictions, anomaly detection, etc. To be able to produce accurate outputs, AI models are trained with the company’s historical data. A well-trained model should be able to deal with data that is similar to the samples it was trained with. This model may keep running smoothly in a controlled lab environment. However, as soon as it’s fed with data originating from outside the scope of the training data distribution, it will fail miserably. Unfortunately, this is what often happens in real-life production environments. 
This is perhaps the core reason most AI projects fail in production: the data used to train the model in sterile lab environments is static and fully controlled, while data in real-life environments tends to be much messier. Let’s take, for example, a company that deploys a text analysis model at its support center, with the aim of automatically analyzing emails from customers. The model is trained with a massive pool of text written in proper English and reaches high accuracy levels in extracting the key messages of each document. However, as soon as this model is deployed in the actual support center it runs into text riddled with typos, slang, grammar errors, and emojis. Facing this kind of “dirty data,” most AI models are generally not robust enough to produce meaningful outputs, let alone provide long-term value. Launching an AI solution in production is only half the battle. The second pitfall in AI implementation is maintaining the solution on course once deployed. This requires continuous control of data and model versions, optimization of a human-machine feedback loop, ongoing monitoring of the robustness and generalization of the model, and constant noise detection and correlation checks. This ongoing maintenance of an AI solution in production can be an extremely challenging and expensive aspect of deploying AI solutions. The combined challenge of launching an AI solution in a noisy, dynamic production environment and keeping it on the rails so it continues to deliver accurate predictions is the core reason that makes AI implementation profoundly complex. Whether trying to tackle this technological feat in-house or turning to an external provider — companies struggle with AI implementation. So, what can companies do in order to overcome these challenges and beat the 85% failure rate? While the textbook on guaranteed AI implementations has yet to be written, the following four guidelines can be used to curb down the risk factors that typically jeopardize deployments: 1. Customizing the AI solution for each environment Every AI use case is unique. Even in the same vertical or operational area, each business uses specific data to achieve its own goals, according to a particular business logic. For an AI solution to provide the perfect fit for the data, environment, and business needs, all these specificities need to be translated and built into the solution. Off-the-shelf AI solutions, in contrast, are not customized to specific needs and constraints of the business and will be less effective in creating accurate outputs and value. 2. Using a robust and scalable platform The robustness of AI solutions can be measured by their ability to cope in extreme data scenarios, facing noisy, unlabeled and constantly changing data. In evaluating AI solutions, enterprises should ensure that the anticipated outcomes will withstand their real-life production environments, instead of relying just on performance tests in lab conditions. 3. Staying on course once in production AI solutions must also be evaluated on their stability over time. Companies should familiarize themselves with the process of retraining AI models in live production environments, constantly fine-tuning them with feedback by human inspectors. 4. Adding new AI use cases over time An AI solution is always a means to an end, not a goal in itself. As such, the contribution of AI solutions to the business should be evaluated considering the broad perspective of the enterprise’s business needs, goals, and digital strategy. 
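Returning to the support-center example for a moment, one pragmatic safeguard is a cheap input check in front of the model: if an incoming email strays too far from the vocabulary the model was trained on, send it to a person instead. The vocabulary, threshold, and sample messages below are invented; this is an illustration of the idea, not BeyondMinds' product.

```python
# Flag incoming emails whose wording strays too far from the training
# vocabulary (typos, slang, emojis) and route them to a human instead of
# the text model. Vocabulary, threshold, and examples are invented.
import re

TRAINING_VOCAB = {
    "my", "order", "has", "not", "arrived", "please", "refund", "the",
    "payment", "i", "want", "to", "cancel", "subscription", "invoice",
}
OOV_THRESHOLD = 0.4   # more than 40% unknown tokens: human review

def out_of_vocab_rate(text: str) -> float:
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 1.0
    return sum(1 for t in tokens if t not in TRAINING_VOCAB) / len(tokens)

emails = [
    "My order has not arrived, please refund the payment.",
    "yo ur app is sooo buggy lol 😡 gimme my $$ back!!!",
]
for email in emails:
    rate = out_of_vocab_rate(email)
    route = "human review" if rate > OOV_THRESHOLD else "text model"
    print(f"{rate:.2f} -> {route}")
```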
One-point solutions may deliver on a specific use case but will create a painstaking patchwork once additional AI solutions are deployed to cover more use cases. The potential that lies in AI may very well transform businesses and disrupt entire industries by the end of this decade. But as with any disruptive technology, the path to implementing AI is treacherous, with most projects falling by the wayside of the digital transformation journey. This is why it's crucial for enterprises to learn from these past failures, identify the pitfalls in deploying AI into production, and familiarize themselves with the technology to the point where they have a solid understanding of how specific AI solutions can deliver on their expectations. Sharon Reisner is the CMO of BeyondMinds, an enterprise AI software provider delivering AI solutions and accelerating AI transformations. Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they're always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected]. "
15,956
2,021
"AI code discovery platform CatalyzeX raises $1.64M | VentureBeat"
"https://venturebeat.com/2021/11/16/ai-code-discovery-platform-catalyzex-raises-1-64m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI code discovery platform CatalyzeX raises $1.64M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. California-based CatalyzeX , a startup that offers a platform for AI/ML code discovery and know-how, today announced it has raised $1.64 million in a seed round of funding led by Unshackled Ventures, Darling Ventures, Kepler Ventures, On Deck, Abstraction Capital, Unpopular Ventures, and Basecamp Fund. The company said it plans to use the round — which also saw the participation of multiple angels — to further accelerate the development of its product, democratizing AI for builders worldwide. Over the years, tens of thousands of AI researches have been conducted, building a huge repository of technical material for various use-cases and industries. However, finding relevant information from this huge chunk for a project at hand has long been a challenge for developers and data scientists around the world. They’d often end up spending hours on Google, searching papers that could contain code snippets and models to build on (only 10 to 12% share code) and other know-how that could accelerate the development of their AI project. CatalyzeX for AI code discovery Above: CatalyzeX AI code discovery platform Prompted by this challenge in their own professional careers, brothers Gaurav and Himanshu Ragtah decided to start CatalyzeX in 2019. The startup offers a website that curates AI research papers and studies from the web, giving devs a one-stop-shop to discover ML techniques and know-how, along with the corresponding code, for their respective projects. “CatalyzeX’s offering is powered by crawlers, aggregators, and classifiers we’ve built in-house to automatically go through technical papers as well as code platforms daily and to match and link machine learning models and techniques with various corresponding code implementations ,” Gaurav told Venturebeat in an email. “We also allow code submissions and feedback from members of the CatalyzeX network.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The free-to-access platform is a search engine of sorts, where a developer picks the recommendations or puts in a problem query, like cancer detection, in the search field. The results show all relevant available ML models/techniques — with full paper and code implementation — that could help with the problem. 
If the code is not publicly available, the platform also provides an option to get in touch with the authors to request it or get further questions answered. In addition to this, CatalyzeX also offers a browser extension that automatically displays links to code implementations for ML techniques and papers appearing in Google Search results. “Since code is the lingua franca for builders and makers, not walls of text, and given the sheer volume of developments in AI research every single day, surfacing relevant code implementations greatly saves time and effort for developers and technical non-experts in discovering and assessing viable options to leverage artificial intelligence in their products and processes,” the cofounder added. Focus on addressing current status quo, growing user base While platforms like 42papers and Deepai.org also offer AI research and know-how, CatalyzeX claims to differentiate with a much larger repository for model/techniques and code discovery. The platform currently serves over 30,000 users every week with more than 500,000 code implementations. However, Gaurav emphasized that the real challenge is not to beat these sites but to address the current status quo, which is heavily fragmented and holding back significant technology development from reaching the real world. This, he said, will be done through accelerating the development of the product, taking it to more developers and data scientists around the world. Gaurav did not share specific product development plans, but he did note that a part of the funding will go toward hiring product designers and engineers who would work upgrading the platform. “We also have integrations and partnerships planned with several code-collaboration and AI research platforms,” he added while noting that they are also exploring monetization options such as introducing a paid tier with advanced search filters and integrations with development/deployment environments or connecting high-skill talent with global opportunities in AI. According to PwC , AI could contribute up to $15.7 trillion to the global economy in 2030. Of this, $6.6 trillion is likely to come from increased productivity and $9.1 trillion from consumption-side effects. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,957
2,020
"4 things you need to understand about edge computing | VentureBeat"
"https://venturebeat.com/2020/03/29/4-things-you-need-to-understand-about-edge-computing"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest 4 things you need to understand about edge computing Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Edge computing has claimed a spot in the technology zeitgeist as one of the topics that signals novelty and cutting-edge thinking. For a few years now, it has been assumed that this way of doing computing is, one way or another, the future. But until recently the discussion has been mostly hypothetical, because the infrastructure required to support edge computing has not been available. That is now changing as a variety of edge computing resources, from micro data centers to specialized processors to necessary software abstractions , are making their way into the hands of application developers, entrepreneurs, and large enterprises. We can now look beyond the theoretical when answering questions about edge computing’s usefulness and implications. So, what does the real-world evidence tell us about this trend? In particular, is the hype around edge computing deserved, or is it misplaced? Below, I’ll outline the current state of the edge computing market. Distilled down, the evidence shows that edge computing is a real phenomenon born of a burgeoning need to decentralize applications for cost and performance reasons. Some aspects of edge computing have been over-hyped, while others have gone under the radar. The following four takeaways attempt to give decision makers a pragmatic view of the edge’s capabilities now and in the future. 1. Edge computing isn’t just about latency Edge computing is a paradigm that brings computation and data storage closer to where it is needed. It stands in contrast to the traditional cloud computing model, in which computation is centralized in a handful of hyperscale data centers. For the purposes of this article, the edge can be anywhere that is closer to the end user or device than a traditional cloud data center. It could be 100 miles away, one mile away, on-premises, or on-device. Whatever the approach, the traditional edge computing narrative has emphasized that the power of the edge is to minimize latency, either to improve user experience or to enable new latency-sensitive applications. This does edge computing a disservice. While latency mitigation is an important use case, it is probably not the most valuable one. 
Another use case for edge computing is to minimize network traffic going to and from the cloud, or what some are calling cloud offload , and this will probably deliver at least as much economic value as latency mitigation. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The underlying driver of cloud offload is immense growth in the amount of data being generated, be it by users, devices, or sensors. “Fundamentally, the edge is a data problem,” Chetan Venkatesh, CEO of Macrometa, a startup tackling data challenges in edge computing, told me late last year. Cloud offload has arisen because it costs money to move all this data, and many would rather not move it to if they don’t have to. Edge computing provides a way to extract value from data where it is generated, never moving it beyond the edge. If necessary, the data can be pruned down to a subset that is more economical to send to the cloud for storage or further analysis. A very typical use for cloud offload is to process video or audio data, two of the most bandwidth-hungry data types. A retailer in Asia with 10,000+ locations is processing both, using edge computing for video surveillance and in-store language translation services, according to a contact I spoke to recently who was involved in the deployment. But there are other sources of data that are similarly expensive to transmit to the cloud. According to another contact, a large IT software vendor is analyzing real-time data from its customers’ on-premises IT infrastructure to preempt problems and optimize performance. It uses edge computing to avoid backhauling all this data to AWS. Industrial equipment also generates an immense amount of data and is a prime candidate for cloud offload. 2. The edge is an extension of the cloud Despite early proclamations that the edge would displace the cloud, it is more accurate to say that the edge expands the reach of the cloud. It will not put a dent in the ongoing trend of workloads migrating to the cloud. But there is a flurry of activity underway to extend the cloud formula of on-demand resource availability and abstraction of physical infrastructure to locations increasingly distant from traditional cloud data centers. These edge locations will be managed using tools and approaches evolved from the cloud, and over time the line between cloud and edge will blur. The fact that the edge and the cloud are part of the same continuum is evident in the edge computing initiatives of public cloud providers like AWS and Microsoft Azure. If you are an enterprise looking to do on-premises edge computing, Amazon will now send you an AWS Outpost – a fully assembled rack of compute and storage that mimics the hardware design of Amazon’s own data centers. It is installed in a customer’s own data center and monitored, maintained, and upgraded by Amazon. Importantly, Outposts run many of the same services AWS users have come to rely on, like the EC2 compute service, making the edge operationally similar to the cloud. Microsoft has a similar aim with its Azure Stack Edge product. These offerings send a clear signal that the cloud providers envision cloud and edge infrastructure unified under one umbrella. 3. Edge infrastructure is arriving in phases While some applications are best run on-premises, in many cases application owners would like to reap the benefits of edge computing without having to support any on-premises footprint. 
This requires access to a new kind of infrastructure, something that looks a lot like the cloud but is much more geographically distributed than the few dozen hyperscale data centers that comprise the cloud today. This kind of infrastructure is just now becoming available, and it’s likely to evolve in three phases, with each phase extending the edge’s reach by means of a wider and wider geographic footprint. Phase 1: Multi-Region and Multi-Cloud The first step toward edge computing for a large swath of applications will be something that many might not consider edge computing, but which can be seen as one end of a spectrum that includes all the edge computing approaches. This step is to leverage multiple regions offered by the public cloud providers. For example, AWS has data centers in 22 geographic regions, with four more announced. An AWS customer serving users in both North America and Europe might run its application in both the Northern California region and the Frankfurt region, for instance. Going from one region to multiple regions can drive a big reduction in latency, and for a large set of applications, this will be all that’s needed to deliver a good user experience. At the same time, there is a trend toward multi-cloud approaches, driven by an array of considerations including cost efficiencies, risk mitigation, avoidance of vendor lock-in, and desire to access best-of-breed services offered by different providers. “Doing multi-cloud and getting it right is a very important strategy and architecture today,” Mark Weiner, CMO at distributed cloud startup Volterra, told me. A multi-cloud approach, like a multi-region approach, marks an initial step toward distributed workloads on a spectrum that progresses toward more and more decentralized edge computing approaches. Phase 2: The Regional Edge The second phase in the edge’s evolution extends the edge a layer deeper, leveraging infrastructure in hundreds or thousands of locations instead of hyperscale data centers in just a few dozen cities. It turns out there is a set of players who already have an infrastructure footprint like this: Content Delivery Networks. CDNs have been engaged in a precursor to edge computing for two decades now, caching static content closer to end users in order to improve performance. While AWS has 22 regions, a typical CDN like Cloudflare has 194. What’s different now is these CDNs have begun to open up their infrastructure to general-purpose workloads, not just static content caching. CDNs like Cloudflare, Fastly, Limelight, StackPath, and Zenlayer all offer some combination of container-as-a-service , VM-as-a-service , bare-metal-as-a-service , and serverless functions today. In other words, they are starting to look more like cloud providers. Forward-thinking cloud providers like Packet and Ridge are also offering up this kind of infrastructure, and in turn AWS has taken an initial step toward offering more regionalized infrastructure, introducing the first of what it calls Local Zones in Los Angeles, with additional ones promised. Phase 3: The Access Edge The third phase of the edge’s evolution drives the edge even further outward, to the point where it is just one or two network hops away from the end user or device. In traditional telecommunications terminology this is called the Access portion of the network, so this type of architecture has been labeled the Access Edge. 
The typical form factor for the Access Edge is a micro data center , which could range in size from a single rack to roughly that of a semi trailer, and could be deployed on the side of the road or at the base of a cellular network tower, for example. Behind the scenes, innovations in things like power and cooling are enabling higher and higher densities of infrastructure to be deployed in these small-footprint data centers. New entrants such as Vapor IO, EdgeMicro, and EdgePresence have begun to build these micro data centers in a handful of US cities. 2019 was the first major buildout year, and 2020 – 2021 will see continued heavy investment in these buildouts. By 2022, edge data center returns will be in focus for those who made the capital investments in them, and ultimately these returns will reflect the answer to the question: are there enough killer apps for bringing the edge this close to the end user or device? We are very early in the process of getting an answer to this question. A number of practitioners I’ve spoken to recently have been skeptical that the micro data centers in the Access Edge are justified by enough marginal benefit over the regional data centers of the Regional Edge. The Regional Edge is already being leveraged in many ways by early adopters, including for a variety of cloud offload use cases as well as latency mitigation in user-experience-sensitive domains like online gaming, ad serving, and e-commerce. By contrast, the applications that need the super-low latencies and very short network routes of the Access Edge tend to sound further off: autonomous vehicles, drones, AR/VR, smart cities, remote-guided surgery. More crucially, these applications must weigh the benefits of the Access Edge against doing the computation locally with an on-premises or on-device approach. However, a killer application for the Access Edge could certainly emerge – perhaps one that is not in the spotlight today. We will know more in a few years. 4. New software is needed to manage the edge I’ve outlined above how edge computing describes a variety of architectures and that the “edge” can be located in many places. However, the ultimate direction of the industry is one of unification, toward a world in which the same tools and processes can be used to manage cloud and edge workloads regardless of where the edge resides. This will require the evolution of the software used to deploy, scale, and manage applications in the cloud, which has historically been architected with a single data center in mind. Startups such as Ori, Rafay Systems, and Volterra, and big company initiatives like Google’s Anthos , Microsoft’s Azure Arc , and VMware’s Tanzu are evolving cloud infrastructure software in this way. Virtually all of these products have a common denominator: They are based on Kubernetes, which has emerged as the dominant approach to managing containerized applications. But these products move beyond the initial design of Kubernetes to support a new world of distributed fleets of Kubernetes clusters. These clusters may sit atop heterogeneous pools of infrastructure comprising the “edge,” on-premises environments, and public clouds, but thanks to these products they can all be managed uniformly. Initially, the biggest opportunity for these offerings will be in supporting Phase 1 of the edge’s evolution, i.e. moderately distributed deployments that leverage a handful of regions across one or more clouds. 
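What uniform management across a handful of clusters looks like in practice can be sketched with the official Kubernetes Python client: loop over every context in a kubeconfig and report deployment health from one place. This is a minimal sketch of the pattern rather than how Anthos, Azure Arc, Tanzu, or the startups above implement it, and it assumes a kubeconfig that already contains one context per cluster and a `default` namespace to inspect.

```python
from kubernetes import client, config  # pip install kubernetes

def fleet_report(namespace: str = "default") -> None:
    """Print deployment readiness for every cluster context in the kubeconfig."""
    contexts, _active = config.list_kube_config_contexts()
    for ctx in contexts:
        name = ctx["name"]
        api_client = config.new_client_from_config(context=name)
        apps = client.AppsV1Api(api_client=api_client)
        for dep in apps.list_namespaced_deployment(namespace).items:
            ready = dep.status.ready_replicas or 0
            wanted = dep.spec.replicas or 0
            status = "OK" if ready == wanted else "DEGRADED"
            print(f"[{name}] {dep.metadata.name}: {ready}/{wanted} {status}")

if __name__ == "__main__":
    fleet_report()
```

The commercial products in this space wrap the same idea in policy, fleet-wide rollout, and drift detection rather than ad hoc scripts.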
But this puts them in a good position to support the evolution to the more distributed edge computing architectures beginning to appear on the horizon. “Solve the multi-cluster management and operations problem today and you’re in a good position to address the broader edge computing use cases as they mature,” Haseeb Budhani, CEO of Rafay Systems, told me recently. On the edge of something great Now that the resources to support edge computing are emerging, edge-oriented thinking will become more prevalent among those who design and support applications. Following an era in which the defining trend was centralization in a small number of cloud data centers, there is now a countervailing force in favor of increased decentralization. Edge computing is still in the very early stages, but it has moved beyond the theoretical and into the practical. And one thing we know is this industry moves quickly. The cloud as we know it is only 14 years old. In the grand scheme of things, it will not be long before the edge has left a big mark on the computing landscape. James Falkoff is an investor with Boston-based venture capital firm Converge. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,958
2,021
"Combining edge computing and IoT to unlock autonomous and intelligent applications | VentureBeat"
"https://venturebeat.com/2021/03/10/combining-edge-computing-and-iot-to-unlock-intelligent-applications"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Exclusive Combining edge computing and IoT to unlock autonomous and intelligent applications Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The boom in internet-connected devices and the unprecedented amount of data being collected has left enterprises grappling with the challenges of storing, securing, and processing the data at scale. The sheer amount of data involved is driving the case for edge computing, even as enterprises continue with their digital transformation plans. Edge computing refers to moving processing power to the network edge — where the devices are — instead of first transferring the data to a centralized location, whether that is to a datacenter or a cloud provider. Edge computing analyzes the data near where it’s being collected, which reduces internet bandwidth usage and addresses security and scalability concerns over where the data is stored and how it’s being transferred. The main drivers are internet-of-things (IoT) and real-time applications that demand instantaneous data processing. 5G deployments are accelerating this trend. Enterprises have been focused on moving their applications to the cloud over the past few years. Analysts estimate that 70% of organizations have at least one application in the cloud, and enterprise decision-makers say digital transformation is one of their top priorities. However, as more data-hungry applications come online, it’s clear there are limits to an all-cloud strategy . By 2025, 175 zettabytes (or 175 trillion gigabytes) of data will be generated around the globe, and more than 90 zettabytes of that data will be created by edge devices, according to IDC’s Data Age 2025 report. That is a lot of data that needs to be uploaded someplace before anything can be done with it, and there may not always be enough bandwidth to do so. Latency is also a problem since it would take time for data to travel the distance from the device to where the analysis is being performed and come back to the device with the results. Finally, there is no guarantee that the network would always be available or reliable. If the network is unavailable for some reason, the application is essentially offline. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 
“You’re backhauling data to a cloud that’s far away, miles away,” said James Thomason, CTO of EDJX, which provides a platform that makes it easy for developers to write edge and IoT applications and secure edge data at the source. “That’s an insurmountable speed of light problem.” Analysts estimate that 91% of today’s data is created and processed in centralized datacenters. By 2022, about 75% of all data will need analysis and action at the edge. “We knew when we started EDJX that the pendulum would have to swing from cloud and centralization back to decentralized,” Thomason said. The case of edge in enterprises Edge computing isn’t limited to just sensors and other IoT; it can also involve traditional IT devices, such as laptops, servers, and handheld systems. Enterprise applications such as enterprise resource planning (ERP), financial software, and data management systems typically don’t need the level of real-time instantaneous data processing most commonly associated with autonomous applications. Edge computing has the most relevance in the world of enterprise software in the context of application delivery. Employees don’t need access to the whole application suite or all of the company’s data. Providing them just what they need with limited data generally results in better performance and user experience. Edge computing also makes it possible to harness AI in enterprise applications, such as voice recognition. Voice recognition applications need to work locally for fast response, even if the algorithm is trained in the cloud. “For the first time in history, computing is moving out of the realm of abstract stuff like spreadsheets, web browsers, video games, et cetera, and into the real world,” Thomason said. Devices are sensing things in the real world and acting based on that information. Developing for the edge Next-generation applications and services require a new computing infrastructure that delivers low latency networks and high-performance computing at the extreme edge of the network. That is the idea behind Public Infrastructure Network Node (PINN), the initiative out of the Autonomy Institute, a cooperative research consortium focused on advancing and accelerating autonomy and AI at the edge. PINN is a unified open standard supporting 5G wireless, edge computing, radar, lidar, enhanced GPS, and intelligent transportation systems (ITS). PINN looks like a streetlight post, so a PINN cluster could potentially provide a lot of computing power without requiring a lot of cell towers or heavy cables. According to Thomason, PINN clusters in a city deployment could be positioned to collect information from the sensors and cameras at a street intersection. The devices can see things a driver can’t see — such as both directions of traffic, or that a pedestrian is about to enter the crosswalk — and know things the driver doesn’t know — like that an emergency vehicle is on the way or traffic lights are about to change. Edge computing using PINN is what will make it possible to process all of the signals and do something about it, whether that is to make the traffic lights change or force the autonomous vehicle to do something differently. Currently, only vetted developers would be allowed in the PINN ecosystem, Thomason said. Developers write code that is then compiled in WebAssembly, which is the actual code that runs on PINN. 
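EDJX has not published the application code that would run on a PINN, but the intersection scenario described above reduces to fusing a few local signals into a decision without a round trip to the cloud. The sketch below is purely illustrative; the signal names, thresholds, and actions are assumptions, not part of any PINN specification.

```python
from dataclasses import dataclass

@dataclass
class IntersectionState:
    """Fused view of one intersection, built from camera, radar, and city feeds."""
    pedestrian_in_crosswalk: bool
    approaching_vehicle_speed_kmh: float
    emergency_vehicle_inbound: bool

def decide(state: IntersectionState) -> list[str]:
    """Return local actions; nothing here requires a cloud round trip."""
    actions = []
    if state.emergency_vehicle_inbound:
        actions.append("hold cross traffic and clear the emergency route")
    if state.pedestrian_in_crosswalk and state.approaching_vehicle_speed_kmh > 30:
        actions.append("broadcast pedestrian warning to connected vehicles")
        actions.append("extend the pedestrian crossing phase")
    return actions or ["no action"]

print(decide(IntersectionState(True, 45.0, False)))
```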
Using WebAssembly makes it possible to have a very small attack surface that’s very hardened, making it more difficult for an adversary to break out of the application and get to the data on the PINN, Thomason said. PINN in the real world Autonomy Institute announced a pilot program for PINN at the Texas Military Department’s Camp Mabry location in Austin, Texas. The program will deploy PINNs 1,000 feet apart on a sidewalk over the 400-acre property. With the pilot, the focus will be on optimizing traffic management, autonomous cards, industrial robotics, autonomous delivery, drones that respond to 911 calls, and automated road and bridge inspection — all the things a smart city would care about. Above: The Autonomy Institute partnered with Atrius Industries and EDJX for a pilot program to deliver autonomous solutions at the edge. The first PINNS are scheduled to come online in the second quarter of 2021, and the goal is to have tens of thousands of PINNs deployed by mid-2022. Eventually, the program will be expanded from Austin to other major cities in the United States and around the world, EDJX said. While the pilot program is specifically for building out city infrastructure , Thomason said this was an opportunity to explore other contexts to use PINN. As developers start developing for the platform, there will be opportunities to build and test applications for other industry sectors and use cases where data needs to be aggregated from multiple sources and fused together. Real-world edge applications on PINN can cover a whole range of things, including industrial IoT, artificial intelligence, augmented reality, and robotics. “That general pattern of sensor data, fusion, and things happening in the real world is happening across industries,” Thomason said. “It’s not just smart cities and vehicles.” For specific industries, there are different ways PINNs can be used. The energy sector needs to monitor the pipelines for natural gas and oil for signs of leaks — for financial reasons and over environmental concerns. However, having enough sensors with sniffers to cover all the pipelines and wells could be too difficult. But setting up an infrared camera or a spectrometer to see the leaks and then raise the alert would prevent costly damages. In another example, a factory may use cameras or other sensors to detect the presence of a worker inside the assembly line before starting the machinery. “If you can use computing and sensors to do that, you can reduce workplace accidents significantly,” Thomason said. It is up to the developers that come to the platform what kind of applications they will build — the PINN had to exist first, Autonomy Institute chair Jeffrey DeCoux said. PINN deployments will also encourage more work around sensors, 5G deployments , and all other technologies that depend on edge computing. “Everybody came to the same realization: If we don’t do this, all of these industry 4.0 applications will never happen,” DeCoux said. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
"
15,959
2,021
"Cloudflare DLP brings zero trust to corporate network data | VentureBeat"
"https://venturebeat.com/2021/03/24/cloudflare-dlp-brings-zero-trust-to-corporate-network-data"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cloudflare DLP brings zero trust to corporate network data Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Cloudflare’s new data loss prevention offering adds zero trust controls to an organization’s data, regardless of where that information is stored. Preventing data loss was hard enough when all of a company’s data was only stored on the corporate network, protected by a firewall. The challenge is even greater when so much of the application now lives outside the corporate network — whether that is in cloud infrastructure, software-as-a-service applications, or on devices used by employees working remotely. Defining rules for each application and configuring individual devices can be a time-consuming process that’s prone to error. The new Cloudflare Data Loss Prevention (DLP) looks at all the traffic passing through the network and applies security controls to protect sensitive information. Organizations are already using Cloudflare’s infrastructure and global network to accelerate user traffic to the internet, as well as to inspect traffic regardless of how it enters the network and filter out malicious activity. Cloudflare has been gradually taking over the corporate network : web traffic filtering with Cloudflare Gateway, zero trust access to cloud and local applications with Cloudflare Access, protection from distributed denial-of-service attacks with Magic Transit, and centralized controls over what is allowed in and out of the network with Magic Firewall. The new Magic WAN lets organizations connect branch offices, datacenters, virtual private clouds, and individual remote employees to Cloudflare’s network to create virtual networks. Almost all of the traditional data loss prevention products on the market ultimately force traffic to go through a central location, which impacts network performance, according to Cloudflare cofounder and CEO Matthew Prince. Cloudflare DLP takes advantage of the fact that an organization is already using Cloudflare’s infrastructure and applies network-wide data security policies to ensure sensitive information does not leave the network. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “[Everyone] knows they need a DLP solution, but the only options are expensive, hard to manage, and haven’t seen innovation in years,” Prince said. 
“We’re doing something new by rethinking data loss prevention as an extension of our network, instead of adding yet another point solution for CISOs to manage.” DLP needs to do more than just look for specific types of data. The shift to remote work and software-as-a-service has meant administrators no longer have visibility into what kind of data they have and who is using it, making it harder to protect the data and prevent a data breach. The new tool takes advantage of the fact that all the traffic is passing through Cloudflare’s network and every DNS query, request, and file uploads/downloads are now logged. Cloudflare DLP builds on this increased visibility to identify specific types of personally identifiable information (such as credit card numbers and Social Security numbers) using prebuilt patterns, but that isn’t all it does. The new tool also gives administrators the ability to apply granular controls to applications to restrict access. Expanding Cloudflare One Cloudflare DLP is part of Cloudflare One, the secure access secure edge (SASE) solution the company introduced last October. With Cloudflare One, enterprises can implement network security controls over the entire network instead of defining different sets of controls for traffic passing through the corporate firewall, cloud servers, software-as-a-service products, and remote employees connecting to corporate assets via virtual private networks. The growing popularity of SASE is a direct result of enterprises increasingly adopting cloud computing infrastructure and software-as-a-service applications, as well as the recent shift to a remote workforce. Cloudflare’s goal is to “help protect the application on the Internet, protect the infrastructure, and ensure that employees have access to the data they need to have to do their jobs,” Prince said. When so much of an organization’s data lives on infrastructure it doesn’t control, such as SaaS applications, administrators are often restricted when it comes to controlling who can access the data or how it is used. In many cases, the default setting is that anyone on the team with access to the application has access to all the data stored in that application. Some applications allow administrators to define roles and role-based access controls (RBAC), but these are specific to the application. Configuring rules for every application can be tedious and doesn’t address the fact that some applications don’t allow any rules to be created. “How do we extend the network when the threats come from all directions?” Prince asked. Adding security controls The first step was to give administrators visibility. The second was to give administrators the ability to build “need-to-know” rules for both internally-managed applications and SaaS applications in a single place. The rules can block users from accessing certain types of information, or allow users to view a record but prevent them from downloading the information. There are ways to add security controls to the application, such as requiring a hard key as a second factor authentication method. This way, enterprises aren’t restricted to using only the controls provided by the application. For example, the administrator can apply rules to the organization’s customer relationship management (CRM) system to restrict who has access to which kind of information. Legal and finance can look at revenue information stored in the CRM, but marketing teams may not need that same level of access. 
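Cloudflare has not published how its prebuilt detections work internally, but pattern-based data loss prevention in general is straightforward to illustrate: scan an outbound payload for strings shaped like Social Security or payment card numbers, and validate card candidates with the Luhn checksum to cut down on false positives. The following sketch is a generic illustration with an invented payload, not Cloudflare's implementation.

```python
import re

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(candidate: str) -> bool:
    """Luhn checksum used by payment card numbers."""
    digits = [int(c) for c in candidate if c.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return len(digits) >= 13 and checksum % 10 == 0

def find_sensitive(text: str) -> dict:
    """Return candidate SSNs and Luhn-valid card numbers found in a payload."""
    return {
        "ssn": SSN_RE.findall(text),
        "cards": [m for m in CARD_RE.findall(text) if luhn_valid(m)],
    }

# Invented payload using a well-known test card number.
payload = '{"name": "Jane Doe", "ssn": "123-45-6789", "card": "4111 1111 1111 1111"}'
print(find_sensitive(payload))  # a real gateway would block or redact the response
```

Pattern matching of this kind covers well-known identifiers; the need-to-know rules described above, such as the CRM example, handle data that has no recognizable shape.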
This kind of control can prevent disgruntled employees from deleting information from SaaS applications, as happened two years ago when an IT contractor for a California-based company deleted over 80% of employee Microsoft Office 365 accounts after his contract was terminated. Another step is to protect applications that may leak data through APIs. Administrators can now scan and block responses that contain data that was never intended to be sent out. When the application responds to an API query, Cloudflare will check to see if the response contains protected data such as credit card and Social Security numbers. There have been cases when certain types of data was being returned in response to an API call that was not part of the intended behavior. Another source of data leakage could be if the API wasn’t restricted to authenticated users. Cloudflare can now act as a “digital bouncer” and protect what data is being returned, Prince said, which is especially important for legacy APIs that can’t be changed to restrict what is returned in those results. Cloudflare’s “corporate network of the future” reflects the reality of the hybrid model, where applications can be inside or outside the corporate network and employees can be working in the office or remotely, Prince. Regardless of where the data resides, where the workers are, or who is hosting the application, enterprises need to reconsider how they manage and protect the network. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,960
2,021
"Cloudflare releases tools and database integrations for serverless development | VentureBeat"
"https://venturebeat.com/2021/11/15/cloudflare-releases-tools-and-database-integrations-for-serverless-development"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cloudflare releases tools and database integrations for serverless development Share on Facebook Share on X Share on LinkedIn Close-up of logo on facade at headquarters Cloudflare in the SoMA neighborhood of San Francisco, California, June 10, 2019. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Cloudflare announced today new tools and integrations to build applications on its serverless computing platform, Cloudflare Workers. The company also unveiled a partnership with database tools maker Prisma that enables developers to connect Cloudflare Workers to databases like MySQL, Prisma, and Postgres, as well as NoSQL databases like MongoDB, FaunaDB, and any database that connects over HTTP, such as DynamoDB, Firebase, and AWS Aurora. “The promise of serverless computing is its simplicity,” said Matthew Prince, cofounder, and CEO of Cloudflare. “That’s why these new tools and partnerships are grounded in our belief that any developer in the world should be able to connect their data to build any type of application on Cloudflare, period.” Making serverless computing accessible While many organizations use serverless computing solutions, these have often required users to spend a significant amount of time configuring and managing infrastructure and databases, a challenge that Cloudflare is hoping to address by enabling users to connect directly to databases, and quickly migrate data. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! According to Reports and Data, the serverless computing market is anticipated to reach $25.49 billion by 2026. Since Cloudflare’s IPO in 2019, Cloudflare Workers helped launch more than 2 million applications, the company said. While Cloudflare is competing against major players like Amazon’s AWS Lambda and Microsoft’s Azure Functions , the company’s emphasis on increasing simplicity for end-users and offering direct integrations with popular databases will play a key role in differentiating it from other providers. Durable objects Cloudflare said it is also making Durable Objects, its solution to provide low-latency and reliable storage for Cloudflare Workers, generally available. Durable Objects enables users to automatically create and delete objects so that they don’t need to waste time managing infrastructure. 
As John Graham-Cumming, chief technology officer at Cloudflare, explained, managed state is the hardest part of distributed compute: you have to think about the states your data could end up in, how to synchronize it, how to scale it, and how to make it fast to access. If a developer wants to write a stateful application, they typically employ several services to provision, manage, and scale: databases, caches, servers, and more. With Durable Objects, developers get all of that in a serverless API, with scaling and strongly consistent data access built in, Graham-Cumming said. One particularly notable feature of Durable Objects is the ability to create a named instance of a Worker that runs on Cloudflare's network. After creating this named instance, or Durable Object, other workers can send messages to it and store data within it, laying the foundation for building scalable stateful applications. "
15,961
2,021
"Cloudflare acquires Zaraz to speed up websites and solve third-party bloat | VentureBeat"
"https://venturebeat.com/2021/12/08/cloudflare-acquires-zaraz-to-speed-up-websites-and-solve-third-party-bloat"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cloudflare acquires Zaraz to speed up websites and solve third-party bloat Share on Facebook Share on X Share on LinkedIn Cloudflare Zaraz Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Cloudflare has announced its first ever acquisition involving a company built on its own Cloudflare Workers platform. The web infrastructure and security giant, which is perhaps best known for its content delivery network ( CDN ) and distributed denial of service ( DDoS ) mitigation technologies, has snapped up Zaraz , a fledgling startup that promises to speed up website performance with a single line of code. Terms of the deal were not disclosed. For context, most websites rely on third-party integrations such as ad-tracking pixels, video players, and analytics. According to data from the HTTP Archive’s Web Almanac , the median website uses some 21 third-party integrations on mobile, and 23 for desktop — though these numbers can be significantly higher for certain online properties. While these third-party tools are often vital to a company’s bottom line, they can significantly impact a website’s performance — the more scripts a company adds to its website codebase, the bigger the impact. And this is where Zaraz helps, by identifying all the third-party tools running under the hood (e.g., Mixpanel or Google Analytics) and then loading them on its own backend infrastructure (i.e., server-side). This negates the end-user’s browser from having to execute what could be dozens of scripts, thus slowing down their entire experience on that website. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Above: Cloudflare Zaraz Post-acquisition, Zaraz will be folded into Cloudflare and offered as a standalone product called Cloudflare Zaraz — a free version will also be made available, initially in beta. “This means Cloudflare customers will be able to simply toggle on the solution on their dashboard, and it will immediately work on their website, no coding involved,” Zaraz cofounder and CEO Yair Dovrat told VentureBeat. So while Zaraz was already pitched as an easy-to-deploy tool that only required a single line of code, now that it will be bundled into Cloudflare, users won’t have to alter any code whatsoever — customers just click the Zaraz icon in their dashboard to configure all their third-party integrations. 
“If a website’s domain is managed by Cloudflare, we can include the Zaraz script directly in the page’s HTML automatically,” Dovrat added.” Above: The Zaraz icon in Cloudflare Low-latency loading While Cloudflare has acquired a handful of companies in its 12-year history, Zaraz is particularly notable given that it’s built on Cloudflare’s own developer platform. Cloudflare launched its Workers serverless app platform back in 2018, enabling developers to deploy code and run their apps instantly in any region around the world using Cloudflare’s fleet of datacenters. Website traffic is automatically routed and load-balanced, with companies paying only for the resources they consume. This is a core selling point behind Zaraz — it not only enables websites to load third-party integrations in the cloud, but also uses Cloudflare’s global infrastructure to ensure millisecond latency wherever the web traffic hails from. “We couldn’t have built Zaraz without the global scale, speed, and flexibility provided by Cloudflare Workers,” Dovrat said. “Using Workers meant that we were able to optimize Zaraz for performance and security, something traditional tag managers couldn’t do.” Zaraz inhabits a space that includes tag-management tools such as Google’s Tag Manager , which marketers can use for free to manage and deploy marketing tags on websites or apps. However, most of the alternative solutions out there still evaluate and load the third-party tools in the browser. Zaraz, on the other hand, handles as much as is possible in the cloud, promising to make websites as much as 40% faster, while its serverless architecture means that it merely acts as a data pipeline — customers can save their data wherever they like, including on-premises (though this is limited to enterprise accounts). Founded back in 2019, Zaraz emerged from stealth back in February after an extended beta period, with the Y Combinator (YC) graduate also nabbing $2 million in seed funding from YC, Twitch cofounder Kevin Lin, Pico Partners, and Stormbreaker VC. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,962
2,021
"Faster, safer, more efficient data processing with edge AI | VentureBeat"
"https://venturebeat.com/2021/12/21/faster-safer-more-efficient-data-processing-with-edge-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Faster, safer, more efficient data processing with edge AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Alexey Posternak, Chief Financial and Investment Officer of MTS AI and managing partner of Intema Humanity just can’t stop itself from producing more and more data. In 2010, total data created annually reached two zettabytes. Each zettabyte is equivalent to around 1 trillion gigabytes, or 1021 bytes. Since then, there has been no slowing down. The explosion of mobile computing and the internet of things (IoT) has increased demand further. By 2025, data created is estimated to be 175 zettabytes, and by 2035, will reach a staggering 2142 zettabytes. Much of our modern data is processed by cloud computing, and while the cloud is impressive technology, it is not without its problems. Cloud security is a constant risk for any business. Web hosting company GoDaddy not only reported that more than 1.2 million customers may have had their data accessed during a recent breach, but that it took them more than a month to discover it had happened. Even non-security outages can be greatly damaging – Google had a cloud outage in November , denying access to its services, and Meta’s servers went down for more than three hours in October. As data requirements exponentially increase, these cloud servers will be placed under greater pressure than ever before. Simply expanding cloud capacity cannot be the only solution to this data-processing nightmare. Servers require large amounts of energy, making up 1% of total global consumption. With fears of climate change ever-increasing, the pressure is on to reduce energy usage rather than increase. To solve this, we should turn to edge computing and edge AI. Edge AI not only makes data processing more energy efficient, but safer and faster. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Edge AI is when machine learning algorithms are processed locally ‘on edge’ – on the device itself, or on a nearby server. The technology already exists – smartphones are remarkably intelligent devices, which use edge tech for a variety of tasks. A true edge AI microchip would be capable of making autonomous, data-led decisions without the need for an internet or cloud connection. Edge AI is not intended to replace cloud computing , but to complement and improve it. The first way it does this is by improving latency. 
Currently, if a device makes a data request on a 4G or 5G network it is received by a cellular tower, and then is passed on to a data server somewhere within the network. Latency – The time it takes for the data to reach the servers and back to your phone – is fast (somewhere in the 10-20 millisecond range for 5G at the moment) but there remains a delay. As data volume increases, the latency often increases with it. Edge AI that has been incorporated into a microchip can have a sub-millisecond latency as the data never leaves the device. The decentralized nature of the technology allows machine-learning algorithms to run autonomously. There are no risks of internet outages or poor mobile phone reception. Data never leaving the device increases security, as data cannot be intercepted in transit to towers or a server. If data does need to leave the device, the incorporation of edge AI chips greatly reduces the amount of information that is sent, improving efficiency. Only data that has been highly processed is sent to the cloud, reducing energy consumption by 30-40%. Edge tech is becoming increasingly integral to 5G rollout, as network providers move to incorporate edge AI into the towers themselves, reducing the requirement for external servers and improving speeds. The applications of edge AI have been noticed already by business and industry leaders. Pitchbook notes that investment in the edge computing semiconductor industry has grown by 74% over the last 12 months, bringing the total investment to $5.8 billion. The median post-money valuation of companies in this niche grew by 110.2% in the same timeframe to $1.05 billion. The ramifications of this tech are game-changing. Further integration of edge AI microchips into the internet of things, has commercial and industrial applications. A self-driving car, for example, cannot be at the mercy of latency. Real-time data processing must be instantaneous – if a small child runs into the road, a delay in data-transfer speeds may prevent the car from braking in time. Even if the latency is sufficiently low, data transfer could be intercepted by hackers, potentially endangering the occupants. This can work to benefit drivers as well – edge AI in driver-facing cameras could be programmed to identify if a driver is distracted, is on their phone, or has even fallen asleep at the wheel, and then communicate with intelligent devices within the car to pull over. On a production line, integrated edge AI chips can analyze data at unprecedented speed. Analyzing sensor data and detecting deviations from the norm in real-time allows workers to replace machinery before it fails. Real-time analytics triggers the automatic decision-making process, notifying workers. The integration of video analytics would allow instant notification of problems on the production line. Production speed could be moderated constantly, with equipment slowed down if there are blockages further up the line, or to maximize the lifetime of machinery. Manufacturing bottlenecks caused by faulty equipment would therefore be reduced, and worker safety increased – the AI could detect that a worker’s arm is in the way of a machine and shut it down far faster than a human could react. Edge AI is very much the cutting edge of technological advancement. In conjunction with existing cloud-based communicative technologies, the integration of AI into the devices themselves will improve the efficiency, security, and speed of data analytics. AI is the future. 
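The production-line example above comes down to running a small statistical check on the device and contacting the cloud only when something looks wrong. The following is a generic rolling z-score detector with invented readings and thresholds, shown as a sketch of the pattern rather than code from any vendor mentioned here.

```python
from collections import deque
from statistics import fmean, pstdev

class EdgeAnomalyDetector:
    """Keep a short history on-device; escalate only on large deviations."""
    def __init__(self, window: int = 50, z_threshold: float = 3.0) -> None:
        self.history = deque(maxlen=window)  # recent readings kept locally
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        """Return True if the reading should be escalated to the cloud."""
        anomalous = False
        if len(self.history) >= 10:
            mean = fmean(self.history)
            std = pstdev(self.history) or 1e-9
            anomalous = abs(value - mean) / std > self.z_threshold
        self.history.append(value)
        return anomalous

detector = EdgeAnomalyDetector()
readings = [1.0] * 60 + [9.5]  # a sudden vibration spike at the end
alerts = [i for i, v in enumerate(readings) if detector.observe(v)]
print(f"readings processed locally: {len(readings)}, escalated to cloud: {alerts}")
```

Only the escalated readings ever leave the device, which is where the bandwidth and energy savings described above come from.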
Alexey Posternak is the CFIO of MTS AI and managing partner of Intema. Alexey has more than 17 years of experience in corporate finance and investing, and deep industry knowledge in TMT, IT, and financial services. "
15,963
2,021
"Uber researchers propose AI language model that emphasizes positive and polite responses | VentureBeat"
"https://venturebeat.com/2021/01/04/uber-researchers-propose-ai-language-model-that-emphasizes-positive-and-polite-responses"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Uber researchers propose AI language model that emphasizes positive and polite responses Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. AI-powered assistants like Siri, Cortana, Alexa, and Google Assistant are pervasive. But for these assistants to engage users and help them to achieve their goals, they need to exhibit appropriate social behavior and provide informative replies. Studies show that users respond better to social language in the sense that they’re more responsive and likelier to complete tasks. Inspired by this, researchers affiliated with Uber and Carnegie Mellon developed a machine learning model that injects social language into an assistant’s responses while preserving their integrity. The researchers focused on the customer service domain, specifically a use case where customer service personnel helped drivers sign up with a ride-sharing provider like Uber or Lyft. They first conducted a study to suss out the relationship between customer service representatives’ use of friendly language to drivers’ responsiveness and the completion of their first ride-sharing trip. Then, they developed a machine learning model for an assistant that includes a social language understanding and language generation component. In their study, the researchers found that that the “politeness level” of customer service representative messages correlated with driver responsiveness and completion of their first trip. Building on this, they trained their model on a dataset of over 233,000 messages from drivers and corresponding responses from customer service representatives. The responses had labels indicating how generally polite and positive they were, chiefly as judged by human evaluators. Post-training, the researchers used automated and human-driven techniques to evaluate the politeness and positivity of their model’s messages. They found it could vary the politeness of its responses while preserving the meaning of its messages, but that it was less successful in maintaining overall positivity. They attribute this to a potential mismatch between what they thought they were measuring and manipulating and what they actually measured and manipulated. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 
“A common explanation for the negative association of positivity with driver responsiveness in … and the lack of an effect of positivity enhancement on generated agent responses … might be a discrepancy between the concept of language positivity and its operationalization as positive sentiment,” the researchers wrote in a paper detailing their work. “[Despite this, we believe] the customer support services can be improved by utilizing the model to provide suggested replies to customer service representatives so that they can (1) respond quicker and (2) adhere to the best practices (e.g. using more polite and positive language) while still achieving the goal that the drivers and the ride-sharing providers share, i.e., getting drivers on the road.”

The work comes as Gartner predicted that by 2020, only 10% of customer-company interactions would be conducted via voice. According to the 2016 Aspect Consumer Experience Index research, 71% of consumers want the ability to solve most customer service issues on their own, up 7 points from the 2015 index. And according to that same Aspect report, 44% said they would prefer to use a chatbot for all customer service interactions rather than a human."
15,964
2,021
"The U.S. Department of Homeland Security tested technology that can recognize masked faces | VentureBeat"
"https://venturebeat.com/2021/01/05/the-u-s-department-of-homeland-security-tested-technology-that-can-recognize-masked-faces"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The U.S. Department of Homeland Security tested technology that can recognize masked faces Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The U.S. Department of Homeland Security (DHS) tested facial recognition technology that would allow it to identify people wearing masks ostensibly with high accuracy. That’s according to a press release issued this week, which reveals that the DHS’ Science and Technology Directorate (S&T) conducted a pilot as part of its third annual Biometric Technology Rally at a facility in Maryland. While a number of companies claim to have developed technologies that can identify wearers of masks such as the cloth masks designed to minimize the spread of viruses, the DHS proposes applying it to screening processes at airports and other ports of entry. For example, U.S. Customs and Border Protection’s (CBP) Simplified Arrival program, which recently expanded to airports in Las Vegas, San Francisco, and Los Angeles, uses facial recognition to verify the identity of airline travelers arriving in the U.S. According to the press release, S&T’s in-person event included 10 days of human testing with 60 facial recognition configurations, six camera-based face and iris recording systems, 10 matching algorithms, and 582 “diverse” test volunteers representing 60 countries. Each system was evaluated based on its ability to “reliably” take images of each volunteer with and without masks and minimize processing time. With masks, the DHS says the median accuracy of all systems was 77% and the best-performing system correctly identified people 96% of the time. That’s roughly in line with a report from the U.S. National Institute for Standards and Technology in December , which found that the best performers of over 150 commercial facial recognition algorithms had a false match rate of about 5% with high-coverage masks. But the DHS concedes that performance varied widely across the systems it tested, down to as little as 4% accurately identified for the worst-performing algorithm. Moreover, cameras couldn’t capture photos for 14% of the masked volunteers and couldn’t find a face for 1%. One commercial iris recognition system failed to capture photos 33% of the time. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 
The DHS initially raised concerns that face masks meant to protect against the spread of the novel coronavirus might interfere with facial recognition technology, according to a report from The Intercept. But now the department appears to have changed its tune. “[This technology] could reduce the need for people to remove masks at airports or ports of entry … thereby protecting both the public and frontline workers during the COVID-19 era,” the DHS said in the press release this week.

As early as 2016, CBP, the largest federal law enforcement agency within the DHS, began laying the groundwork for the program of which Simplified Arrival is a part: the $1 billion Biometric Entry-Exit Program. Through partnerships with airlines like Delta and JetBlue, CBP has access to manifests that it uses to build facial recognition databases incorporating photos from entry inspections, U.S. visas, and other DHS corpora. Camera kiosks at airports capture live photos and compare them with photos in the database, attempting to identify matches. When there’s no existing photo available for matching, the system compares the live photos to photos from physical IDs, including passports and travel documents. As CBP explains, with Simplified Arrival, travelers on international flights pause for photos at primary inspection points after disembarking. If the photo-matching process fails, they undergo the traditional inspection process.

CBP says that, to date, more than 57 million travelers have participated in the Biometric Entry-Exit Program (up from 23 million as of March 2020) and that over 300 imposters have been prevented from illegally entering the U.S. since September 2018. But Simplified Arrival and other pilots under the umbrella of the Biometric Entry-Exit Program are inconsistent, opaque, and potentially discriminatory. While travelers can opt out of Simplified Arrival by notifying CBP officers at inspection points, a Government Accountability Office audit found that CBP resources regarding the Biometric Entry-Exit Program provide limited information and aren’t always complete. At least one CBP call center operator the GAO reached in November 2019 wasn’t aware of which locations had deployed the technology, and some airport gate signage is outdated, missing, or obscured.

It’s also unclear to what extent CBP’s facial recognition might exhibit bias against certain demographic groups. In a CBP test conducted from May to June 2019, the agency found that 0.0092% of passengers leaving the U.S. were incorrectly identified, a small fraction that still adds up to tens of thousands of travelers a year given that CBP inspects an estimated 2 million-plus international travelers every day. More damningly, photos of departing passengers were successfully captured only 80% of the time due to camera outages, incorrectly configured systems, and other confounders. The match failure rate at one airport was 25%.

Despite the controversial nature of CBP’s ongoing efforts, the U.S. Transportation Security Administration recently announced that it, too, would begin piloting checkpoints at airports that rely on facial scans to match ID photos. The White House has mandated that facial recognition technology be in use at the 20 busiest U.S. airports for “100 percent of all international passengers” entering and exiting the country by 2021.
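None of CBP’s code is public, but the two-step flow described above (a one-to-many search against a gallery built from the manifest, with a fallback one-to-one comparison against the travel-document photo) can be sketched generically. The snippet below is a hypothetical illustration using cosine similarity over face embeddings; the embedding source, thresholds, and data are invented for demonstration and do not reflect CBP’s actual system.

```python
# Generic gallery-first face matching with a document-photo fallback.
# Illustrative only: embeddings, thresholds, and data structures are invented,
# and any face-embedding model could supply the vectors.
from dataclasses import dataclass
from typing import List, Optional, Tuple
import numpy as np

@dataclass
class Traveler:
    name: str
    gallery_embedding: Optional[np.ndarray]  # from prior entries, visa photos, etc.
    document_embedding: np.ndarray           # from the passport or travel document

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

GALLERY_THRESHOLD = 0.80   # invented operating points; a real deployment tunes these
DOCUMENT_THRESHOLD = 0.75  # against target false match / false non-match rates

def match_traveler(live: np.ndarray, manifest: List[Traveler]) -> Tuple[str, str]:
    """Return (traveler name, method), or ('unknown', 'manual review') if both steps fail."""
    # Step 1: one-to-many search against the gallery built from the flight manifest.
    best_name, best_score = "unknown", -1.0
    for t in manifest:
        if t.gallery_embedding is None:
            continue
        score = cosine(live, t.gallery_embedding)
        if score > best_score:
            best_name, best_score = t.name, score
    if best_score >= GALLERY_THRESHOLD:
        return best_name, "gallery match"
    # Step 2: fall back to one-to-one comparison against each document photo.
    for t in manifest:
        if cosine(live, t.document_embedding) >= DOCUMENT_THRESHOLD:
            return t.name, "document match"
    # Neither step cleared its threshold, so the traveler gets traditional inspection.
    return "unknown", "manual review"

# Toy usage with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
manifest = [Traveler("A. Example", rng.normal(size=128), rng.normal(size=128))]
print(match_traveler(rng.normal(size=128), manifest))
```

The accuracy and capture-failure figures cited in the article are, in the end, properties of how well the embeddings and thresholds in a pipeline like this hold up across cameras, demographics, and masks.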
CBP recently proposed a rule that would authorize the collection of facial images, which can be stored for up to 75 years, from any noncitizen entering the country. A coalition of civil rights groups led by the American Civil Liberties Union filed an objection alongside the National Immigration Law Center, Fight for the Future, the Electronic Frontier Foundation, and 12 others."
15,965
2,021
"Workato raises $110 million for its business workflow automation platform | VentureBeat"
"https://venturebeat.com/2021/01/12/workato-raises-110-million-for-its-business-workflow-automation-platform"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Workato raises $110 million for its business workflow automation platform Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Workato , which offers an integration and automation platform for businesses, today announced it has raised $110 million at a post-money valuation of $1.7 billion. The company says it will put the funds toward product innovation and technology development, expanding its customer success program, launching its first user conference in 2021, and investing in scaling teams in the U.S. and internationally. When McKinsey surveyed 1,500 executives across industries and regions in 2018, 66% said addressing skills gaps related to automation and digitization was a “top 10” priority. According to market research firm Fact.MR , small and medium-sized enterprises are expected to adopt business workflow automation at scale, creating a market opportunity of more than $1.6 billion between 2017 and 2026. Workato lets companies integrate a range of data and apps to automate backend and front-end business workflows. The company’s platform delivers robotic process automation, integration platform-as-a-service, business process automation, and chatbot capabilities in a solution designed to enable IT and business teams to collaborate, ostensibly without compromising security and governance. With Workato, users can create automations from scratch or opt for over 500,000 prebuilt recipes addressing marketing, sales, finance, HR, IT, and other processes. The company says its customers and partners are creating over 500 new connectors to apps and systems each month. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Workato claims its platform is used by over 70,000 people across 7,000 businesses, including Broadcom, Coupa, Intuit, Autodesk, Nutanix, and Rapid7. Moreover, it says it has experienced 200% growth in new partners since 2019, working with Adobe, Snowflake, Workplace by Facebook, and more. “There’s been explosive growth in business apps and cloud technologies, but their potential remains largely untapped. This explosion has created tech chaos with siloed data, fragmented business processes, and broken UX,” Workato cofounder and CEO Vijay Tella said in a statement. “Workato addresses this with a single platform built for business and IT that easily, reliably, and securely connects their apps, data, and business processes so teams can work smarter and faster. 
With our new investment, we’re looking forward to helping other companies around the world use integration-led automation to transform how they work.”

The series D investment in Mountain View, California-based Workato comes after a two-year period during which the company nearly tripled its revenue and customer base. It brings the company’s total capital raised to over $221 million. Altimeter Capital and Insight Partners co-led the round, with participation from Redpoint Ventures and Battery Ventures.

Workato competes with a number of workflow automation companies in a market that’s anticipated to be worth $18.45 billion by 2023, according to Markets and Markets. AirSlate this week raised $40 million for its products that automate repetitive enterprise tasks like e-signature collection. In April, Tonkean nabbed $24 million to further develop its no-code workflow automation platform. There’s also Tray.io and Berlin-based Camunda, both of which have closed funding rounds in the tens of millions."
15,966
2,021
"Microsoft launches Custom Neural Voice in limited access | VentureBeat"
"https://venturebeat.com/2021/02/03/microsoft-launches-custom-neural-voice-in-limited-access"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft launches Custom Neural Voice in limited access Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Microsoft today announced the general availability of Custom Neural Voice , an Azure Cognitive Services product that lets developers create synthetic voices with neural text-to-speech technology. It’s in limited access, meaning customers must apply and be approved by Microsoft, but it’s ready for production and available in most Azure cloud regions. Brand voices like Progressive’s Flo are often tasked with recording phone trees for elearning scripts used in corporate training videos. Synthetization could boost actors’ productivity by cutting down on additional recordings and pickups — the recording sessions to address mistakes, changes, or additions in voiceover scripts. At the same time, it could free them up to pursue creative work and enable them to collect residuals. With Custom Neural Voice, prosody — the tone and duration of each phoneme, the unit of sound that distinguishes one word from another — is combined so machine learning models running in Azure can closely reproduce an actor’s voice or a wholly original voice. One set of models converts a script into an acoustic sequence, predicting prosody, while another set of models converts that acoustic sequence into speech. Microsoft claims that because the models can simultaneously predict the right prosody and synthesize a voice, Custom Neural Voice results in more natural-sounding voices. Custom Neural Voice includes controls to help prevent misuse of the service, according to Microsoft. When a customer submits a recording, the voice actor makes a statement acknowledging that they (1) understand the technology and (2) are aware the customer is having a voice made. The recording is compared with the model training data using speaker verification to make sure the voices match before a customer can begin creating the voice. Microsoft also contractually requires customers to get consent from voice talent. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Beyond this, Microsoft says it reviews each potential use case and has customers agree to a code of conduct before they can begin using Custom Neural Voice. “We require customers to make very clear it’s a synthetic voice,” Sarah Bird, responsible AI lead for Cognitive Services within Azure AI, said in a statement. 
“When it’s not immediately obvious in context, [customers must] explicitly disclose it’s synthetic in a way that’s perceivable by users and not buried in terms.” Microsoft says it’s also working on a way to embed a digital watermark within a synthetic voice to indicate that the content was created with Custom Neural Voice.

Microsoft is effectively going toe to toe with Google, which in 2019 debuted new AI-synthesized WaveNet voices and standard voices in its Cloud Text-to-Speech service. It has another rival in Amazon, which recently launched Brand Voice, a service that taps AI to generate custom spokespeople, and which offers a number of voice and emotion styles through Amazon Polly, Amazon’s cloud offering that converts text into speech.

AT&T has used Custom Neural Voice to create a Bugs Bunny soundalike at a retail location in Dallas from around 2,000 phrases and lines supplied by a voice actor. Duolingo is using the service to introduce a cast of multilingual characters within its language learning apps. Progressive created a Facebook Messenger chatbot with the voice of Flo. And Microsoft worked with a nonprofit in Beijing, China, using Custom Neural Voice and a team of volunteers to generate content to be donated to the Beijing Hongdandan Visually Impaired Service Center, which provides resources for people who are blind or have limited vision."
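For developers, a deployed custom voice is consumed much like any prebuilt neural voice, through the Azure Speech SDK. The sketch below shows roughly what that call path looks like in Python, assuming a voice has already been trained and deployed in Speech Studio; the subscription key, region, endpoint ID, and voice name are placeholders, not real values.

```python
# Rough sketch of synthesizing speech with a deployed custom neural voice via the
# Azure Speech SDK (pip install azure-cognitiveservices-speech). The key, region,
# endpoint ID, and voice name below are placeholders, not real values.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription="YOUR_SPEECH_KEY",   # placeholder
    region="YOUR_SERVICE_REGION",     # e.g. "eastus"; placeholder
)
# A custom neural voice is addressed by its deployment endpoint ID plus the voice
# name chosen when the model was trained in Speech Studio (both invented here).
speech_config.endpoint_id = "YOUR_CUSTOM_VOICE_ENDPOINT_ID"
speech_config.speech_synthesis_voice_name = "YourBrandNeuralVoice"

# Write the synthesized audio to a file instead of the default speaker.
audio_config = speechsdk.audio.AudioOutputConfig(filename="greeting.wav")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=audio_config)

result = synthesizer.speak_text_async(
    "Thanks for calling. This message was generated with a synthetic voice."
).get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Wrote greeting.wav")
else:
    print("Synthesis did not complete:", result.reason)
```

Note that the sample text itself discloses that the voice is synthetic, in line with the disclosure expectations Microsoft describes above.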