id (int64, 0-17.2k) | year (int64, 2k-2.02k) | title (stringlengths 7-208) | url (stringlengths 20-263) | text (stringlengths 852-324k)
---|---|---|---|---
1,613 | 2021 |
"Incoming White House science and technology leader on AI, diversity, and society | VentureBeat"
|
"https://venturebeat.com/ai/incoming-white-house-science-and-technology-leader-on-ai-diversity-and-society"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Incoming White House science and technology leader on AI, diversity, and society Share on Facebook Share on X Share on LinkedIn Dr. Alondra Nelson in a ceremony introducing the Biden science team Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Technologies like artificial intelligence and human genome editing “reveal and reflect even more about the complex and sometimes dangerous social architecture that lies beneath the scientific progress that we pursue,” Dr. Alondra Nelson said today at a televised ceremony introducing President-elect Joe Biden’s science team. On Friday, the Biden transition team appointed Nelson to the position of OSTP deputy director for science and society. Biden will be sworn in Wednesday to officially become the 46th president of the United States.
Nelson said in the ceremony that science is a social phenomenon and a reflection of people, their relationships, and their institutions. This means it really matters who’s in the room when new technology like AI is developed, she said. This is also why for much of her career she has sought to understand the perspectives of people who are not typically included in the development of emerging technology. Connections between our scientific and social worlds have never been as urgent as they are today, she said, and there’s never been a more important moment to situate scientific development in ethical values like equality, accountability, justice, and trustworthiness.
“When we provide inputs to the algorithm; when we program the device; when we design, test, and research; we are making human choices, choices that bring our social world to bear in a new and powerful way,” she said. “As a Black woman researcher, I am keenly aware of those who are missing from these rooms. I believe we have a responsibility to work together to make sure that our science and technology reflects us, and when it does it reflects all of us, that it reflects who we truly are together. This too is a breakthrough. This too is an innovation that advances our lives.” Nelson’s comments allude to trends of pervasive algorithmic bias and a well-documented lack of diversity among teams deploying artificial intelligence.
Those trends appear to have converged when Google fired AI ethics co-lead Timnit Gebru last month. Algorithmic bias has been shown to disproportionately and negatively impact the lives of Black people in a number of ways, including use of facial recognition leading to false arrests, adverse health outcomes for millions, and unfair lending practices.
A study published last month found that diversity on teams developing and deploying artificial intelligence is a key to reducing algorithmic bias.
Dr. Eric Lander will be nominated to serve as director of the OSTP and presidential science advisor. In remarks today, he called America’s greatest asset its “unrivaled diversity” and spoke of science and tech policy that creates new industries and jobs but also ensures benefits of progress are “shared broadly among all Americans.” “Scientific progress is about someone seeing something that no one’s ever seen before because they bring a different lens, different experiences, different questions, different passions. No one can top America in that regard, but we have to ensure that everyone not only has a seat at the table, but a place at the lab bench,” he said.
Biden also spoke at the ceremony, referring to the team he has assembled as one that will help “restore America’s hope in the frontier of science” while tackling advances in health care and challenges like climate change.
“We have the most diverse population in the world that’s in a democracy, and there’s so much we can do. I can’t tell you how excited we’ve been about doing this. We saved it for last. I know it’s not naming Department of Defense or attorney general, but I tell you what: You have more impact on what our children are going to face and our grandchildren are going to have opportunities to do than anyone,” he said.
As part of today’s announcement, Biden said the presidential science advisor will be a cabinet-level position for the first time in U.S. history. Vice President-elect Kamala Harris, whose mother worked as a scientist at UC Berkeley, also spoke. She concluded her remarks with an endorsement of funding for science, technology, engineering, and mathematics (STEM) education and an acknowledgment of Dr. Kizzmekia Corbett, a Black female scientist whose contributions helped create the Moderna COVID-19 vaccine.
The Biden-Harris campaign platform has also pledged to address some forms of algorithmic bias.
While the Trump administration signed a few international agreements supporting trustworthy AI , the current president’s harsh immigration policy and bigoted rhetoric undercut any chance of leadership when it comes to addressing the ways algorithmic bias leads to discrimination or civil rights violations.
Earlier this week, members of the Trump administration introduced the AI Initiatives Office to guide a national AI strategy following the passage of the National Defense Authorization Act (NDAA). The AI Initiatives Office might be one of the only federal offices to depict a neural network and eagle in its seal.
"
|
1,614 | 2021 |
"ImageNet creators find blurring faces for privacy has a 'minimal impact on accuracy' | VentureBeat"
|
"https://venturebeat.com/ai/imagenet-creators-find-blurring-faces-for-privacy-has-a-minimal-impact-on-accuracy"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages ImageNet creators find blurring faces for privacy has a ‘minimal impact on accuracy’ Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The makers of ImageNet, one of the most influential datasets in machine learning, have released a version of the dataset that blurs people’s faces in order to support privacy experimentation. Authors of a paper on the work say their research is the first known effort to analyze the impact blurring faces has on the accuracy of large-scale computer vision models. For this version, faces were detected automatically before they were blurred. Altogether, the altered dataset obscures the faces of 562,000 people in more than a quarter-million images. Creators of the truncated, roughly 1.4-million-image version of the dataset used for competitions told VentureBeat they plan to retire the version without blurred faces and replace it with the face-blurred version.
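The article does not say which face detector or obfuscation method the ImageNet team used, so the sketch below is only a minimal stand-in for the detect-then-blur pipeline it describes: it assumes OpenCV's bundled Haar-cascade face detector and a Gaussian blur, not the actual tools behind the modified dataset.

```python
# Minimal face-obfuscation sketch (assumption: OpenCV stands in for whatever
# detector and blurring method a dataset maintainer actually chooses).
import cv2

def blur_faces(image_path: str, output_path: str) -> int:
    """Detect faces in an image and write a copy with each face Gaussian-blurred.

    Returns the number of faces blurred.
    """
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)

    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        roi = image[y:y + h, x:x + w]
        # Kernel size must be odd; scale it with the face so larger faces still blur.
        k = max(31, (w // 2) | 1)
        image[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (k, k), 0)

    cv2.imwrite(output_path, image)
    return len(faces)
```

Usage would look like `blur_faces("input.jpg", "blurred.jpg")`, which returns how many faces were obfuscated in the copy it writes.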
“Experiments show that one can use the face-blurred version for benchmarking object recognition and for transfer learning with only marginal loss of accuracy,” the team wrote in an update published on the ImageNet website late last week, together with a research paper on the work. “An emerging problem now is how to make sure computer vision is fair and preserves people’s privacy. We are continually evolving ImageNet to address these emerging needs.” Computer vision systems can be used for everything from recognizing car accidents on freeways to fueling mass surveillance , and as ongoing controversies over facial recognition have shown, images of the human face are deeply personal.
Following experiments with object detection and scene detection benchmark tests using the modified dataset, the team reported in the paper that blurring faces can reduce accuracy by 13% to 60%, depending on the category, but that this reduction has a “minimal impact on accuracy” overall. Categories involving objects close to people’s faces, like a harmonica or a mask, showed higher rates of classification errors.
“Through extensive experiments, we demonstrate that training on face-blurred does not significantly compromise accuracy on both image classification and downstream tasks, while providing some privacy protection. Therefore, we advocate for face obfuscation to be included in ImageNet and to become a standard step in future dataset creation efforts,” the paper’s coauthors write.
An assessment of the 1.4 million images included in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset found that 17% of the images contain faces, despite the fact that only three of 1,000 categories in the dataset mention people. In some categories, like “military uniform” and “volleyball,” 90% of the images included faces of people. Researchers also found reduced accuracy in categories rarely related to human faces, like “Eskimo dog” and “Siberian husky.” “It is strange since most images in these two categories do not even contain human faces,” the paper reads.
Coauthors include researchers who released ImageNet in 2009, among them Princeton University professor Jia Deng and Stanford University professor and former Google Cloud AI chief Fei-Fei Li. The original ImageNet paper has been cited tens of thousands of times since it was introduced at the Computer Vision and Pattern Recognition (CVPR) conference in 2009, and the dataset has become one of the most influential resources in the advancement of machine learning.
The ImageNet Large Scale Visual Recognition Challenge that took place from 2010 to 2017 is known for helping usher in the era of deep learning and leading to the spinoff of startups like Clarifai and MetaMind. Founded by Richard Socher, who helped Deng and Li assemble ImageNet, MetaMind was acquired by Salesforce in 2016.
After helping establish the Einstein AI brand, Socher left his role as chief scientist at Salesforce last summer to launch a search engine startup.
The face-blurred version marks the second major ethical or privacy-related change to the dataset released 12 years ago. In a paper accepted for publication at the Fairness, Accountability, and Transparency (FAccT) conference in 2020, creators of the ImageNet dataset removed a majority of categories associated with people because the categories were found to be offensive.
That paper attributes racist, sexist, and politically charged predictions associated with ImageNet to issues like a lack of diversity in the demographics represented in the dataset and the use of the WordNet hierarchy for the words used to select and label images. A 2019 analysis found that roughly 40% of people in ImageNet photos are women, and about 1% are people over 60. It also found an overrepresentation of men between the ages of 18 and 40 and an underrepresentation of people with dark skin.
A few months after that paper was published, MIT took down another computer vision dataset, 80 Million Tiny Images, which is over a decade old and also used WordNet, after racist and sexist labels and images were found in an audit by Vinay Prabhu and Abeba Birhane. Beyond an NSFW analysis of 80 Million Tiny Images, the audit examines common shortcomings of large computer vision datasets and considers solutions for the computer vision community going forward.
The audit’s analysis of ImageNet found frequent co-occurrence of people and objects in categories involving musical instruments, since those images often include people even when the label itself does not mention them. It also suggests the makers and managers of large computer vision datasets take steps toward reform, including blurring the faces of people found in datasets.
On Monday, Birhane and Prabhu urged the coauthors to cite ImageNet critics whose ideas are reflected in the face-obfuscation paper, such as the popular ImageNet Roulette project. In a blog post, the duo detail multiple attempts to reach the ImageNet team, as well as a spring 2020 presentation Prabhu gave at HAI, attended by Fei-Fei Li, about the ideas underlying their criticisms of large computer vision datasets.
“We’d like to clearly point out that the biggest shortcomings are the tactical abdication of responsibility for all the mess in ImageNet combined with systematic erasure of related critical work, that might well have led to these corrective measures being taken,” the blog post reads. Coauthor and Princeton University assistant professor Olga Russakovsky told WIRED that a citation will be included in an updated version of the paper. VentureBeat asked the coauthors for additional comment about the criticisms from Birhane and Prabhu but did not receive a response.
In other work critical of ImageNet, a few weeks after 80 Million Tiny Images was taken down, MIT researchers analyzed the ImageNet data collection pipeline and found “systematic shortcomings that led to reductions in accuracy.” And a 2017 paper found that a majority of images included in the ImageNet dataset came from Europe and the United States, another example of poor representation of people from the Global South in AI.
ILSVRC is a subset of the larger ImageNet dataset, which contains over 14 million images across more than 20,000 categories. ILSVRC, ImageNet, and the recently modified version of ILSVRC were created with help from Amazon Mechanical Turk workers using photos scraped from Google Images.
In related news, a paper by researchers from Google, Mozilla Foundation, and the University of Washington analyzing datasets used for machine learning concludes that the machine learning research community needs to foster a culture change and recognize the privacy and property rights of individuals. In other news related to harm that can be caused by deploying AI, last fall, Stanford University and OpenAI convened experts from a number of fields to critique GPT-3. The group concluded that the creators of large language models like Google and OpenAI have only a matter of months to set standards and address the societal impact of deploying such language models.
"
|
1,615 | 2021 |
"Hugging Face triples investment in open source machine learning models | VentureBeat"
|
"https://venturebeat.com/ai/hugging-face-triples-investment-in-open-source-machine-learning-models"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Hugging Face triples investment in open source machine learning models Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Hugging Face launched in 2016 with a chatbot app designed to be your “AI friend.” Now the NLP company has more than 100,000 community members and is planning to triple its efforts and expand beyond language models into fields like computer vision. Developers have used a hub on Hugging Face to share thousands of models, and CEO and cofounder Clement Delangue told VentureBeat Hugging Face wants to become to machine learning what GitHub is to software engineering.
As part of that effort, Hugging Face closed a $40 million series B funding round today. The round was led by Addition, with participation from Lux Capital, A.Capital, and Betaworks. Notable individual investors in the round include MongoDB CEO Dev Ittycheria, NBA star Kevin Durant, Dataiku CEO Florian Douetteau, and former Salesforce chief scientist Richard Socher.
Delangue said Hugging Face believes transfer learning is critical to the future of machine learning. As evidence of this trend, Delangue points to an AI research paper published earlier this week by researchers from Google Brain, Facebook AI Research, and UC Berkeley about pretrained language models working with numerical computation, vision, and protein fold prediction. This and other recent advances, he said, signify that “transfer learning models are starting to eat the whole field of machine learning.” “Everything transfer learning-based we believe is here to stay and is going to transform machine learning for the next five years,” he told VentureBeat. “We’ve seen that they completely changed the NLP field, and they’re starting to change the computer vision fields, like with vision transformers and the speech-to-text fields. Ultimately, we think transfer learning is going to power machine learning, and hopefully we’re going to be able to power all these transfer learning models.” Hugging Face has also published AI research. A paper about the Transformers NLP library that’s seen more than 10 million Python pip installs and been used by a number of businesses — including Microsoft’s Bing and MongoDB — received the Best Demo paper award at the EMNLP research conference last year.
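For readers unfamiliar with the hub-and-library workflow the article refers to, here is a minimal sketch of reusing a pretrained model from the Hugging Face hub with the Transformers library; the checkpoint name is an illustrative example and is not mentioned in the article.

```python
# Illustrative only: pulling a pretrained model from the Hugging Face hub.
# The checkpoint below is an example, not one cited in the article.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("Hugging Face wants to be the GitHub of machine learning."))
# Expected output shape: [{'label': 'POSITIVE', 'score': 0.99...}]
```

This kind of one-line reuse of a shared pretrained model is the transfer-learning workflow Delangue argues will "eat the whole field of machine learning."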
In addition to tripling efforts to grow an open source community for the development of language models, Delangue said the funds will help ensure Hugging Face has the resources to act as a “counter-power” to major cloud AI services being sold to enterprise customers. NLP is an area of interest for a number of companies hoping to sell AI services to enterprise customers, including Databricks, which raised $1 billion last month and plans to focus on acquiring NLP startups.
“I think one of the big challenges that you have in machine learning, it seems these days, is that most of the power is concentrated in the hands of a couple of big organizations,” he said. “We’ve always had acquisition interests from Big Tech and others, but we believe it’s good to have independent companies — that’s what we’re trying to do.” Democratization, Delangue said, will be key to assuring the benefits of AI extend to smaller organizations. Hugging Face CTO Julien Chaumond echoed that thought. In a statement shared with VentureBeat, he said democratization of AI will be one of the biggest achievements for society and that no single company, not even a Big Tech business, can do it alone.
Hugging Face began monetizing ways to help businesses create custom models six months ago, and now it works with over 100 companies, including Bloomberg and Qualcomm. A Hugging Face spokesperson told VentureBeat the company has been cash-positive in the first months of 2021.
“You can start seeing that companies are really going to have dozens of what we call machine learning features or NLP features,” he said. “It’s not going to be like one big feature, but they’re going to have a lot of different NLP features that are going to be really deeply embedded into their products or their workflow in multiple different ways.” In other recent news, Hugging Face extended into machine translation last year and in recent weeks launched subcommunities for people working with low-resource languages to create language models.
Hugging Face raised $15 million in a 2019 series A funding round and has raised a total of $60 million to date. In 2017, Hugging Face was part of the Voicecamp startup accelerator hosted by Betaworks in New York City.
Hugging Face currently has 30 employees, with offices in New York and Paris.
"
|
1,616 | 2021 |
"How AI trained to beat Atari games could impact robotics and drug design | VentureBeat"
|
"https://venturebeat.com/ai/how-ai-trained-to-beat-atari-games-could-impact-robotics-and-drug-design"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How AI trained to beat Atari games could impact robotics and drug design Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
In 2018, Uber AI Labs introduced Go-Explore, a family of algorithms that beat the Atari game Montezuma’s Revenge , a commonly accepted reinforcement learning challenge. Last year, Go-Explore was used to beat text-based games.
Now researchers from OpenAI and Uber AI Labs say Go-Explore has solved all previously unsolved games in the Atari 2600 benchmark from the Arcade Learning Environment, a collection of more than 50 games, including Pitfall and Pong. Go-Explore also quadruples the state-of-the-art score performance on Montezuma’s Revenge.
Training agents to navigate complex environments has long been considered a challenge for reinforcement learning. Success in these areas has accounted for some major machine learning milestones, like DeepMind’s AlphaGo or OpenAI’s Dota 2 beating human champions.
Researchers envision recent Go-Explore advances being applied to language models but also used for drug design and robotics trained to navigate the world safely. In simulations, a robotic arm was able to successfully pick up an object and put it on one of four shelves, two of which are behind doors with latches. The ability to complete this transfer, they say, proves the policy approach is not simply leveraging the ability to restore a previously held state in a reinforcement learning environment, but a “function of its overall design.” “The insights presented in this work extend broadly; the simple decomposition of remembering previously found states, returning to them, and then exploring from them appears to be especially powerful, suggesting it may be a fundamental feature of learning in general. Harnessing these insights, either within or outside of the context of Go-Explore, may be essential to improve our ability to create generally intelligent agents,” reads a paper on the research published last week in Nature.
Researchers theorize that part of the problem is that agents in reinforcement learning environments forget how to get to places they have previously been (known as detachment) and generally fail to return to a state before exploring from it (known as derailment).
“To avoid detachment, Go-Explore builds an ‘archive’ of the different states it has visited in the environment, thus ensuring that states cannot be forgotten. Starting from an archive containing only the initial state, it builds this archive iteratively,” the paper reads. “By first returning before exploring, Go-Explore avoids derailment by minimizing exploration when returning (thus minimizing failure to return) after which it can focus purely on exploration.” Jeff Clune, who cofounded Uber AI Labs in 2017 before moving to OpenAI last year, told VentureBeat that catastrophic forgetting is the Achilles’ heel of deep learning.
Solving this problem, he said at the time, could offer humans a faster path to artificial general intelligence (AGI).
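The "remember, return, explore" decomposition quoted above lends itself to a compact sketch. The snippet below is a simplified illustration only: the environment interface (snapshot, restore, cell, sample_action) and the uniform cell-selection rule are assumptions made for readability, not the actual Go-Explore implementation, which differs in many details.

```python
# Simplified sketch of the Go-Explore loop described above: keep an archive of
# visited states, return to a promising one, then explore from it.
# `env` is assumed to expose reset/step/snapshot/restore/cell/sample_action;
# these are stand-ins, not a real Go-Explore or Gym API.
import random

def go_explore(env, num_iterations=1000, explore_steps=100):
    state = env.reset()
    archive = {env.cell(state): {"snapshot": env.snapshot(), "score": 0.0}}

    for _ in range(num_iterations):
        # 1. Remember: pick a previously visited cell from the archive (here: uniformly).
        cell = random.choice(list(archive))
        entry = archive[cell]

        # 2. Return: restore that state directly instead of re-exploring to it
        #    (this is what avoids derailment).
        env.restore(entry["snapshot"])
        score = entry["score"]

        # 3. Explore from it, adding new or better cells (this avoids detachment).
        for _ in range(explore_steps):
            state, reward, done = env.step(env.sample_action())
            score += reward
            key = env.cell(state)
            if key not in archive or score > archive[key]["score"]:
                archive[key] = {"snapshot": env.snapshot(), "score": score}
            if done:
                break

    return archive
```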
In other recent news, OpenAI shared more details about its multimodal model CLIP this week, and the AI Index, compiled in part by former OpenAI policy director Jack Clark, was released on Wednesday.
The annual index chronicles AI performance progress, as well as trends in startup investment, education, diversity, and policy.
"
|
1,617 | 2021 |
"Google targets AI ethics lead Margaret Mitchell after firing Timnit Gebru | VentureBeat"
|
"https://venturebeat.com/ai/google-targets-ai-ethics-lead-margaret-mitchell-after-firing-timnit-gebru"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google targets AI ethics lead Margaret Mitchell after firing Timnit Gebru Share on Facebook Share on X Share on LinkedIn Google San Francisco office Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Google has revoked Ethical AI team leader Margaret “Meg” Mitchell’s employee privileges and is currently investigating her activity, according to a statement provided by a company spokesperson. Should Google fire Mitchell, it will mean the company has effectively chosen to behead its own AI ethics team in under two months. In an interview with VentureBeat last month, former Google AI ethics co-lead Timnit Gebru said she had worked with Mitchell since 2018 to create one of the most diverse teams within Google Research.
Gebru tweeted Tuesday evening that Google’s move to freeze Mitchell’s employee account echoed the way hers was frozen before she was fired.
When VentureBeat emailed Google to ask if Mitchell was still an employee, a spokesperson provided the following statement: “Our security systems automatically lock an employee’s corporate account when they detect that the account is at risk of compromise due to credential problems or when an automated rule involving the handling of sensitive data has been triggered. In this instance, yesterday our systems detected that an account had exfiltrated thousands of files and shared them with multiple external accounts. We explained this to the employee earlier today. We are actively investigating this matter as part of standard procedures to gather additional details.” Last month, Google fired Gebru following a demand by Google leadership that she retract an AI research paper she coauthored about the negative consequences of large-scale language models, including their disproportionate impact on marginalized communities in the form of environmental impact and perpetuating stereotypes. Since then, Google has released a trillion-parameter language model and told its AI researchers to strike a positive tone on topics deemed “sensitive.”
Some members of the AI research community have pledged not to review the work of Google researchers at academic conferences in protest.
Mitchell has publicly criticized actions taken by Google leaders like AI chief Jeff Dean following the ousting of Gebru.
Say you have a problem with consistently alienating Black women and have caused serious damage in their lives. You could: A) try to undo that damage B) try to find more Black people to like you (the tokenism approach). Good luck….. https://t.co/rrhL8AQDIF — MMitchell (@mmitchell_ai) January 19, 2021
After Gebru was fired, April Curley, a queer Black woman who said she was fired by Google last fall, publicly recounted numerous negative experiences during her time as a recruiter of talent from historically Black colleges and universities (HBCUs).
On Tuesday, news emerged that Google CEO Sundar Pichai will meet with HBCU leaders following allegations of racism and sexism at the company by current and former employees.
Members of Congress interested in regulating AI and more than 2,000 Google employees have joined prominent figures in the AI research community in questioning Gebru’s dismissal. Members of Google’s AI ethics team called for her reinstatement in a series of demands sent to company leadership.
Organizers cited the way Google treated Gebru and the impact AI can have on society as motivators behind the establishment of the Alphabet Workers Union , which was formed earlier this month and as of a week ago counted 700 members including Margaret Mitchell. Gebru had previously endorsed the idea of a workers union as a way to help protect AI researchers from company retribution.
“With AI permeating every aspect of our world—from criminal justice, to credit scores, to military applications—paying careful attention to ethics within the industry is critical,” the Alphabet Workers Union said in a statement shared with VentureBeat.
“As one of the most profitable players in the AI industry, Alphabet has a responsibility to continue investing in its ethical application. Margaret founded the Ethical AI team, built a cross-product area coalition around machine learning fairness, and is a critical member of academic and industry communities around the ethical production of AI. Regardless of the outcome of the company’s investigation, the ongoing targeting of leaders in this organization calls into question Google’s commitment to ethics—in AI and in their business practices. Many members of the Ethical AI team are AWU members and the membership of our union recognizes the crucial work that they do and stands in solidarity with them in this moment.” The incoming Biden administration has in recent days shared a commitment to diversity and to addressing algorithmic bias and other AI-driven harms to society through its science and technology policy platform.
Experts in AI, law, and policy told VentureBeat last month that Google’s treatment of Gebru could impact a range of policy matters, including the passage of stronger whistleblower protections for tech workers and more public funding of independent AI research.
What happens to Mitchell will continue to shape attitudes toward corporate self-governance and speculation about the veracity of research produced with Big Tech funding. A research paper published in late 2020 compared the way Big Tech funds AI ethics research to Big Tobacco’s history of funding health research.
Updated 7:18 am PT January 21 to include a statement from the Alphabet Workers Union.
"
|
1,618 | 2021 |
"Google fires Ethical AI lead Margaret Mitchell | VentureBeat"
|
"https://venturebeat.com/ai/google-fires-ethical-ai-lead-margaret-mitchell"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Exclusive Google fires Ethical AI lead Margaret Mitchell Share on Facebook Share on X Share on LinkedIn Former Google Ethical AI team lead Margaret Mitchell Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Google fired Margaret “Meg” Mitchell, lead of the Ethical AI team, today. The move comes just hours after Google announced diversity policy changes and Google AI chief Jeff Dean sent an apology in the wake of the firing of former Google AI ethics lead Timnit Gebru in late 2020.
I'm fired. — MMitchell (@mmitchell_ai) February 19, 2021
Mitchell, a staff research scientist and Google employee since 2016, had been under an internal investigation by Google for five weeks.
In an email sent within Google shortly before she was placed under investigation, Mitchell called the firing of Gebru “forever after a really, really, really terrible decision.” A statement from a Google spokesperson about Mitchell reads: “After conducting a review of this manager’s conduct, we confirmed that there were multiple violations of our code of conduct, as well as of our security policies, which included the exfiltration of confidential business-sensitive documents and private data of other employees.” When asked for comment, Mitchell declined, describing her mood as “confused and hurting.” Mitchell was a member of the recently formed Alphabet Workers Union.
Gebru has previously suggested that union protection could be a way for AI researchers to shield themselves from retaliation like the kind she encountered when a research paper she co-wrote was reviewed last year.
Earlier today, Dean apologized if Black and female employees were hurt by the firing of Gebru.
Additional changes to Google diversity policy were also announced today, including tying DEI goals to performance evaluations for employees at the VP level and above.
On Thursday, Google announced a restructuring of its AI ethics efforts that brings 10 teams within Google Research, including the Ethical AI team, under Google VP Marian Croak. Croak will report directly to Dean. In a video message, Croak called for more “diplomatic” conversations when addressing ways AI can harm people. Multiple members of the Ethical AI team said they found out about the restructuring in the press.
“Marian is a highly accomplished trailblazing scientist that I had admired and even confided in. It’s incredibly hurtful to see her legitimizing what Jeff Dean and his subordinates have done to me and my team,” Gebru told VentureBeat about the decision Thursday.
Mitchell and Gebru came together to co-lead the Ethical AI team in 2018, eventually creating what’s believed to be one of the most diverse divisions within Google Research. The Ethical AI team has published research on model cards, which bring transparency to AI models, and on how to perform internal algorithm audits.
Last year, the Ethical AI team hired its first sociologists and began to consider how to address algorithmic fairness with critical race theory.
At the VentureBeat Transform conference in 2019, Mitchell called diversity in hiring practices important to ethical deployments of AI.
The way Gebru was fired led to allegations of gaslighting, racism, and retaliation, as well as questions from thousands of Google employees and members of Congress with records of authoring legislation to regulate algorithms. Members of the Ethical AI team requested Google leadership take a series of steps to restore trust.
A Google spokesperson told VentureBeat that the Google legal team has worked with outside counsel to conduct an investigation into how Google fired Gebru.
Google also worked with outside counsel to investigate employee allegations of bullying and mistreatment by DeepMind cofounder Mustafa Suleyman, who led ethics research efforts at the London-based startup acquired by Google in 2014.
The spokesperson did not provide details when asked what steps the organization has taken to meet demands to restore trust the Ethical AI team made or those laid out in a letter signed by more than 2,000 employees shortly after the firing of Gebru that called for a transparent investigation in full view of the public.
A Google spokesperson also told VentureBeat that Google will work more closely with HR in regard to “certain employee exits that are sensitive in nature.” In a December 2020 interview with VentureBeat, Gebru called a companywide memo that cited de-escalation strategies as part of the solution “dehumanizing” and a response that paints her as an angry Black woman.
Updated 5:40 p.m. to include comment from Margaret Mitchell.
"
|
1,619 | 2021 |
"Google employee group urges Congress to strengthen whistleblower protections for AI researchers | VentureBeat"
|
"https://venturebeat.com/ai/google-employee-group-urges-congress-to-strengthen-whistleblower-protections-for-ai-researchers"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Exclusive Google employee group urges Congress to strengthen whistleblower protections for AI researchers Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Google’s decision to fire its AI ethics leaders is a matter of “urgent public concern” that merits strengthening laws to protect AI researchers and tech workers who want to act as whistleblowers. That’s according to a letter Google employees published today in support of the Ethical AI team at Google and former co-leads Margaret Mitchell and Timnit Gebru, whom Google fired two weeks ago and in December 2020, respectively.
Firing Gebru, one of the best known Black female AI researchers in the world and one of few Black women at Google, drew public opposition from thousands of Google employees. It also led critics to claim the incident may have “shattered” Google’s Black talent pipeline and signaled the collapse of AI ethics research in corporate environments.
“We must stand up together now, or the precedent we set for the field — for the integrity of our own research and for our ability to check the power of big tech — bodes a grim future for us all,” reads the letter published by the group Google Walkout for Change.
“Researchers and other tech workers need protections which allow them to call out harmful technology when they see it, and whistleblower protection can be a powerful tool for guarding against the worst abuses of the private entities which create these technologies.” Google Walkout for Change was created in 2018, and organizers said the group’s global walkout that year involved 20,000 Googlers in 50 cities around the world.
In a tweet days before Google fired her, Gebru asked whether anyone was working on regulation to protect AI ethics whistleblowers. In the days after being fired, she voiced support for unionization as a means of protecting AI researchers. The Alphabet Workers Union cites Gebru’s dismissal among the reasons it was formed in January.
The letter also urges academic conferences to refuse to review papers subjected to editing by corporate lawyers and to begin declining sponsorship from businesses that retaliate against ethics researchers. “Too many institutions of higher learning are inextricably tied to Google funding (along with other Big Tech companies), with many faculty having joint appointments with Google,” the letter reads.
The letter addressed to state and national lawmakers cites an article VentureBeat published two weeks after Google fired Gebru.
That piece looks at potential policy outcomes of the firing, including unionization and changes to whistleblower protection laws. The analysis — which drew on conversations with ethics, legal, and policy experts — cites UC Berkeley Center for Law and Technology co-director Sonia Katyal, who analyzed whistleblower protection laws in 2019 in the context of AI. In an interview with VentureBeat late last year, Katyal called these protections “totally insufficient.” “What we should be concerned about is a world where all of the most talented researchers like [Gebru] get hired at these places and then effectively muzzled from speaking. And when that happens, whistleblower protections become essential,” Katyal told VentureBeat.
This is the second time in as many weeks groups have urged Congress to extend protections to AI workers wanting to alert the world to harmful applications. Last week, the National Security Commission on Artificial Intelligence, chaired by former Google CEO Eric Schmidt, sent recommendations to President Biden and Congress that include a call for government protections for workers who feel compelled to raise concerns about “irresponsible AI development.” VentureBeat spoke with two sources familiar with Google AI ethics and policy matters who said they want to see stronger whistleblower protection for AI researchers. One person familiar with the matter said that at Google and other tech companies, people sometimes know something is broken but won’t fix it because they either don’t want to or don’t know how to.
“They’re stuck in this weird place between making money and making the world more equitable, and sometimes that inherent tension is very difficult to resolve,” the person, who spoke on condition of anonymity, told VentureBeat. “But I believe that they should resolve it because if you want to be a company that touches billions of people, then you should be responsible and held accountable for how you touch those billions of people.” After Gebru was fired, that source described a sense among Google employees from underrepresented groups that if they pushed the envelope too far they might be perceived as hostile and people would start filing complaints to push them out. She said this creates a feeling of “genuine unsafety” in the workplace and a “deep sense of fear.” She also told VentureBeat that when it comes to technology with the power to shape human lives, we need to have people throughout the design process with the authority to overturn potentially harmful decisions and ensure models learn from mistakes.
“Without that, we run the risk of … allowing algorithms that we don’t understand to literally shape our ability to be human, and that inherently isn’t fair,” she said.
The letter also criticizes Google leadership for “harassing and intimidating” not only Gebru and Mitchell, but other Ethical AI team members as well. Ethical AI team members were reportedly told to remove their names from a paper under review at the time Gebru was fired. The final copy of that paper, titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” was published this week at the Fairness, Accountability, and Transparency (FAccT) conference and does not cite any authors from Google. But a copy of the paper VentureBeat obtained lists Mitchell as a coauthor, as well as three other members of the Ethical AI team, each with extensive expertise in biased language models and human speech. Google AI chief Jeff Dean questioned the veracity of the research represented in that paper in an email to Google Research. Last week, FAccT organizers told VentureBeat the organization has suspended sponsorship from Google.
The letter published today calls on academics and policymakers to take action and follows changes to company diversity policy and the reorganization of 10 teams within Google Research. These include Ethical AI, now led by Google VP Marian Croak, who will report directly to AI chief Jeff Dean. As part of the change, Google will double staff devoted to employee retention and enact policy to engage HR specialists when certain employee exits are deemed sensitive. Google CEO Sundar Pichai mentioned better de-escalation strategies as part of the solution in a companywide memo. But in an interview with VentureBeat, Gebru called his memo “dehumanizing” and an attempt to characterize her as an angry Black woman.
A Google spokesperson told VentureBeat in an email following the company’s reorganization last month that diversity policy changes were undertaken based on the needs of the organization, not in response to any particular team at Google Research.
In the past year or so, Google’s Ethical AI team has explored a range of topics, including the need for a culture change in machine learning and an internal algorithm auditing framework , algorithmic fairness issues specific to India , the application of critical race theory and sociology , and the perils of scale.
The past weeks and months have seen a rash of reporting about the poor experiences of Black people and women at Google, as well as concerns about corporate influence over AI ethics research. Reuters reported in December 2020 that AI researchers at Google were told to strike a positive tone when referring to “sensitive” topics. Last week, Reuters reported that Google will reform its approach to research review, as well as additional instances of interference in AI research. According to an email obtained by Reuters, the coauthor of another paper about large language models referred to edits made by Google’s legal department as “deeply insidious.” In recent days, the Washington Post detailed how Google treats candidates from historically Black colleges and universities in a separate and unequal fashion, and NBC News reported that Google HR told employees who experienced racism or sexism to “assume good intent” and encouraged them to take mental health leave instead of addressing the underlying issues.
Instances of gender discrimination and toxic work environments for women and people of color have been reported at other major tech companies, including Amazon, Dropbox, Facebook, Microsoft, and Pinterest. Last month, VentureBeat reported that dozens of current and former Dropbox employees, particularly women of color, reported witnessing or experiencing gender discrimination at their company. Former Pinterest employee Ifeoma Ozoma, who previously spoke with VentureBeat about whistleblower protections, helped draft the proposed Silenced No More Act in California last month. If passed, the law would allow employees to report discrimination even if they have signed a nondisclosure agreement.
After Gebru was fired in December 2020, thousands of Google employees signed a Google Walkout letter protesting the way she was treated and what they termed “unprecedented research censorship.” That letter also called for a public inquiry into Gebru’s termination for the sake of Google users and employees. Members of Congress who have proposed regulations like the Algorithmic Accountability Act, including Rep. Yvette Clarke (D-NY) and Sen. Cory Booker (D-NJ), also sent Google CEO Sundar Pichai an email raising concerns over the way Gebru was fired, Google’s research integrity, and steps the company is taking to mitigate bias in large language models.
About a week after Gebru was fired, members of the Ethical AI team sent their own letter to company leadership. According to a copy VentureBeat obtained, Ethical AI team members demanded Gebru be reinstated and Samy Bengio remain the direct report manager for the Ethical AI team. They also state that reorganization is sometimes used to “[shunt] workers who’ve engaged in advocacy and organizing into new roles and managerial relationships.” The letter described Gebru’s termination as having a demoralizing effect on the Ethical AI team and outlined a number of steps needed to reestablish trust. That document cosigns letters of support for Gebru from Google’s Black Researchers group and the DEI Working Group. A Google spokesperson told VentureBeat outside counsel conducted an investigation but declined to share details. The Ethical AI letter also demands Google maintain and strengthen the department, guarantee the integrity of independent research, and clarify its sensitive review process by the end of Q1 2021. And the letter calls for a public statement that guarantees research integrity at Google, including in areas tied to the company’s business interests, such as large language models and datasets like JFT-300 , which has over a billion labeled images.
A Google spokesperson said Croak will oversee the work of about 100 AI researchers going forward. A source familiar with the matter told VentureBeat a reorganization that brings Google’s numerous AI fairness efforts under a single leader makes sense and had been discussed before Gebru was fired. The question, this person said, is whether Google will fund fairness testing and analysis.
“Knowing what these communities need consistently becomes hard when these populations aren’t necessarily going to make the company a bunch of money,” a person familiar with the matter told VentureBeat. “So yeah, you can put us all under the same team, but where’s the money at? Are you going to give a bunch of headcount and jobs so that people can actually go do this work inside of products? Because these teams are already overtaxed — like these teams are really, really small in comparison to the products.” Google walkout organizers Meredith Whittaker and Claire Stapleton claimed they also experienced retaliation before leaving the company, as did employees who attempted to unionize, many of whom identify as queer. Shortly before Gebru was fired, the National Labor Relations Board filed a complaint against Google that accuses the company of retaliating against employees and illegally spying on them.
The AI Index, an annual accounting of performance advances and AI’s impact on startups, business, and government policy, was released last week.
The report found that the United States differs from other countries in its large quantity of industry-backed research and called for more fairness benchmarks. The report also cited research finding only 3% of AI Ph.D. graduates in the U.S. are Black and 18% are women. The index noted that Congress is talking about AI more than ever and that AI ethics incidents — including Google firing Gebru — were among the most popular AI news stories of 2020.
VentureBeat requested an interview with Google VP Marian Croak, but a Google spokesperson declined on her behalf.
In a related matter, VentureBeat analysis about the “ fight for the soul of machine learning ” was cited in a paper published this week at FAccT about power, exclusion, and AI ethics education.
Updated 11:40 a.m. Pacific, March 9 to mention the National Security Commission on Artificial Intelligence’s recommendation to Congress and to note that Timnit Gebru spoke about whistleblower protections before Google fired her and about unionization after she was fired as ways to protect AI researchers from retaliation.
"
|
1,620 | 2,021 |
"Databricks raises $1 billion funding round at $28 billion valuation | VentureBeat"
|
"https://venturebeat.com/ai/databricks-raises-1-billion-funding-round-at-28-billion-valuation"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Databricks raises $1 billion funding round at $28 billion valuation Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Databricks today announced the close of a $1 billion funding round, bringing the company’s post-money valuation to $28 billion, a company spokesperson told VentureBeat. News of the funding round — the largest to date for Databricks — was first reported in late January by Newcomer.
This amounts to a series G funding round for the San Francisco-based data analysis and AI company. The $1 billion round was led by new investor Franklin Templeton, with participation from Amazon Web Services (AWS), the Canada Pension Plan Investment Board, Fidelity Management & Research, and Salesforce Ventures. Databricks CEO Ali Ghodsi told VentureBeat that part of the impetus behind the funding round was partnerships with cloud companies, which he called a symbiotic relationship of strategic importance for Databricks.
“Basically, we believe the vast majority of the data in the cloud is going to be in these data lakes, and we are building solutions to drive more of that,” he said.
The $1 billion funding will be used in part to fuel a merger and acquisition strategy with a focus on machine learning and data startups, a subject he told VentureBeat currently occupies 10-20% of his time every week. “I think there’s definitely a lot of interesting things happening, especially in natural language processing. There’s a lot of use cases in enterprise. They have a lot of textual data. Being able to sort of make sense of that can be super helpful for them,” he said.
Ghodsi lists continued advances in machine learning and the democratization of data and AI tools for people in business beyond computer scientists as major ongoing trends he expects to shape the future of Databricks. “All these other enterprises out there are going to do the same thing: They’re going to be able to use data and AI in a strategic way just like Google did over the past 10 years or they’re going to be replaced. So our job is to democratize that,” he said.
Previous funding rounds have been led by Andreessen Horowitz and New Enterprise Associates (NEA), with participation from investors like Microsoft and Battery Ventures. Previous $250 million and $400 million funding rounds, held in February and October 2019 respectively, focused on development of the Unified Analytics platform, Delta Lake , and performance optimization with the open source MLflow platform for running machine learning experiments and launching models into production. In June 2020, Databricks acquired Redash , a dashboard visualization tool for data scientists, and turned over control of MLflow to the Linux Foundation.
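For readers unfamiliar with MLflow, the core of experiment tracking fits in a few lines of Python. The following is a minimal illustrative sketch, not code from Databricks; the hyperparameter, metric, and toy dataset are placeholders.

    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split

    # Toy data standing in for a real training set
    X, y = make_regression(n_samples=1000, n_features=10, noise=0.1, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    with mlflow.start_run(run_name="example-run"):
        n_estimators = 100
        mlflow.log_param("n_estimators", n_estimators)  # record the hyperparameter
        model = RandomForestRegressor(n_estimators=n_estimators, random_state=0)
        model.fit(X_train, y_train)
        rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
        mlflow.log_metric("rmse", rmse)  # record the result
        mlflow.sklearn.log_model(model, "model")  # store the trained model as an artifact

Each run’s parameters, metrics, and artifacts then appear in the MLflow tracking UI, which is what makes experiments reproducible and comparable across a team.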
Databricks was founded in 2013 by the creators of Apache Spark, an open source framework for distributed computation across multiple machines that many deep learning projects use today. The group of data and machine learning researchers first met at UC Berkeley.
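To make the distributed computation point concrete, here is a generic PySpark sketch. It is not Databricks code, and the file paths and column names are invented; the point is that the same dataframe operations are split automatically across however many machines the cluster has.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("daily-event-counts").getOrCreate()

    # Hypothetical event logs spread across many files; Spark reads them in parallel
    events = spark.read.json("s3://example-bucket/events/*.json")

    daily_counts = (
        events
        .withColumn("day", F.to_date("timestamp"))
        .groupBy("day", "country")
        .agg(F.count("*").alias("events"),
             F.approx_count_distinct("user_id").alias("users"))
    )

    # The aggregation runs across the cluster; only the results are written out
    daily_counts.write.mode("overwrite").parquet("s3://example-bucket/output/daily_counts")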
Alongside companies like C3.ai and Snowflake, which filed IPOs in 2020, Databricks is the latest company focused on data analysis and AI to experience rapid growth. That growth comes despite a drop in U.S. gross domestic product over the past year the likes of which, according to the U.S. Department of Commerce, has not been seen since the 1940s.
Separately, Databricks cofounder and UC Berkeley professor Ion Stoica talked about reinforcement learning trends as part of VentureBeat’s Transform conference.
Updated 2:30 p.m. to include comment from Databricks CEO Ali Ghodsi and add funding round details.
"
|
1,621 | 2,021 |
"Confidence, uncertainty, and trust in AI affect how humans make decisions | VentureBeat"
|
"https://venturebeat.com/ai/confidence-uncertainty-and-trust-in-ai-affect-how-humans-make-decisions"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Confidence, uncertainty, and trust in AI affect how humans make decisions Share on Facebook Share on X Share on LinkedIn In 2019, as the Department of Defense considered adopting AI ethics principles , the Defense Innovation Unit held a series of meetings across the U.S. to gather opinions from experts and the public. At one such meeting in Silicon Valley, Stanford University professor Herb Lin argued that he was concerned about people trusting AI too easily and said any application of AI should include a confidence score indicating the algorithm’s degree of certainty.
“AI systems should not only be the best possible. Sometimes they should say ‘I have no idea what I’m doing here, don’t trust me.’ That’s going to be really important,” he said.
The concern Lin raised is an important one: People can be manipulated by artificial intelligence, with cute robots a classic example of the human tendency to trust machines. But understanding how people alter their decision-making when presented with the output of an AI system is critical as the technology is adopted to augment human activity in a range of high-stakes settings — from courts of law to hospitals and the battlefield.
Health, human trust, and machine learning
Health care is among the fastest-growing sectors for AI applications. According to the State of AI report McKinsey published in late 2020 , companies in health care, automotive, and manufacturing were most likely to report having increased investments in AI last year.
While the marriage of health care and AI can offer many benefits, the stakes don’t get much higher than human health, and there are a number of obstacles to building robust, trustworthy systems. Diagnosis may be one of the most popular health care applications, but it’s also susceptible to automation bias, which occurs when people become overreliant on answers generated by AI. A 2017 analysis of existing literature found numerous examples of automation bias in health care, typically involving diagnosis. Trusting AI systems may be an intractable human bias.
To consider how AI can manipulate human decision-making, some AI researchers are focused on understanding the degree to which people are influenced when AI predictions are matched with confidence or uncertainty metrics.
A year ago, a team at IBM Research performed experiments to assess how much showing people an AI prediction with a confidence score would impact their trust levels and overall accuracy when predicting a person’s annual income. The study found that sharing a confidence score did increase human levels of trust. But the researchers had expected confidence scores to improve human decision-making, which did not turn out to be the case.
“The fact that showing confidence improved trust and trust calibration but failed to improve the AI-assisted accuracy is puzzling, and it rejects our hypothesis that showing AI confidence score improves accuracy of AI-assisted predictions,” their paper reads.
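The mechanic at the center of the IBM experiment — pairing each prediction with the model’s confidence in it — can be reproduced with any probabilistic classifier. The sketch below is illustrative only, using a public dataset rather than the study’s actual income-prediction setup.

    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

    # Show each prediction together with the model's confidence in it,
    # which is the extra signal participants in such studies are asked to weigh
    predictions = model.predict(X_test[:5])
    probabilities = model.predict_proba(X_test[:5])
    for label, probs in zip(predictions, probabilities):
        print(f"prediction={label}, confidence={probs.max():.0%}")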
Himabindu Lakkaraju is an assistant professor at Harvard University and is currently working with doctors at Brigham and Women’s Hospital and Massachusetts General Hospital in Boston. She thinks the case of doctors using AI to determine diagnosis and treatment plans is a great application for uncertainty measurements.
“If your models are inaccurate, the risk that people might be over-trusting them is very real,” Lakkaraju said. “We’re trying to see how these kinds of tools can help first of all with how should we train doctors to read these kinds of measures, like posterior predictive distributions or other forms of uncertainty, how they can leverage this information, and if this kind of information will help them make better diagnosis decisions and treatment recommendation decisions.” Across a number of research projects over the past year, Lakkaraju and colleagues have considered how sharing information like the algorithm’s level of uncertainty impacts human trust. The researchers have conducted experiments with a range of skilled workers, from health care workers and legal professionals to machine learning experts, and even people who know a lot about apartment rental rates.
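One common way to produce the kind of uncertainty estimate Lakkaraju describes — short of a full Bayesian posterior predictive distribution — is to train an ensemble on bootstrap resamples and report the spread of its predictions. This is a minimal sketch on synthetic data, purely for illustration.

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.utils import resample

    X, y = make_regression(n_samples=500, n_features=8, noise=15.0, random_state=0)

    # Train several models, each on a different bootstrap resample of the data
    models = []
    for seed in range(20):
        X_boot, y_boot = resample(X, y, random_state=seed)
        models.append(GradientBoostingRegressor(random_state=seed).fit(X_boot, y_boot))

    # For a new case, report a point prediction plus a rough uncertainty band
    x_new = X[:1]
    predictions = np.array([m.predict(x_new)[0] for m in models])
    print(f"prediction: {predictions.mean():.1f} +/- {2 * predictions.std():.1f}")

A wide band is the model’s way of saying “don’t trust me much here” — the signal Lin argued every deployed system should expose.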
AI and machine learning applications in health care
Machine learning has already infiltrated health care in a variety of ways, with both positive and negative results. The 2018 AI Index report notes that computer vision has advanced to the point that AI fueled with large amounts of training data can identify skin cancer as accurately as a panel of doctors.
Some of the best-known machine learning systems in health care research classify diseases or generate treatment plans. In early 2019, Google AI researchers working with Northwestern Medicine created an AI model capable of detecting lung cancer from screening tests better than human radiologists with an average of eight years of experience. Around the same time, MIT CSAIL and Massachusetts General Hospital published a paper in the journal Radiology about a system capable of predicting the onset of breast cancer five years in advance using mammography exam imagery.
COVID-19 triggered the creation of computer vision for recognizing the novel coronavirus in CT scans, though doctors do not yet consider this an acceptable form of diagnosis. However, the American College of Radiology is working on projects that use machine learning to detect COVID-19 from CT scans.
AI is also being integrated into health care through smart hospitals with a range of sensors and edge devices. In May 2020, Nvidia introduced Clara Guardian software that uses machine learning to monitor distances between people and assist health care workers with contactless patient monitoring. Nvidia is also working with hospitals on federated learning applications to combine insights from multiple datasets while preserving privacy and to reduce the amount of data needed to train generative networks.
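The core idea of federated learning is that each hospital trains on its own data and only model updates — never patient records — leave the building. Below is a toy sketch of the federated averaging step; it is illustrative only, and production systems such as Nvidia’s add secure aggregation, differential privacy, and far more.

    import numpy as np

    def federated_average(client_weights, client_sizes):
        """Average model parameters from several clients, weighted by dataset size."""
        total = sum(client_sizes)
        return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

    # Three hospitals each hold locally trained parameters (toy numbers)
    hospital_weights = [np.array([0.9, -1.2]), np.array([1.1, -0.8]), np.array([1.0, -1.0])]
    hospital_sizes = [1200, 300, 500]

    global_weights = federated_average(hospital_weights, hospital_sizes)
    print(global_weights)  # the updated global model, sent back to every hospital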
Despite these advances, experts have identified serious risks associated with putting too much trust in predictions made by these types of AI models. A study published in Science last fall that assessed an algorithm used by U.S. hospitals found that millions of Black patients received a lower standard of care than white patients. More recently, an algorithm Stanford Medicine used to dispense COVID-19 vaccines prioritized some hospital administration executives over residents working directly with patients.
And a Google Health study introduced in April 2020 in partnership with the Ministry of Public Health in Thailand shows just how inaccurate AI can be when taken from a lab setting into the real world.
As an example of the opportunities AI offers, deep learning can extend screening tools to people who do not have access to a specialist. There are fewer than 2,000 medical specialists in Thailand trained to carry out diabetic retinopathy screenings and an estimated 4.5 million patients with diabetes. Because of this shortage of experts, the traditional diabetic retinopathy screening approach in Thailand can take up to 10 weeks, and AI researchers sought to improve both speed and scale. Results of the study carried out at 11 Thai clinics in 2018 and 2019 were published in a paper accepted for the ACM Human Factors in Computing Systems conference in April 2020.
While the program had some success, it also articulated hurdles associated with moving from the lab to a clinical setting. Google’s AI was intended to deliver results in 10 minutes with more than 90% accuracy, but the real-world performance came up short. Analysis of model predictions carried out in the first six months in nearly a dozen clinics in Thailand found that 21% of images failed to meet standards for the model trained with high-quality imagery in a lab. The study also found that socio-environmental factors like the need to quickly screen patients for a number of diseases and inadequate lighting in clinics can negatively impact model performance.
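That 21% failure rate is a reminder that deployment often hinges on mundane checks a lab never needs. A clinic-facing system might run a simple quality gate before invoking the model at all; the sketch below is a generic example of such a check, not Google’s actual pipeline, and the thresholds are arbitrary.

    import cv2

    def passes_quality_check(image_path, min_sharpness=100.0, min_brightness=40.0):
        """Reject images that are too blurry or too dark to grade reliably."""
        image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        if image is None:
            return False
        sharpness = cv2.Laplacian(image, cv2.CV_64F).var()  # low variance suggests blur
        brightness = image.mean()
        return sharpness >= min_sharpness and brightness >= min_brightness

    # Only images that pass the gate are sent on to the screening model
    if passes_quality_check("fundus_photo.jpg"):
        print("send to model")
    else:
        print("retake the photo")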
Black box AI
There’s still hope that machine learning can help doctors accurately diagnose diseases faster or screen patients who don’t have access to health care specialists. But the problem of flawed systems influencing human decision-making is amplified by black box algorithms whose results defy explainability. A 2019 study from Stanford University researchers suggests that “the fragility of neural network interpretation could be a much broader problem.” Before studying how uncertainty can influence the way doctors make decisions, in 2019 Lakkaraju and University of Pennsylvania research assistant Osbert Bastani created an AI system designed to mislead people.
For the experiment, researchers purposely created an untrustworthy AI system for bail decisions. They made the system after surveying law students who knew how pretrial bail hearings work to identify traits they associate with untrustworthy bail algorithms. The idea was for the system to hide the fact that it was basing decisions on untrustworthy traits.
The students agreed that race and gender are two of the least trustworthy metrics you can use when creating a bail risk assessment algorithm, so the untrustworthy system didn’t use those as stated reasons for a recommendation. However, in the United States, a history of segregation and racist housing policy means zip codes can often serve as a substitute for race.
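One way to see whether a feature such as zip code is acting as a proxy is to check how well it predicts the prohibited attribute on its own. The sketch below uses synthetic data and invented column names; it is meant only to illustrate the reconstruction problem the researchers describe.

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Synthetic data in which residential segregation ties zip code to race
    rng = np.random.default_rng(0)
    zip_codes = rng.integers(0, 50, size=5000)
    race = (zip_codes % 2 == 0).astype(int)                   # toy stand-in for segregation
    race = np.where(rng.random(5000) < 0.9, race, 1 - race)   # add some noise

    X = pd.DataFrame({"zip_code": zip_codes})
    score = cross_val_score(RandomForestClassifier(random_state=0), X, race, cv=5).mean()
    print(f"race predicted from zip code alone: {score:.0%} accuracy")
    # Well above chance, so dropping the race column does not remove the signal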
“We find that user trust can be manipulated by high-fidelity, misleading explanations. These misleading explanations exist since prohibited features (e.g., race or gender) can be reconstructed based on correlated features (e.g., zip code). Thus, adversarial actors can fool end users into trusting an untrustworthy black box [system] — e.g., one that employs prohibited attributes to make decisions,” the study reads.
The experiment confirmed the researchers’ hypothesis and showed how easily humans can be manipulated by black box algorithms.
“Our results demonstrate that the misleading explanations generated using our approach can in fact increase user trust by 9.8 times. Our findings have far-reaching implications both for research on machine learning interpretability and real-world applications of ML.”
Final thoughts
To make AI systems more accurate, Microsoft Research and others say professionals in fields like health care should become part of the machine learning development process. In an interview with VentureBeat in 2019, Microsoft Business AI VP Gurdeep Pall called working with human professionals in different fields the next frontier of machine learning.
But regardless of how AI is trained, studies have shown people from any profession, level of education, or background can suffer from automation bias, and black box systems only exacerbate the situation.
As a potential solution to people’s willingness to trust black box deep learning systems, in 2019 Lakkaraju introduced Model Understanding through Subspace Explanations (MUSE), a framework for interpretable AI that allows people to ask natural language questions about a model. In a small study where MUSE was used to interpret the activity of black box algorithms, participants were more likely to prefer explanations from MUSE than those provided by other popular frameworks for the same purpose.
Another study centered on critical AI systems and human decision-making also found interpretability to be important, calling for the use of a rapid test calibration process so people can better trust results.
A study Lakkaraju conducted with apartment rental listings last fall demonstrated that people with a background in machine learning had an advantage over non-experts when it came to understanding uncertainty curves but that showing uncertainty scores to both groups had an equalizing effect on their resilience to AI predictions.
Research by Lakkaraju and others is only beginning to answer questions about the way explanations about confidence or uncertainty scores affect people’s trust in AI. While the solution may never be as simple as choosing between showing uncertainty measures or confidence metrics, awareness of the pitfalls could go some way toward helping protect humans from machine learning’s limitations. Just as we judge other people before deciding how much to trust them, it seems only proper to place more trust in an AI system that has been created, maintained, and protected in ways scientifically proven to improve outcomes for people.
"
|
1,622 | 2,021 |
"Center for Applied Data Ethics suggests treating AI like a bureaucracy | VentureBeat"
|
"https://venturebeat.com/ai/center-for-applied-data-ethics-suggests-treating-ai-like-a-bureaucracy"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Center for Applied Data Ethics suggests treating AI like a bureaucracy Share on Facebook Share on X Share on LinkedIn Supreme Court of the United States Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
A recent paper from the Center for Applied Data Ethics (CADE) at the University of San Francisco urges AI practitioners to adopt terms from anthropology when reviewing the performance of large machine learning models. The research suggests using this terminology to interrogate and analyze bureaucracy, states, and power structures in order to critically assess the performance of large machine learning models with the potential to harm people.
“This paper centers power as one of the factors designers need to identify and struggle with, alongside the ongoing conversations about biases in data and code, to understand why algorithmic systems tend to become inaccurate, absurd, harmful, and oppressive. This paper frames the massive algorithmic systems that harm marginalized groups as functionally similar to massive, sprawling administrative states that James Scott describes in Seeing Like a State ,” the author wrote.
The paper was authored by CADE fellow Ali Alkhatib, with guidance from director Rachel Thomas and CADE fellows Nana Young and Razvan Amironesei.
The researchers particularly look to the work of James Scott, who has examined hubris in administrative planning and sociotechnical systems. In Europe in the 1800s, for example, timber industry companies began using abridged maps and a field called “scientific forestry” to carry out monoculture planting in grids. While the practice resulted in higher initial yields in some cases, productivity dropped sharply in the second generation, underlining the validity of scientific principles favoring diversity. Like those abridged maps, Alkhatib argues, algorithms can both summarize and transform the world and are an expression of the difference between people’s lived experiences and what bureaucracies see or fail to see.
The paper, titled “To Live in Their Utopia: Why Algorithmic Systems Create Absurd Outcomes,” was recently accepted for publication at the ACM Conference on Human Factors in Computing Systems (CHI), which will be held in May.
Recalling Scott’s analysis of states, Alkhatib warns against harms that can result from unhampered AI, including the administrative and computational reordering of society, a weakened civil society, and the rise of an authoritarian state. Alkhatib notes that such algorithms can misread and punish marginalized groups whose experiences do not fit within the confines of data considered to train a model.
People privileged enough to be considered the default by data scientists and who are not directly impacted by algorithmic bias and other harms may see the underrepresentation of race or gender as inconsequential.
Data Feminism authors Catherine D’Ignazio and Lauren Klein describe this as “privilege hazard.” As Alkhatib put it, “other people have to recognize that race, gender, their experience of disability, or other dimensions of their lives inextricably affect how they experience the world.” He also cautions against uncritically accepting AI’s promise of a better world.
“AIs cause so much harm because they exhort us to live in their utopia,” the paper reads. “Framing AI as creating and imposing its own utopia against which people are judged is deliberately suggestive. The intention is to square us as designers and participants in systems against the reality that the world that computer scientists have captured in data is one that surveils, scrutinizes, and excludes the very groups that it most badly misreads. It squares us against the fact that the people we subject these systems to repeatedly endure abuse, harassment, and real violence precisely because they fall outside the paradigmatic model that the state — and now the algorithm — has constructed to describe the world.” At the same time, Alkhatib warns people not to see AI-driven power shifts as inevitable.
“We can and must more carefully reckon with the parts we play in empowering algorithmic systems to create their own models of the world, in allowing those systems to run roughshod over the people they harm, and in excluding and limiting interrogation of the systems that we participate in building.” Potential solutions the paper offers include undermining oppressive technologies and following the guidance of Stanford AI Lab researcher Pratyusha Kalluri, who advises asking whether AI shifts power , rather than whether it meets a chosen numeric definition of fair or good. Alkhatib also stresses the importance of individual resistance and refusal to participate in unjust systems to deny them power.
Other recent solutions include a culture change in computer vision and NLP , reduction in scale , and investments to reduce dependence on large datasets that make it virtually impossible to know what data is being used to train deep learning models. Failure to do so, researchers argue, will leave a small group of elite companies to create massive AI models such as OpenAI’s GPT-3 and the trillion-parameter language model Google introduced earlier this month.
The paper’s cross-disciplinary approach is also in line with a diverse body of work AI researchers have produced within the past year. Last month, researchers released the first details of OcéanIA , which treats a scientific project for identifying phytoplankton species as a challenge for machine learning, oceanography, and science. Other researchers have advised a multidisciplinary approach to advancing the fields of deep reinforcement learning and NLP bias assessment.
We’ve also seen analysis of AI that teams sociology and critical race theory, as well as anticolonial AI , which calls for recognizing the historical context associated with colonialism in order to understand which practices to avoid when building AI systems. And VentureBeat has written extensively about the fact that AI ethics is all about power.
Last year, a cohort of well-known members of the algorithmic bias research community created an internal algorithm-auditing framework to close AI accountability gaps within organizations. That work asks organizations to draw lessons from the aerospace, finance, and medical device industries. Coauthors of the paper include Margaret Mitchell and Timnit Gebru, who used to lead the Google AI ethics team together. Since then, Google has fired Gebru and, according to a Google spokesperson, opened an investigation into Mitchell.
With control of the presidency and both houses of Congress in the U.S., Democrats could address a range of tech policy issues in the coming years, from laws regulating the use of facial recognition by businesses, governments, and law enforcement to antitrust actions to rein in Big Tech. However, a 50-50 Senate means Democrats may be forced to consider bipartisan or moderate positions in order to pass legislation.
The Biden administration emphasized support for diversity and distaste for algorithmic bias in a televised ceremony introducing the science and technology team on January 16. Vice President Kamala Harris has also spoken passionately against algorithmic bias and automated discrimination. In the first hours of his administration, President Biden signed an executive order to advance racial equality that instructs the White House Office of Science and Technology Policy (OSTP) to participate in a newly formed working group tasked with disaggregating government data. This initiative is based in part on concerns that an inability to analyze such data impedes efforts to advance equity.
"
|
1,623 | 2,021 |
"Building AI for the Global South | VentureBeat"
|
"https://venturebeat.com/ai/building-ai-for-the-global-south"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Building AI for the Global South Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Harm wrought by AI tends to fall most heavily on marginalized communities. In the United States, algorithmic harm may mean the false arrest of Black men, the disproportionate rejection of female job candidates, or the targeting of people who identify as queer.
In India, those impacts can further injure marginalized populations like Muslim minority groups or people oppressed by the caste system. And algorithmic fairness frameworks developed in the West may not transfer directly to people in India or other countries in the Global South, where algorithmic fairness requires understanding of local social structures and power dynamics and a legacy of colonialism.
That’s the argument behind “De-centering Algorithmic Power: Towards Algorithmic Fairness in India,” a paper accepted for publication at the Fairness, Accountability, and Transparency (FAccT) conference, which is taking place this week. Other works that seek to move beyond a Western-centric focus include Shinto or Buddhism-based frameworks for AI design and an approach to AI governance based on the African philosophy of Ubuntu.
“As AI becomes global, algorithmic fairness naturally follows. Context matters. We must take care to not copy-paste the Western normative fairness everywhere,” the paper reads. “The considerations we identified are certainly not limited to India; likewise, we call for inclusively evolving global approaches to Fair-ML.” The paper’s coauthors concluded that conventional measurements of algorithm fairness make assumptions based on Western institutions and infrastructures after they conducted 36 interviews with researchers, activists, and lawyers working with marginalized Indian communities. Among the five coauthors, three are Indian and two are white, according to the paper.
Google research scientist Nithya Sambasivan, who previously worked to create a phone broadcasting system for sex workers in India, is the lead author. She’s also head of human-computer interaction at Google Research India. Coauthors include Ethical AI team researchers Ben Hutchinson and Vinodkumar Prabhakaran. Hutchinson and Prabhakaran were listed as coauthors of a paper titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” that was also accepted for publication at FAccT this year, but the version submitted to FAccT does not include their names. That paper was the subject of debate at the time former Google AI ethics co-lead Timnit Gebru was fired and concludes that extremely large language models harm marginalized communities by perpetuating stereotypes and biases. Organizers of the conference told VentureBeat this week that FAccT has suspended its sponsorship relationship with Google.
The paper about India identifies factors commonly associated with algorithmic harm in the country, including models being overfit to digitally rich profiles, which usually means middle class men, and a lack of ways to interrogate AI.
As a major step toward progress, the coauthors point to the AI Observatory , a project to document harm from automation in India that launched last year with support from the Mozilla Foundation. The paper also calls for reporters to go beyond business reporting and ask tech companies tough questions, stating, “Technology journalism is a keystone of equitable automation and needs to be fostered for AI.” “While algorithmic fairness keeps AI within ethical and legal boundaries in the West, there is a real danger that naïve generalization of fairness will fail to keep AI deployments in check in the non-West,” the paper reads. “We must take pains not to develop a general theory of algorithmic fairness based on the study of Western populations.” The paper is part of a recent surge in efforts to build AI that works for the Global South.
A 2019 paper about designing AI for the Global South describes the term “Global South” as similar to the term “third world,” with a shared history of colonialism and development goals. Global South does not mean simply the Southern Hemisphere, as Northern Hemisphere nations like China, India, and Mexico are generally included, while Australia is in the Southern Hemisphere but is considered part of the Global North. China seems to be set aside since its AI ambitions and results instill fear in politicians in Washington, D.C. and executives in Big Tech alike.
“The broad concern is clear enough: If privileged white men are designing the technology and the business models for AI, how will they design for the South?” the 2019 paper reads. “The answer is that they will design in a manner that is at best an uneasy fit, and at worst amplifies existing systemic harm and oppression to horrifying proportions.” Another paper accepted for publication at FAccT this week and covered by VentureBeat examines common hindrances to data sharing in Africa. Written primarily by AI researchers who grew up or live in Africa, the paper urges relationships to data that build trust and consider historical context, as well as current trends of Big Tech companies growing operations in Africa. Like the Google paper, that work draws conclusions from interviews with local experts.
“In recent years, the African continent as a whole has been considered a frontier opportunity for building data collection infrastructures. The enthusiasm around data sharing, and especially in machine learning or data science for development/social good settings, has ranged from tempered discussions around new research avenues to proclamations that ‘the AI invasion is coming to Africa (and it’s a good thing).’ In this work, we echo previous discussions that this can lead to data colonialism and significant, irreparable harm to communities.” The African data industry is expected to see steady growth in the coming years. Companies like Amazon’s AWS and Microsoft’s Azure opened their first datacenters in Africa in 2019 and 2020, respectively. Such trends have led to examination of data practices around the world, including in the Global South.
Last year, MIT hosted a three-day summit in Boston to discuss AI from a Latin American perspective. The winner of a pitch competition at that event was a predictive model for attrition rates in higher education in Mexico.
Above: The 2020 Global AI Readiness Index comparing preparedness and capacity across 33 different metrics
As part of the summit, Latinx in AI founder Laura Montoya gave a presentation about the Global AI Readiness (GAIR) score of Caribbean and Latin American countries, alongside factors like unemployment rates, education levels, and the cost of hiring AI researchers.
The inaugural Government AI Readiness Index ranked Mexico highest among Latin American nations, followed by Uruguay and Colombia. Readiness rankings were based on around a dozen factors, including skills, education levels, and governance. Cuba ranked last in the region. When coauthors introduced GAIR in 2019, they questioned whether the Global South would be left out of the fourth industrial revolution. That concern was echoed in the 2020 report.
“If inequality in government AI readiness translates into inequality in AI implementation, this could entrench economic inequality and leave billions of citizens across the Global South with worse quality public services,” authors of the report said.
In the 2020 GAIR , Uruguay inched ahead of Mexico. At #42 in the world, Uruguay is the highest-ranking country in Latin America. Top 50 nations in the AI readiness index are almost entirely in the Global North. And authors of the report stress that having the capability of advancing isn’t the same thing as successful implementation.
Montoya insists that Caribbean and Latin American nations must consider factors like unemployment rates and warns that brain drain can also be a significant factor and lead to a lack of mentors for future generations.
“Overall, Latin American and Caribbean do have fairly high education levels, and specifically they actually develop more academic researchers in the area of AI than other regions globally, which is of note, but oftentimes those researchers with high technological skills will leave their country of origin in order to seek out potential job opportunities or resources that are not available in their country of origin,” she said.
Leda Basombrio is the data science lead at a center of excellence established by Banco de Credito del Peru. Speaking as part of a panel on the importance of working with industry, she described the difficulty of trying to recruit Latinx AI talent away from Big Tech companies like Facebook or Google in the United States. The majority of AI Ph.D. graduates in the U.S. today are born outside the United States, and about four out of five stay in the U.S. after graduation.
Solutions built elsewhere don’t simply transfer without consideration of local context and culture, she said. Americans or Europeans are likely unfamiliar with the financial realities in Peru, like microfinance loans or informal economic activity.
“The only people that are capable of solving and addressing [problems] using AI as a tool or not are ourselves. So we have to start giving ourselves more credit and start working on those fields because if we expect resolutions will come from abroad, nothing will happen, and I see that we do have the talent, experience, everything we can get,” she said.
AI policy: Global North vs. the Global South
Diplomats and national government leaders have met on several occasions to discuss AI investment and deployment strategies in recent years, but those efforts have almost exclusively involved Global North nations.
In 2019, OECD member nations and others agreed to a set of principles in favor of the “responsible stewardship of trustworthy AI.” More than 40 nations signed the agreement, but only five were from the Global South.
Later that year, the G20 adopted AI principles based on the OECD principles calling for human-centered AI and the need for international cooperation and national policy to ensure trustworthy AI. But that organization only includes six Global South nations: Brazil, India, Indonesia, Mexico, South Africa, and Turkey.
The Global Partnership on AI ( GPAI ) was formed last year in part to counter authoritarian governments’ efforts to implement surveillance tech and China’s AI ambitions. The body of 15 nations includes the U.S., but Brazil, India, and Mexico are its only members from the Global South.
Last year, the United States Department of Defense brought together a group of allies to consider artificial intelligence applications in the military, but that was primarily limited to U.S. allies from Europe and East Asia.
No nations from Africa or South America participated.
Part of the lack of Global South participation in such efforts may have to do with the fact that several countries still lack national AI strategies. In 2017, Canada became the first country in the world to form a national AI strategy, followed by nations in Western Europe and the U.S. An analysis released this week found national AI strategies are under development in parts of South America, like Argentina and Brazil, and parts of Africa, including Ethiopia and Tunisia.
Above: A global map of national AI policy initiatives according to the 2021 AI Index at Stanford University
An analysis published in late 2020 found a growing gap or “compute divide” between businesses and universities with the compute and data resources for deep learning and those without. In an interview with VentureBeat earlier this year about an OECD project to help nations understand their compute needs, Nvidia VP of worldwide AI initiatives Keith Strier said he expects a similar gap to form between nations.
“There’s a clear haves and have-nots that’s evolving, and it’s a global compute divide. And this is going to make it very hard for tier two countries in Africa, in Latin America and Southeast Asia and Central Europe. [I] mean that the gap in their prosperity level is going to really accelerate their inability to support research, support AI startups, keep young people with innovative ideas in these fields in their country. They’re all going to flock to big capitals — brain drain,” Strier said.
The OECD AI Policy Observatory maintains a database of national AI policies and is helping nations put ethical principles into practice. OECD AI Policy Observatory administrator Karine Perset told VentureBeat in January that some form of AI strategy is underway in nearly 90 nations, including Kenya and others in the Global South.
There are other encouraging signs of progress in AI in Africa.
The machine learning tutorial project Fast.ai found high growth in cities like Lagos, Nigeria in 2019, the same year the African Union formed an AI working group to tackle common challenges, and GitHub ranked a number of African and Global South nations by growth in their contribution to open source repositories. In education, the African Master’s in Machine Intelligence was established in 2018 with support from Facebook, Google, the African Institute for Mathematical Sciences, and prominent Western AI researchers from industry and academia.
The Deep Learning Indaba conference has flourished in Africa, but AI research conferences are generally held in North America and Europe. The International Conference on Learning Representations (ICLR) was scheduled to take place in Ethiopia in 2020 and would have been the first major machine learning conference in Africa , but it was scrapped due to the COVID-19 pandemic.
The AI Index released earlier this week found that Brazil, India, and South Africa have some of the highest levels of hiring in AI around the world, according to LinkedIn data.
Analysis included in that report finds that attendance at major AI research conferences roughly doubled in 2020. COVID-19 forced AI conferences to move online, which led to greater access worldwide. AI researchers from Africa have faced challenges when attempting to attend conferences like NeurIPS on numerous occasions in the past.
Difficulty faced by researchers from parts of Africa, Asia, and Eastern Europe led the Partnership on AI to suggest that more governments create visas for AI researchers to attend conferences, akin to the visas some nations have for athletes, doctors, and entrepreneurs.
Make a lexicon for AI in the Global South
In late January, Ranjit Singh, a member of the AI on the Ground team at Data & Society, launched a project to map AI in the Global South over the course of the year. As part of that project, he will collaborate with members of the AI community, including AI Now Institute, which is working to build a lexicon around conversations about AI for the Global South.
“The story of how public-private partnerships are imagined in the context of AI, especially in the Global South, and the nature of these relationships that are emerging, I find that to be quite a fascinating part of this study,” Singh said.
Singh said he focuses on conversations about AI in the Global South because identifying keywords can help people understand critical issues and provide information needed for governance, policy, and regulation.
“So I want to basically move from what the conversation and keywords that scholarly research, as well as practitioners in the space, talk about and use to then start thinking about, ‘OK, if this is the vocabulary of how things work, or how people talk about these things, then how do we start thinking about governance of AI?'” he said.
A paper published at FAccT and coauthored by Singh and the Data & Society AI on the Ground team considers how environmental, financial, and human rights impact assessments are used to measure commonalities and quantify impact.
Global South AI use cases
Rida Qadri is a Ph.D. candidate who grew up in Pakistan and now studies urban information systems and the Global South at MIT. Qadri says papers on data and AI in India and Africa published at FAccT emphasize a narrative around ethics and communities influenced by legacies of colonialism.
“They’re thinking about those kinds of ethical concerns that now Silicon Valley is being critiqued for. But what’s interesting is they position themselves as homegrown startups that are solving developing world problems. And because the founders are all from the developing world, they automatically get a lot of legitimacy. But the language that they’re speaking is just directly what Silicon Valley would be speaking — with some sort of ICT for development stuff thrown in, like empowering the poor, like educating farmers. You have ethics washing in the Global North, and in the developing world we have development washing or empowerment speak, like poverty porn,” she said.
Qadri also sees ways AI can improve lives and says that building innovative AI for the Global South could help solve problems that plague businesses and governments around the world, particularly when it comes to working in lean or resource-strapped environments.
Trends she’s watching around AI in the Global South include security and surveillance, census and population counts using satellite imagery, and predictions of poverty and socio-economics.
There are also numerous efforts related to creating language models or machine translation. Qadri follows Matnsāz , a predictive keyboard and developer tool for Urdu speakers. There’s also the Masakhane open source project to provide machine translation for thousands of African languages to preserve local languages and enable commerce and communication. That project focuses on working with low-resource languages, those with less text data for machine translation training than languages like English or French.
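Many of the resulting translation models are published openly and can be tried in a few lines. The sketch below uses the Hugging Face transformers pipeline; the English-to-Swahili model identifier is an assumption chosen for illustration rather than a specific Masakhane release.

    from transformers import pipeline

    # Model name is illustrative; swap in whichever low-resource language pair is needed
    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-sw")

    result = translator("Where is the nearest clinic?", max_length=60)
    print(result[0]["translation_text"])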
Final thoughts
Research published at FAccT this week highlights concerns about data colonialism driven by the Global North. If AI can build what Ruha Benjamin refers to as a new Jim Code in the United States, it seems critically important to consider trends of democratization, or the lack thereof, and how AI is being built in nations with a history of colonialism.
It’s also true that brain drain is a major factor for businesses and governments in the Global South and that a number of international AI coalitions have been formed largely without these nations. Let’s hope those coalitions expand. Doing so could involve reconciling issues between countries largely known for colonization and those that were colonized. And enabling the responsible development and deployment of AI in Global South nations could help combat issues like data colonialism in other parts of the world.
But issues of trust remain a constant across international agreements and papers published at FAccT by researchers from Africa and India this week. Trust is also highlighted in agreements from the OECD and G20.
There can be a temptation to view AI ethics purely as a human rights issue, but the fair and equitable deployment of artificial intelligence is also essential to AI adoption and business risk management.
Above: Global AI adoption rates according to McKinsey
Jack Clark was formerly director of policy at OpenAI and is part of the steering committee for the AI Index, an annual report on the progress of AI in business, policy, and use case performance. He told VentureBeat earlier this week that the AI industry is industrializing rapidly, but it badly needs benchmarks and ways to test AI systems to move technical performance standards forward. As businesses and governments increase deployments, benchmark challenges can also help AI practitioners measure progress toward shared goals and rally people around common causes — like preventing deforestation.
The idea of common interests comes up in Ranjit Singh’s work at Data & Society. He said his project is motivated by a desire to map and understand the distinct language used to discuss AI in Global South nations, but also to recognize global concerns and encourage people to work together on solutions. These might include attempts to understand when a baby’s cough is deadly, as the startup Ubenwa is doing in Canada and Nigeria ; seeking public health insights from search engine activity; and fueling local commerce with machine translation. But whatever the use case, experts in Africa and India stress that equitable and successful implementation depends on involving local communities from the inception.
"
|
1,624 | 2,021 |
"Bearing.ai emerges from stealth to power recommendations for shipping boat captains | VentureBeat"
|
"https://venturebeat.com/ai/bearing-ai-emerges-from-stealth-to-power-recommendations-for-shipping-boat-captains"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Bearing.ai emerges from stealth to power recommendations for shipping boat captains Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Bearing.ai emerged from stealth today to launch AI-powered software that provides predictions to the maritime shipping industry and tanker boat captains. The idea is to optimize shipping route navigation based on fuel efficiency, profit, and safety. Since the company was founded in 2019, Bearing.ai has raised $3 million from the AI Fund, a $175 million endeavor led by former Google Brain cofounder Andrew Ng, as well as Japanese shipping company Mitsui and Co.
Bearing.ai CEO Dylan Kiel told VentureBeat the startup was able to train its first models with historical data from 2,500 ships provided by investor Mitsui and Co. As part of the arrangement, Bearing.ai announced deals to provide services to 300 K Line vessels, as well as shipping companies MOL and ZeroNorth. Fuel consumption is Bearing.ai’s primary focus, Kiel said, because it’s the biggest single driver of operating costs for shipping companies. When making predictions, Bearing.ai takes in sensor data and considers factors like ship dimensions, location, and weather conditions like wind speed and wave size.
“Weather is one of the single biggest drivers of the variance that occurs with fuel consumption for a given voyage. I can have the same ship going on the same route, let’s say Tokyo to San Diego, and [carrying] the same cargo. And the consumption I have from voyage A to voyage B could be different by 30-40% based upon the weather,” Kiel said.
Bearing.ai claims its models are capable of predicting the fuel consumption of a container or hull ship with 98% accuracy, a feat made possible by fuel sensors and speed sensors collecting data on a minute-by-minute basis.
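At its core, this kind of prediction is a supervised regression over sensor and weather features. The sketch below is heavily simplified and invented for illustration — Bearing.ai has not published its models, and the file path and column names here are placeholders.

    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.metrics import mean_absolute_percentage_error
    from sklearn.model_selection import train_test_split

    # Hypothetical minute-level voyage records
    df = pd.read_csv("voyage_sensor_log.csv")  # placeholder path
    features = ["speed_knots", "draft_m", "ship_length_m",
                "wind_speed_ms", "wave_height_m", "heading_vs_wind_deg"]
    target = "fuel_tons_per_day"

    X_train, X_test, y_train, y_test = train_test_split(df[features], df[target], random_state=0)
    model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

    error = mean_absolute_percentage_error(y_test, model.predict(X_test))
    print(f"mean absolute percentage error: {error:.1%}")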
Container ships enable a vast amount of global trade but saw a sharp decline in 2020 due to the COVID-19 pandemic.
Like other industries, shipping faces pressure to automate, and Kiel said Bearing.ai wants to help companies consider a range of options to save money.
“It’s not just choosing the right route for one ship. If you choose that right route for that ship that impacts what the other ships need to do and that impacts what you’re going to do with your contract and when you’re going to clean that ship and so on … there’s a lot of decision points that ultimately are all interconnected if you’re trying to optimize the whole system,” he said. “Pretty much every decision you can make as a decision company — whether it’s fuel to use, the ship to use, the route to take, how you position your fleet — all of that impacts your ultimate operational efficiency.” Other examples of automation startups entering the maritime space include Sea Machines , which is working on autonomous shipping navigation, and Orca AI , which makes systems to help ships avoid collisions. Also of note is recent work by AI researchers to create amphibious robots capable of movement on sea and land.
Bearing.ai was founded in June 2019 and is based in Palo Alto, California. The company has 10 employees.
"
|
1,625 | 2,021 |
"Band of AI startups launch 'rebel alliance' for interoperability | VentureBeat"
|
"https://venturebeat.com/ai/band-of-ai-startups-launch-rebel-alliance-for-interoperability"
|
Band of AI startups launch ‘rebel alliance’ for interoperability
More than 20 AI startups have banded together to create the AI Infrastructure Alliance in order to build a software and hardware stack for machine learning and adopt common standards. The alliance brings together companies like Algorithmia ; Determined AI , which works with deep learning; data monitoring startup WhyLabs ; and Pachyderm , a data science company that raised $16 million last year in a round led by M12, formerly Microsoft Ventures. A spokesperson for the alliance said partner organizations have raised about $200 million in funding from investors.
Dan Jeffries, chief tech evangelist at Pachyderm, will serve as director of the alliance. He said the group began to form from conversations that started over a year ago. Participants include a number of companies whose founders have experience running systems at scale within Big Tech companies. For example, WhyLabs CEO and cofounder Alessya Visnjic worked on fixing machine learning issues at Amazon, and Jeffries previously worked with machine learning at Red Hat.
But in a conversation with VentureBeat, Jeffries referred to the endeavor for small to medium-size businesses in AI as a “rebel alliance against the empire” that will serve as an alternative to offerings from Big Tech cloud providers, which he characterized as “building an infrastructure just to lock you in.” “Don’t get me wrong: There’s nothing wrong with a big proprietary tool if you’re all in, but a true canonical stack is one that’s portable across environments,” he said. “To become part of a truly foundational stack of the future, you’ve got to run in multiple environments. And you’ve got to play nice with others in the sandbox, and you have to have interoperability in that market.” “Not everyone in the group will survive. But we’ve talked about this. Like we’re in this Cambrian explosion period, and the Alliance at this point, it will serve where we are in the adoption curve. Some of these companies will go away or fold into whoever the eventual winner is,” he said.
The alliance initially plans to focus on things like small partnerships between developers working on tools and frameworks, facilitating joint documentation, and creating test software for integration. Eliminating bias in algorithms before they are deployed will not be considered part of what Jeffries refers to as the canonical stack.
Examples of alliances formed in the AI space to tackle interoperability include the Open Neural Network Exchange (ONNX), created by Facebook and Microsoft in 2017, and open source projects like MLflow, TensorFlow, and Apache Spark, which cofounders of Determined AI contributed to while at UC Berkeley.
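For readers unfamiliar with what interoperability looks like in practice, ONNX is the clearest existing example: a model trained in one framework can be exported to a neutral format and served by any compatible runtime. A rough sketch (not an alliance deliverable, just standard PyTorch and ONNX Runtime usage):

# Illustrative ONNX round trip: export from one framework, run in a separate runtime.
import numpy as np
import torch
import onnxruntime as ort

model = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, 1))
dummy_input = torch.randn(1, 4)

# Export to the framework-neutral ONNX format.
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["output"])

# Any ONNX-compatible runtime can now load and serve the model.
session = ort.InferenceSession("model.onnx")
outputs = session.run(None, {"input": np.random.randn(1, 4).astype(np.float32)})
print(outputs[0].shape)

That kind of portability across environments is the property Jeffries says a truly foundational stack needs.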
"
|
1,626 | 2,021 |
"AI Weekly: The Biden administration, algorithmic bias, and restoring the soul of America | VentureBeat"
|
"https://venturebeat.com/ai/ai-weekly-the-biden-administration-algorithmic-bias-and-restoring-the-soul-of-america"
|
AI Weekly: The Biden administration, algorithmic bias, and restoring the soul of America
[Image: Joe Biden is sworn in as the 46th president of the United States by Chief Justice John Roberts during the 59th Presidential Inauguration at the U.S. Capitol on January 20, 2021. Photo by Andrew Harnik - Pool/Getty Images]
This week, U.S. President Joe Biden spoke to the country about a need to restore the soul of America and reminded listeners of “what we owe our forebears, one another, and generations to follow.” Biden said he chose to run for president in reaction to a white supremacist rally in Charlottesville, Virginia. On Wednesday, he became the first president to mention white supremacy in a presidential inauguration speech, calling for the defeat of domestic terrorism and a social hierarchy older than the United States. In his speech, Biden stressed virtues like tolerance, humility, empathy, and, above all, unity.
There was a prayerful moment of silence for 400,000 Americans lost to COVID-19 and the message that we must defeat the forces that divide us in order to make the U.S. a “leading force for good in the world.” It was, as the New York Times Daily podcast noted, not unlike a church sermon.
But the Biden administration enters office with one of the longest to-do lists in U.S. history. Urgent issues range from COVID-19 to economic recovery to addressing inequality wrought by America’s original sin of racism. Biden also follows his predecessor’s lack of executive leadership and a coup attempt. Among multiple crises the administration must tackle from the outset, major artificial intelligence issues are set to play out in the coming weeks and years.
Applying civil rights to tech policy
Democratic control of both houses of Congress means we could see new legislation to address a range of tech policy issues.
Early signs indicate the Biden administration plans to handle enforcement of existing law and regulation very differently from the Trump administration, particularly when it comes to issues like algorithmic bias, according to FTC Commissioner Rebecca Slaughter.
“I think there’s a lot of unity on the Democratic side and a lot of consensus about the direction that we need to go,” Slaughter said as part of a Protocol panel conversation about tech and the first 100 days of the Biden administration. On Thursday, Biden appointed Slaughter acting chair of the Federal Trade Commission (FTC). “For me, algorithmic bias is an economic justice issue. We see disparate outcomes coming out of algorithmic decision-making that disproportionately affect and harm Black and brown communities and affect their ability to participate equally in society. That’s something we need to address.” Speaking as a commissioner, she said one of her priorities is centering enforcement on anti-racist practices and confronting unfair market practices that disproportionately impact people of color. This will include treating antitrust enforcement and unfair market practices as racial justice issues.
Brookings Institution senior fellow and Center for Technology Innovation director Nicol Turner Lee also spoke during the panel conversation. Without attention to issues like algorithmic bias or data privacy, Lee said, “we actually run the risk of going backward.” The question becomes, Lee added, what kind of policy and enforcement support the Biden administration will allocate to that aim.
“There’s no reason that you couldn’t start in this administration applying every existing civil rights statute to tech. Period. When you design a credit analysis tool that relies on algorithms, make sure it’s compliant with the Fair Credit Reporting Act. Going to design a housing tool? Make sure it complies with the Fair Housing Act. To me, that’s a simple start that actually had some traction in Congress,” Lee said.
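Compliance checks of the kind Lee describes often begin with very simple measurements. As a hedged illustration (hypothetical data, using the common “four-fifths rule” as the threshold), here is what checking a credit model’s approval rates for disparate impact across groups can look like:

# Hypothetical disparate-impact check on a credit model's decisions (the "four-fifths rule").
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,    1,   0,   1,   0,   1,   0,   0],
})

rates = decisions.groupby("group")["approved"].mean()
disparate_impact = rates.min() / rates.max()
print(rates)
print(f"Disparate impact ratio: {disparate_impact:.2f}")  # below 0.8 is a common red flag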
Earlier this month, Biden appointed civil rights attorneys Vanita Gupta and Kristen Clarke as associate attorney general and assistant attorney general for civil rights, respectively.
Both have a history of challenging algorithmic bias at companies like Facebook, Google, and Twitter. In testimony and letters to Congress in recent years, Gupta has stressed that machine learning “must protect civil rights, prevent discrimination, and advance equal opportunity.” Finally, last week Biden said he planned to elevate the position of science advisor, held by Office of Science and Technology Policy (OSTP) head Dr. Eric Lander, to cabinet level. Dr. Alondra Nelson will act as OSTP deputy director for science and society. AI, she said in a ceremony with Biden and Vice President Kamala Harris , is technology that can “reveal and reflect even more about the complex and sometimes dangerous social architecture that lies beneath the scientific progress that we pursue.” “When we provide inputs to the algorithm; when we program the device; when we design, test, and research; we are making human choices, choices that bring our social world to bear in a new and powerful way,” she said.
In the first hours of his administration, Biden signed an executive order to advance racial equality that instructs the OSTP to participate in a newly formed working group tasked with disaggregating government data. This initiative is based in part on concerns that an inability to analyze such data impedes efforts to advance equity.
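Disaggregation itself is mechanically simple once the relevant demographic fields are collected; the working group’s harder problems are collection, standardization, and privacy. A toy sketch (hypothetical column names) of what disaggregating a single outcome by group looks like:

# Toy disaggregation: break one aggregate metric down by demographic group.
import pandas as pd

records = pd.DataFrame({
    "race_ethnicity":   ["Black", "White", "Latino", "Black", "White", "Asian"],
    "benefit_approved": [0, 1, 1, 1, 1, 0],
})

print("Overall approval rate:", records["benefit_approved"].mean())
print(records.groupby("race_ethnicity")["benefit_approved"].agg(["count", "mean"]))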
Confronting white supremacy in AI
The Biden administration faces a general lack of progress in addressing risks associated with AI deployment and recent events that seem to signal the collapse of AI ethics at Google.
According to a 2020 McKinsey survey , business leaders are addressing 10 major risks associated with artificial intelligence at glacial rates that echo the lack of progress on diverse hiring in tech.
Interrogating the role of white supremacy in the recent insurrection seems an essential step toward safeguarding the future of democracy in the United States. But links to white supremacy have also been found in the AI industry, and the white default in the intelligence industry persists after a year of efforts to interrogate artificial whiteness and anti-Blackness in artificial intelligence.
AI objectives in Biden’s policy goals include “addressing” the ongoing spread of disinformation and hate speech for profit on Facebook and YouTube , as well as current debates over facial recognition.
Another example comes from Clearview AI, a company built on billions of images scraped from the internet without permission. Clearview AI CEO Hoan Ton-That says the company’s tech is currently used by thousands of police departments and, according to Gothamist reporting this week , more than 100 prosecutorial offices in the United States.
Ton-That said this week that as a person of mixed race he’s committed to “non-biased technology,” but Clearview AI has a history of ties with white supremacist groups and has sought controversial government contracts.
Clearview AI usage reportedly rose following the insurrection two weeks ago. Policy analysts with a history of sponsoring legislation to regulate AI on human rights grounds warned VentureBeat earlier this month that use of facial recognition to find white supremacists involved with the insurrection could lead to the proliferation of technology that negatively impacts Black people.
Healing wounds and making history
In his inauguration speech Wednesday, Biden said “the U.S. will lead not only by the example of our power but by the power of our example.” We are now beginning to see what that example might look like.
The Biden administration may oversee increased government use of complex AI models. According to a study released roughly a year ago by Stanford and New York University, only 15% of AI used by federal agencies is considered highly sophisticated.
The administration will also take part in upcoming talks about lethal autonomous weapons, a subject European politicians addressed this week.
The final recommendations of the National Security Commission on Artificial Intelligence, a group appointed by Congress whose commissioners include Big Tech executives, are due out later this year.
There’s also the need to, as one researcher put it, introduce legal intervention to provide redress and more definitively answer the question of who is held responsible when AI hurts people.
The ceremony in Washington, D.C. this week was not just notable for upholding the tradition of a peaceful transfer of power. Harris was the first woman in U.S. history to be sworn in as vice president. Hours later, she swore in Jon Ossoff (D-GA), the youngest senator in generations; Raphael Warnock (D-GA), the first Black man elected to the U.S. Senate by voters from a southern state; and Alex Padilla (D-CA), the first Latinx person to represent the state of California in the U.S. Senate.
The ceremony reinforced Biden’s commitment to a multiracial democracy where everyone is treated equally and a reestablishment of the rule of law. Part of keeping that promise — and, as Biden said, leading by example — will be addressing ways algorithmic decision-making systems and machine learning can harm people.
The desire for a more equitable and just society is also evident in Biden’s decision to decorate the Oval Office with busts of icons like Cesar Chavez, Eleanor Roosevelt, Martin Luther King Jr., and Rosa Parks. His respect for science is represented by the inclusion of a moon rock collected by NASA and his decision to elevate the role of Presidential Science Advisor to a cabinet-level position.
The Biden administration’s handling of AI and its impact on society won’t just have the potential to affect how businesses, governments, and law enforcement adopt and use the technology in the United States. It will also determine whether the country has the moral credibility to lead others. This includes condemning the way China treats Muslim minority groups , an ongoing situation the outgoing and incoming presidential administrations have both called a genocide.
For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.
Thanks for reading,
Khari Johnson
Senior AI Staff Writer
"
|
1,627 | 2,021 |
"AI Weekly: Techno-utopianism in the workplace and the threat of excessive automation | VentureBeat"
|
"https://venturebeat.com/ai/ai-weekly-techno-utopianism-in-the-workplace-and-the-threat-of-excessive-automation"
|
AI Weekly: Techno-utopianism in the workplace and the threat of excessive automation
Every so often, VentureBeat writes a story about something that needs to go away. A few years back, my colleague Blair Hanley Frank argued that AI systems like Einstein, Sensei, and Watson must go because corporations tend to overpromise results for their products and services. I’ve taken runs at charlatan AI and white supremacy.
This week, a series of events at the intersection of the workplace and AI lent support to the argument that techno-utopianism has no place in the modern world. Among the warning signs in headlines was a widely circulated piece by a Financial Times journalist who said she was wrong to be optimistic about robots.
The reporter describes how she used to be a techno-optimist but in the course of her reporting found that robots can crunch people into their system and force them to work at a robot’s pace. In the article, she cites the Center for Investigative Reporting’s analysis of internal Amazon records that found instances of human injury were higher in Amazon facilities with robots than in facilities without robots.
“Dehumanization and intensification of work is not inevitable,” wrote the journalist, who’s quite literally named Sarah O’Connor. Fill in your choice of Terminator joke here.
Also this week: The BBC quoted HireVue CEO Kevin Parker as saying AI is more impartial than a human interviewer.
After facing opposition on multiple fronts, HireVue announced last month it would no longer use facial analysis in its AI-powered video interview analysis of job candidates. Microsoft Teams got similar tech this week to recognize and highlight who’s enjoying a video call.
External auditors have examined the AI used by HireVue and hiring software company Pymetrics, which refers to its AI as “entirely bias free,” but the processes seem to have raised more questions than they’ve answered.
And VentureBeat published an article about a research paper with a warning: Companies like Google and OpenAI have a matter of months to confront negative societal consequences of the large language models they release before they perpetuate stereotypes, replace jobs, or are used to spread disinformation.
What’s important to understand about that paper, written by researchers at OpenAI and Stanford, is that before criticism of large language models became widespread, research and dataset audits found major flaws in large computer vision datasets that were over a decade old, like ImageNet and 80 Million Tiny Images.
An analysis of face datasets dating back four decades also found ethically questionable practices.
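Audits of language models often start with much simpler probes than the formal studies cited above. As an illustrative sketch (not the methodology of the OpenAI and Stanford paper), a few lines with the Hugging Face transformers library can surface the kind of gendered associations a masked language model has absorbed:

# Simple probe of a masked language model for gendered completions; illustrative, not a full audit.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
prompts = [
    "The doctor said that [MASK] would arrive soon.",
    "The nurse said that [MASK] would arrive soon.",
]
for prompt in prompts:
    top = fill(prompt, top_k=3)
    print(prompt, "->", [(p["token_str"], round(p["score"], 3)) for p in top])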
A day after that article was published, OpenAI cofounder Greg Brockman tweeted what looked like an endorsement of a 90-hour work week.
Run the math on that. If you slept seven hours a night, you would have about four hours a day to do anything that is not work — like exercise, eating, resting, or spending time with your family.
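Spelled out (a trivial back-of-the-envelope calculation, not anything Brockman published):

# The arithmetic behind the "about four hours" figure.
work_hours_per_week = 90
sleep_hours_per_day = 7

work_per_day = work_hours_per_week / 7               # roughly 12.9 hours
hours_left = 24 - work_per_day - sleep_hours_per_day
print(f"Hours per day for everything else: {hours_left:.1f}")  # roughly 4.1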
Agreed: https://t.co/1rZc1Dhgma pic.twitter.com/vESf1DrBn4 — Greg Brockman (@gdb) February 10, 2021
An end to techno-utopianism doesn’t have to mean the death of optimistic views about ways technology can improve human lives. There are still plenty of people who believe that indoor farming can change lives for the better or that machine learning can accelerate efforts to address climate change.
Google AI ethics co-lead Margaret Mitchell recently made a case for AI design that keeps the bigger picture in mind. In an email sent to company leaders before she was placed under investigation , she said consideration of ethics and inclusion is part of long-term thinking for long-term beneficial outcomes.
“The idea is that, to define AI research now , we must look to where we want to be in the future, working backwards from ideal futures to this moment, right now, in order to figure out what to work on today,” Mitchell said. “When you can ground your research thinking in both foresight and an understanding of society, then the research questions to currently focus on fall out from there.” With that kind of long-term thinking in mind, Google’s Ethical AI team and Google DeepMind researchers have produced a framework for carrying out internal algorithm audits , questioned the wisdom of scale when addressing societal issues, and called for a culture change in the machine learning community.
Google researchers have also advocated rebuilding the AI industry according to principles of anticolonialism and queer AI and evaluating fairness using sociology and critical race theory.
And ethical AI researchers recently asserted that algorithmic fairness cannot simply be transferred from Western nations to non-Western nations or those in the Global South, like India.
The death of techno-utopia could entail creators of AI systems recognizing that they may need to work with the communities their technology impacts and do more than simply abide by the scant regulations currently in place. This could benefit tech companies as well as the general public. As Parity CEO Rumman Chowdhury told VentureBeat in a recent story about what algorithmic auditing startups need to succeed , unethical behavior can have reputation and financial costs that stretch beyond any legal ramifications.
Computer scientists trying to find a path to 'Ethical AI' but refusing to learn anything about white supremacy, heteropatriarchy, capitalism, ablism, or settler colonialism pic.twitter.com/vPf0aa4gLa — Sasha 'JoinMastodon.org' Costanza-Chock (@schock) February 8, 2021
The lack of comprehensive regulation may be why some national governments and groups like Data & Society and the OECD are building algorithmic assessment tools to diagnose risk levels for AI systems.
Numerous reports and surveys have found automation on the rise during the pandemic , and events of the past week remind me of the work of MIT professor and economist Daron Acemoglu, whose research has found one robot can replace 3.3 human jobs.
In testimony before Congress last fall about the role AI will play in the economic recovery in the United States, Acemoglu warned the committee about the dangers of excessive automation. A 2018 National Bureau of Economic Research (NBER) paper coauthored by Acemoglu says automation can create new jobs and tasks, as it has done in the past, but says excessive automation is capable of constraining labor market growth and has potentially acted as a drag on productivity growth for decades.
“AI is a broad technological platform with great promise. It can be used for helping human productivity and creating new human tasks, but it could exacerbate the same trends if we use it just for automation,” he told the House Budget committee. “Excessive automation is not an inexorable development. It is a result of choices, and we can make different choices.” To avoid excessive automation, in that 2018 NBER paper Acemoglu and his coauthor, Boston University research fellow Pascual Restrepo, call for reforms of the U.S. tax code, because it currently favors capital over human labor. They also call for new or strengthened institutions or policy to ensure shared prosperity, writing, “If we do not find a way of creating shared prosperity from the productivity gains generated by AI, there is a danger that the political reaction to these new technologies may slow down or even completely stop their adoption and development.” This week’s events involve complexities like robots and humans working together and language models with billions of parameters, but they all seem to raise a simple question: “What is intelligence?” To me, working 90 hours a week is not intelligent. Neither is perpetuating bias or stereotypes with language models or failing to consider the impact of excessive automation. True intelligence takes into account long-term costs and consequences, historical and social context, and, as Sarah O’Connor put it, makes sure “the robots work for us, and not the other way around.” For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.
Thanks for reading,
Khari Johnson
Senior AI Staff Writer
"
|
1,628 | 2,021 |
"AI Weekly: Algorithms, accountability, and regulating Big Tech | VentureBeat"
|
"https://venturebeat.com/ai/ai-weekly-algorithms-accountability-and-regulating-big-tech"
|
AI Weekly: Algorithms, accountability, and regulating Big Tech
This week, Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai, and Twitter CEO Jack Dorsey went back to Congress, the first hearing with Big Tech executives since the January 6 insurrection led by white supremacists that directly threatened the lives of lawmakers. The main topic of discussion was the role social media plays in the spread of extremism and disinformation.
The end of liability protections granted by Section 230 of the Communications Decency Act (CDA) , disinformation, and how tech can harm the mental health of children were discussed, but artificial intelligence took center stage. The word “algorithm” alone was used more than 50 times.
Whereas previous hearings involved more exploratory questions and took on a feeling of Geek Squad tech repair meets policy, in this hearing lawmakers asked questions based on evidence and seemed to treat tech CEOs like hostile witnesses.
Representatives repeatedly cited a May 2020 Wall Street Journal article about an internal Facebook study that found the majority of people who join extremist groups do so because Facebook’s recommendation algorithm suggested those groups to them. A recent MIT Tech Review article about Facebook focusing its bias detection work on appeasing conservative lawmakers rather than on reducing disinformation also came up, as lawmakers repeatedly asserted that self-regulation was no longer an option. Throughout virtually all of the more than five-hour hearing, there was a tone of unvarnished repulsion and disdain for exploitative business models and for the willingness to sell addictive algorithms to children.
“Big Tech is essentially handing our children a lit cigarette and hoping they stay addicted for life,” Rep. Bill Johnson (R-OH) said.
In his comparison of Big Tech companies to Big Tobacco — a parallel drawn at Facebook and in a recent AI research paper — Johnson quoted then-Rep. Henry Waxman (D-CA), who stated in 1994 that Big Tobacco had been “exempt from standards of responsibility and accountability that apply to all other American corporations.” Some congresspeople suggested laws to require tech companies to publicly report diversity data at all levels of a company and to prevent targeted ads that push misinformation to marginalized communities, including veterans.
Rep. Debbie Dingell (D-MI) suggested a law that would establish an independent organization of researchers and computer scientists to identify misinformation before it goes viral.
Pointing to YouTube’s recommendation algorithm and its known propensity to radicalize people , Reps. Anna Eshoo (D-CA) and Tom Malinowski (D-NJ) introduced the Protecting Americans from Dangerous Algorithms Act back in October to amend Section 230 and allow courts to examine the role of algorithmic amplification that leads to violence.
Next to Section 230 reform, one of the most popular solutions lawmakers proposed was a law requiring tech companies to perform civil rights audits or algorithm audits for performance.
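What a bare-bones “algorithm audit for performance” actually involves is worth making concrete. A hedged sketch (hypothetical data; false-positive rate is just one of many metrics an auditor would compare) of checking whether a model’s errors fall evenly across groups:

# Minimal performance audit: compare false-positive rates across groups (hypothetical data).
import pandas as pd

results = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "label":     [0,    0,   1,   1,   0,   0,   1,   1],
    "predicted": [0,    1,   1,   1,   1,   1,   1,   0],
})

def false_positive_rate(frame):
    negatives = frame[frame["label"] == 0]
    return (negatives["predicted"] == 1).mean()

fpr_by_group = results.groupby("group")[["label", "predicted"]].apply(false_positive_rate)
print(fpr_by_group)  # large gaps between groups are what an auditor would flag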
It might be cathartic seeing tech CEOs whose attitudes are described by lawmakers as smug and arrogant get their come-uppances for inaction on systemic issues that threaten human lives and democracy because they’d rather make more money. But after the bombast and bipartisan recognition of how AI can harm people on display Thursday, the pressure is on Washington, not Silicon Valley.
I mean, of course Zuckerberg or Pichai will still need to answer for it when the next white supremacist terrorist action happens and it’s again drawn directly back to a Facebook group or YouTube indoctrination, but to date, lawmakers have no record of passing sweeping legislation to regulate the use of algorithms.
Bipartisan agreement for regulation of facial recognition and data privacy has also not yet paid off with comprehensive legislation.
Mentions of artificial intelligence and machine learning in Congress are at an all-time high.
And in recent weeks, a national panel of industry experts has urged AI policy action to protect the national security interests of the United States, and Google employees have implored Congress to pass stronger laws to protect people who come forward to reveal ways AI is being used to harm people.
The details of any proposed legislation will reveal just how serious lawmakers are about bringing accountability to those who make the algorithms. For example, diversity reporting requirements should include breakdowns of specific teams working with AI at Big Tech companies. Facebook and Google release diversity reports today, but those reports do not break down AI team diversity.
Testing and agreed-upon standards are table stakes in industries where products and services can harm people. You can’t break ground on a construction project without an environmental impact report, and you can’t sell people medicine without going through the Food and Drug Administration, so you probably shouldn’t be able to freely deploy AI that reaches billions of people that’s discriminatory or peddles extremism for profit.
Of course, accountability mechanisms meant to increase public trust can fail. Remember Bell, the California city that regularly underwent financial audits but still turned out to be corrupt ? And algorithm audits don’t always assess performance.
Even if researchers document a propensity to do harm, like analysis of Amazon’s Rekognition or YouTube radicalization showed in 2019, that doesn’t mean that AI won’t be used in production today.
Regulation of some kind is coming, but the unanswered question is whether that legislation will go beyond the solutions tech CEOs endorse. Zuckerberg voiced support for federal privacy legislation, just as Microsoft has done in fights with state legislatures attempting to pass data privacy laws. Zuckerberg also expressed some backing for algorithm auditing as an “important area of study”; however, Facebook does not perform systematic audits of its algorithms today , even though that’s recommended by a civil rights audit of Facebook completed last summer.
Last week, the Carr Center at Harvard University published an analysis of the human rights impact assessments (HRIAs) Facebook performed regarding its product and presence in Myanmar following a genocide in that country. That analysis found that a third-party HRIA largely omits mention of the Rohingya and fails to assess if algorithms played a role.
“What is the link between the algorithm and genocide? That’s the crux of it. The U.N. report claims there is a relationship,” coauthor Mark Latonero told VentureBeat. “They said essentially Facebook contributed to the environment where hateful speech was normalized and amplified in society.” The Carr report states that any policy demanding human rights impact assessments should be wary of such reports from the companies, since they tend to engage in ethics washing and to “hide behind a veneer of human rights due diligence and accountability.” To prevent this, researchers suggest performing analysis throughout the lifecycle of AI products and services, and attest that to center the impact of AI requires viewing algorithms as sociotechnical systems deserving of evaluation by social and computer scientists. This is in line with a previous research that insists AI be looked at like a bureaucracy , as well as AI researchers working with critical race theory.
“Determining whether or not an AI system contributed to a human rights harm is not obvious to those without the appropriate expertise and methodologies,” the Carr report reads. “Furthermore, without additional technical expertise, those conducting HRIAs would not be able to recommend potential changes to AI products and algorithmic processes themselves in order to mitigate existing and future harms.” As evidenced by the fact that multiple members of Congress talked about the perseverance of evil in Big Tech this week, policymakers seem aware AI can harm people, from spreading disinformation and hate for profit to endangering children, democracy, and economic competition. But if we all agree that Big Tech is in fact a threat to children, competitive business practices, and democracy, and Democrats and Republicans still fail to take sufficient action, in time it could be lawmakers who are labeled untrustworthy.
For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark The Machine.
Thanks for reading,
Khari Johnson
Senior AI Staff Writer
"
|
1,629 | 2,019 |
"Mavenlink raises $48 million for its project management service | VentureBeat"
|
"https://venturebeat.com/2019/04/11/mavenlink-raises-48-million-for-its-project-management-service"
|
Mavenlink raises $48 million for its project management service
Big bucks continue to flow into startups that are building and selling productivity and management services. Enterprise collaboration startup Mavenlink , which offers a cloud-based start-to-finish project management service, announced today that it has raised $48 million.
The series E round for the Irvine-based startup was led by existing investors Carrick Capital Partners and Goldman Sachs Growth Equity. In total, Mavenlink has raised $111.5 million in its 11-year journey , the startup’s cofounder and CEO Ray Grainger told VentureBeat.
Through its eponymous service, Mavenlink offers a range of features to help companies manage their projects, especially when engaging with outside clients and contractors. These tools allow a team to track time and finances, share files with one another, and pool their resources. Mavenlink’s offerings also include integration with popular third-party services such as Google Apps.
The startup, which has 2,500 clients (including big names such as Salesforce, Qualtrics, Vizient, WPP, Genpact, and IPG) and 84,000 paying users, offers its service in multiple tiers at a recurring monthly cost. It also maintains a free plan with limited functionalities for small businesses.
The startup, which has expanded to European and APAC regions in recent years, will invest the fresh capital in building new customer programs, Grainger told VentureBeat. “We’re working on new features, solutions, and thinking in key areas where the fast-changing, fast-growing services market can most benefit from innovation. Areas like resource management, services supply chain, machine learning and AI, mobile, and extensibility.”
Talks of the town
Today’s announcement underscores investors’ growing interest in startups that are building management and productivity applications to replace the archaic ways many companies engage internally. San Francisco-based Asana, which offers a work management platform, raised north of $100 million last year.
Monday.com, another startup with similar offerings, raised $50 million last July.
Atlassian acquired Trello for a whopping $425 million in 2017. Slack, which offers a team communications service and is valued at more than $8 billion, filed for an IPO earlier this year.
But growing these businesses, especially as they compete with some of the biggest technology giants such as Google and Microsoft , is a capital-taxing job. Grainger declined to share how much revenue Mavenlink clocked last year, but said the startup does see profitability in the future.
“An IPO of the company is always a consideration,” Grainger said. “We’re focused today on the success of our clients as we pursue a vision of industry innovation, and believe that’s what we need to do for Mavenlink to sustain market leadership. We expect this round to sustain our high growth rate for the foreseeable future.”
"
|
1,630 | 2,018 |
"Asana raises $50 million at $1.5 billion valuation, expands into Japan and Australia | VentureBeat"
|
"https://venturebeat.com/2018/11/29/asana-raises-50-million-at-1-5-billion-valuation-expands-into-japan-and-australia"
|
Asana raises $50 million at $1.5 billion valuation, expands into Japan and Australia
Asana , the business productivity platform cofounded by Facebook co-creator Dustin Moskovitz, has raised $50 million in a series E round of funding led by Generation Investment Management, a London-based investment firm cofounded by Al Gore. Participants in the round included 8VC, Benchmark Capital, Founders Fund, Lead Edge Capital, and World Innovation Lab.
With its $75 million cash injection back in January , the San Francisco-based company has now raised a total of $125 million in 2018 — giving it a $1.5 billion valuation, up from $900 million at its last round.
Task force
Asana is a team-based task-management app founded by Moskovitz and software engineer Justin Rosenstein, who is credited with developing such early Google products as Gmail chat and who helped develop the original Facebook “Like” button. The duo left Facebook in 2008 to kickstart Asana, though the product didn’t launch to the public for another four years. Asana has managed to lure some notable financial backers, including Mark Zuckerberg, Sean Parker, and Peter Thiel.
For context, the task-management and team collaboration market is seemingly ripe for investment. Fellow San Francisco-based startup Airtable recently raised a $100 million round , while Monday.com — formerly known as Dapulse — closed a $50 million round.
Asana has now raised around $210 million in total since its inception, and with another $50 million in the bank it plans to double down on international and enterprise growth.
“As the work management imperative becomes increasingly global, we’re focused on supporting our expanding international team and customer base to meet the growing demand to democratize project management and improve team coordination,” Moskovitz said. “We’re more focused than ever on our vision to enable organizations to align their missions with clarity of plan, purpose, and responsibility so they can focus on the work that matters most.”
Going global
Though Asana has always been available to any company around the world, the company kicked off its international aspirations last year when it elected to expand beyond English for the first time, starting with French and German and followed by Spanish and Portuguese.
According to an Asana spokesperson, 50 percent of its new revenue comes from outside the U.S., which is why it’s continuing to invest in new markets. Asana recently started rolling out in Japanese , and today the company revealed it is now officially launching in Japan with fully localized support, including hiring a local team in Tokyo next year.
Additionally, Asana revealed it is now touching down in Australia, where it will open its first Asana office in Sydney later this month — covering sales and marketing.
In related news, Asana revealed plans to open its first datacenter outside the U.S., with expectations of another in Frankfurt, Germany sometime in early 2019.
"
|
1,631 | 2,018 |
"Monday.com raises $50 million to build the "next generation" team collaboration tools | VentureBeat"
|
"https://venturebeat.com/2018/07/11/monday-com-raises-50-million-to-build-the-next-generation-team-collaboration-tools"
|
Monday.com raises $50 million to build the “next generation” team collaboration tools
Monday.com , the team and project management tool formerly known as Dapulse, has raised $50 million in a series C round of funding led by Stripes Group, with participation from Insight Venture Partners and Entrée Capital.
Founded out of Tel Aviv in 2012, Monday.com is an all-in-one team communication, planning, and management tool that lets you create and assign tasks, plan workloads, make comments, notify teammates, share files, and visualize everything that everyone is currently working on.
Above: Monday.com: Visual timeline
Alongside its funding news today, Monday.com also lifted the lid on a handful of new tools and features, such as Monday Stories, which is basically a community of Monday.com users where use cases and best practices are shared across businesses. And there’s something called “board views,” which serves as a new way to visualize projects and processes.
Above: Monday.com: Board view
Monday.com isn’t alone in trying to enhance workers’ productivity through team planning tools. Asana, which was cocreated by Facebook cofounder Dustin Moskovitz, recently raised $75 million before going on to launch a new timeline tool that helps companies visualize project tasks, similar to what Monday.com offers.
Elsewhere, Atlassian snapped up competing platform Trello last year for $425 million.
What’s in a name?
At its previous $25 million series B raise back in early 2017, the company was still called Dapulse, but later in the year it rebranded to avoid the distractions that its original branding apparently caused. In a self-deprecating video announcing the name change, employees read out tweets posted by the public that questioned the meaning of the company’s name. “Dapulse, really? Wat, we rappers now?” is one such hilarious message.
The company has previously raised more than $34 million in outside funding, and with another $50 million in the bank it will expedite plans to build the “new generation of workplace collaboration tools,” according to a statement.
“We are determined to bring the quality and ease of use typically reserved for consumer products to the enterprise,” said Monday.com cofounder and CEO Roy Mann. “To effectively change how teams work, they must actually love the tools they use, and that is our goal in building Monday.com.” Monday.com said it tripled its revenues over the past year, reaching into the “tens of millions of dollars,” while also tripling its customer base to 35,000 companies, with big-name clients including Carlsberg, Discovery Channel, McDonald’s, and WeWork.
"
|
1,632 | 2,022 |
"Why the ethical use of data and user privacy concerns matter | VentureBeat"
|
"https://venturebeat.com/2022/02/26/why-the-ethical-use-of-data-and-user-privacy-concerns-matter"
|
Why the ethical use of data and user privacy concerns matter
This article was contributed by Charlie Fletcher.
Data and its uses pervade the digital economy. From online data mining to AI/ML-enhanced analytics, the range of data sources and tools available on the web is boundless. For every digital user accessing applications, however, there are just as many privacy concerns.
Cybercrime has risen exponentially in recent years, leaving online user information more vulnerable than ever. Facing these risks, organizations of every size and purpose must commit to ethical uses of data as they better secure their information systems.
This process starts with understanding the many privacy concerns that affect users as they interact with digital platforms. From fraud to data selling, users fear the exploitation of their information for purposes outside of their own best interests. Understand the privacy concerns inherent in data collection, then work through the practical approaches to ethical data use outlined below. Doing so isn’t just good business sense; it’s a moral obligation to consumers.
User privacy concerns The first step in using data ethically is addressing the privacy concerns inherent in data collection and use. Applying the wrong data privacy strategy can cost an organization billions in fees and damages, even as it works to strengthen cybersecurity and close privacy gaps. Meanwhile, attempted cyberattacks continue to rise.
To address consumer concerns, businesses must be prepared to combat the biggest challenges involved in data privacy. These challenges include: Embedding data privacy — To best protect user data, identifying factors have to be hidden from the beginning. This requires incorporating privacy as an embedded aspect of data collection, not just an afterthought (a minimal sketch of this idea follows the list below).
Accommodating a range of devices securely — These days remote work and bring-your-own-device (BYOD) policies add layers of network security concerns to the average data collection process. To remain secure, data has to travel through various devices and access points while retaining privacy standards.
Protecting a constantly growing range of data — With big data changing the ways we explore and uncover information, scaling protections to match this growth is difficult. Doing so requires a culture of data responsibility, including policies for minimizing data storage and deleting excess or used information.
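To make the "embedded privacy" item above concrete, here is a minimal sketch of pseudonymizing direct identifiers at collection time. It assumes records arrive as Python dictionaries; the field names and the salt handling are illustrative, not a reference implementation.

```python
import hashlib
import os

# Illustrative field names; real schemas will differ.
DIRECT_IDENTIFIERS = {"email", "phone", "full_name"}

def pseudonymize(record: dict, salt: bytes) -> dict:
    """Replace direct identifiers with salted hashes at collection time,
    so raw PII never reaches downstream storage or analytics."""
    clean = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            digest = hashlib.sha256(salt + str(value).encode("utf-8")).hexdigest()
            clean[f"{key}_hash"] = digest  # keep a linkable token, not the raw value
        else:
            clean[key] = value
    return clean

if __name__ == "__main__":
    salt = os.urandom(16)  # in practice, manage the salt/secret in a vault
    raw = {"email": "jane@example.com", "plan": "pro", "country": "DE"}
    print(pseudonymize(raw, salt))
```

The point of the design is that raw identifiers never reach downstream storage, which narrows both the breach surface and the compliance scope.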
These are just a few of the many privacy concerns that come with implementing data for any business process. However, the scope of your data privacy concerns can also be influenced by the regulations that exist in your market.
For example, the European Union maintains the General Data Protection Regulation (GDPR) guidelines that enforce principles of transparency and data security on any information collected within the EU. Additionally, a range of other guidelines may apply if you operate in areas like China or California, where additional data collection and privacy standards are emerging.
A failure to protect consumer data leads to all kinds of risks for consumers and companies alike. From compliance failure fees to a damaged reputation, the cost of poorly-managed data is typically too great for businesses to bear. Instead, organizations should adopt a commitment to ethical data use.
Using data ethically An unethical approach to data has contributed to some of the worst accounting scandals in human history. Take WorldCom, for instance. This organization manipulated financial data on income statements and balance sheets to make their company look much better to investors. Through data manipulation, WorldCom ended up costing these investors billions while the company racked up nearly $4 billion in accounting fraud.
Scandals like these damage the reputation of every institution that collects and applies data. Despite the distrust they create, however, data can be used ethically. By nature, data supports all kinds of efficiency and quality benefits for virtually any operation, because data represents the facts.
By structuring these raw facts into comprehensive software and silos for data management, researchers are better prepared to make improvements to products, services, financial models, and more.
Ethics, then, is the baseline for integrating these improvements. An ethical approach to data use can be defined as one that intends to improve value to consumers without putting them at greater risk. Such an approach complies with privacy regulations while constantly striving for improvements in an increasingly dangerous digital environment. You too can apply data ethically by striving to incorporate ethics principles in your use of information.
Across the data economy, experts have reached a broad consensus on the ethical principles that should guide data-driven decision-making. These principles are: Empathy — Data ultimately involves and affects human beings. By focusing on the human being at the center of every data transaction, analysts can make more ethical decisions when it comes to applying that data.
Data control — Our data is an extension of ourselves. In turn, organizations should prioritize user ownership and control of their own data. The user decides what they’re comfortable with, and organizations should support that.
Transparency — Everyone has encountered Terms of Service (ToS) agreements too lengthy and jargon-filled for the average user to understand. An ethical approach to data management makes it clear to the user what data is being collected and why.
Accountability — An organization is responsible for maintaining the security of the information it collects. This means a consistent, cutting-edge security process must be maintained if data is to be utilized.
Equality — Raw data may seem neutral, but the ways it is gathered, labeled, and applied can encode bias. Evaluate your process to ensure that it doesn’t reflect prejudice of any kind, conscious or unconscious.
By considering each instance of data application through the lens of these ethical principles, you can better address every privacy concern that comes with data collection. After all, businesses in the modern economy need the customer trust that comes from a secure data management system. Use these tips and tools to make your use of data more ethical.
Charlie Fletcher is a freelance writer passionate about workplace equity, and whose published works cover sociology, technology, business, education, health, and more.
"
|
1,633 | 2,021 |
"Taking the world by simulation: The rise of synthetic data in AI | VentureBeat"
|
"https://venturebeat.com/2021/12/21/taking-the-world-by-simulation-the-rise-of-synthetic-data-in-ai"
|
Would you trust AI that has been trained on synthetic data, as opposed to real-world data? You may not know it, but you probably already do — and that’s fine, according to the findings of a newly released survey.
The scarcity of high-quality, domain-specific datasets for testing and training AI applications has left teams scrambling for alternatives. Most in-house approaches require teams to collect, compile, and annotate their own DIY data — further compounding the potential for biases, inadequate edge-case performance (i.e. poor generalization), and privacy violations.
However, a saving grace appears to already be at hand: advances in synthetic data.
This computer-generated, realistic data intrinsically offers solutions to practically every item on the list of mission-critical problems teams currently face.
That’s the gist of the introduction to “Synthetic Data: Key to Production-Ready AI in 2022.” The survey’s findings are based on responses from people working in the computer vision industry. However, the findings of the survey are of broader interest. First, because there is a broad spectrum of markets that are dependent upon computer vision, including extended reality, robotics, smart vehicles, and manufacturing. And second, because the approach of generating synthetic data for AI applications could be generalized beyond computer vision.
Lack of data kills AI projects Datagen, a company that specializes in simulated synthetic data, recently commissioned Wakefield Research to conduct an online survey of 300 computer vision professionals to better understand how they obtain and use AI/ML training data for computer vision systems and applications, and how those choices affect their projects.
The reason people turn to synthetic data for AI applications is clear. Training machine learning models requires high-quality data, which is not easy to come by. That seems like a universally shared experience.
Ninety-nine percent of survey respondents reported having had an ML project completely canceled due to insufficient training data, and 100% of respondents reported experiencing project delays as a result of insufficient training data.
What is less clear is how synthetic data can help. Gil Elbaz, Datagen CTO and cofounder, can relate to that. When he first started using synthetic data back in 2015, during his graduate studies at the Technion – Israel Institute of Technology, his focus was on computer vision and 3D data using deep learning.
Elbaz was surprised to see synthetic data working: “It seemed like a hack, like something that shouldn’t work but works anyway. It was very, very counter-intuitive,” he said.
Having seen that in practice, however, Elbaz and his cofounder Ofir Chakon felt that there was an opportunity there. In computer vision, like in other AI application areas, data has to be annotated to be used to train machine learning algorithms. That is a very labor-intensive, bias- and error-prone process.
“You go out, capture pictures of people and things at large scale, and then send it to manual annotation companies. This is not scalable, and it doesn’t make sense. We focused on how to solve this problem with a technological approach that will scale to the needs of this growing industry,” Elbaz said.
Datagen started operating in garage mode, and generating data through simulation. By simulating the real world, they were able to create data to train AI to understand the real world. Convincing people that this works was an uphill battle, but today Elbaz feels vindicated.
According to the survey findings, 96% of teams report using synthetic data in some proportion for training computer vision models. Interestingly, 81% say they use synthetic data in proportions equal to or greater than that of manually collected data.
Synthetic data, Elbaz noted, can mean a lot of things. Datagen’s focus is on so-called simulated synthetic data. This is a subset of synthetic data focused on 3D simulations of the real world. Virtual images captured within that 3D simulation are used to create visual data that’s fully labeled, which can then be used to train models.
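As a toy illustration of why simulated data comes pre-labeled, the sketch below stands in for a renderer with a function that draws a single object into an image and records its bounding box at the same time. It shows only the principle; it is not Datagen's pipeline, and the "renderer," sizes, and class name are invented for the example.

```python
import numpy as np
import random

def render_scene(width=128, height=128):
    """Toy stand-in for a 3D renderer: draws one bright rectangle (the 'object')
    on a dark background. Because we place the object ourselves, the ground-truth
    bounding box comes for free -- no manual annotation step."""
    image = np.zeros((height, width), dtype=np.uint8)
    w, h = random.randint(20, 50), random.randint(20, 50)
    x = random.randint(0, width - w)
    y = random.randint(0, height - h)
    image[y:y + h, x:x + w] = 255
    label = {"bbox": (x, y, w, h), "class": "object"}
    return image, label

# Generate a small, perfectly labeled training set.
dataset = [render_scene() for _ in range(1000)]
print(dataset[0][1])  # e.g. {'bbox': (12, 40, 33, 27), 'class': 'object'}
```

Because the generator places the object itself, the annotation step that is slow and error-prone with real photos simply disappears.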
Simulated synthetic data to the rescue The reason this works in practice is twofold, Elbaz said. The first is that AI really is data-centric.
“Let’s say we have a neural network to detect a dog in an image, for instance. So it takes in 100GB of dog images. It then outputs a very specific output. It outputs a bounding box where the dog is in the image. It’s like a function that maps the image to a specific bounding box,” he said.
“The neural networks themselves only weigh a few megabytes, and they’re actually compressing hundreds of gigabytes of visual information and extracting from it only what’s needed. And so if you look at it like that, then the neural networks themselves are the less interesting part. The interesting part is actually the data.” So the question is, how do we create data that represents the real world in the best way? One option is generating synthetic data with techniques like GANs.
This is one way of going about it, but it’s very hard to create new information by just training an algorithm with a certain data set and then using that data to create more data, according to Elbaz. It doesn’t work because there are certain bounds of the information that you’re representing.
What Datagen is doing — and what companies like Tesla are doing too — is creating a simulation with a focus on understanding humans and environments. Instead of collecting videos of people doing things, they’re collecting information that’s disentangled from the real world and is of high quality. It’s an elaborate process that includes collecting high-quality scans and motion capture data from the real world.
Then the company scans objects and models procedural environments, creating decoupled pieces of information from the real world. The magic is connecting it at scale and providing it in a controllable, simple fashion to the user. Elbaz described the process as a combination of directorial aspects and simulating aspects of the real world dynamics via models and environments such as game engines.
It’s an elaborate process, but apparently, it works. And it’s especially valuable for edge cases hard to come by otherwise, such as extreme scenarios in autonomous driving, for example. Being able to get data for those edge cases is very important.
The million-dollar question, however, is whether generating synthetic data could be generalized beyond computer vision. There is not a single AI application domain that is not data-hungry and would not benefit from additional, high-quality data representative of the real world.
In addressing this question, Elbaz referred to unstructured data and structured data separately. Unstructured data, like images or audio signals, can be simulated for the most part. Text, which is considered semi-structured data, and structured data such as tabular data or medical records — that’s a different thing. But there, too, Elbaz noted, we see a lot of innovation.
Many startups are focusing on tabular data, mostly around privacy, because using tabular data raises privacy concerns. This is why we see work on simulating data from an existing pool of data: the goal is not to expand the amount of information but to create a privacy-compliance layer on top of the existing data.
Synthetic data can be shared with data scientists around the world so that they can start training models and creating insights, without actually accessing the underlying real-world data. Elbaz believes that this practice will become more widespread, for example in scenarios like training personal assistants, because it removes the risk of using personally identifiable data.
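A very rough sketch of the idea behind privacy-oriented synthetic tabular data follows: fit a simple statistical model to the real table and sample new rows from it. This is a deliberately crude stand-in (a single multivariate Gaussian over numeric columns); production generators model marginal distributions, categorical fields, and formal privacy guarantees far more carefully, and the column names here are invented.

```python
import numpy as np
import pandas as pd

def fit_and_sample(df: pd.DataFrame, n_rows: int, seed: int = 0) -> pd.DataFrame:
    """Crude sketch of synthetic tabular data: model numeric columns as a single
    multivariate Gaussian and sample new rows. Real generators handle categorical
    columns, marginal shapes and privacy guarantees far more carefully."""
    rng = np.random.default_rng(seed)
    values = df.to_numpy(dtype=float)
    mean = values.mean(axis=0)
    cov = np.cov(values, rowvar=False)
    synthetic = rng.multivariate_normal(mean, cov, size=n_rows)
    return pd.DataFrame(synthetic, columns=df.columns)

# Illustrative "real" table of shopper behavior.
real = pd.DataFrame({
    "basket_value": np.random.exponential(60, 500),
    "items": np.random.poisson(3, 500).astype(float),
})
fake = fit_and_sample(real, 500)
print(fake.describe())  # similar aggregate shape, no real customer rows
```

The synthetic table preserves enough aggregate structure for experimentation while containing no actual customer rows.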
Addressing bias and privacy Another interesting side effect of using synthetic data that Elbaz identified was removing bias and achieving higher annotation quality. In manually annotated data, bias creeps in, whether it’s due to different views among annotators or the inability to effectively annotate ambiguous data. In synthetic data generated via simulation, this is not an issue, as the data comes out perfectly and consistently pre-annotated.
In addition to computer vision, Datagen aims to expand this approach to audio, as the guiding principles are similar. Besides surrogate synthetic data for privacy, and video and audio data that can be generated via simulation, is there a chance we can ever see synthetic data used in scenarios such as ecommerce? Elbaz believes this could be a very interesting use case, one that an entire company could be created around. Both tabular data and unstructured behavioral data would have to be combined — things like how consumers are moving the mouse and what they’re doing on the screen. But there is an enormous amount of shopper behavior information, and it should be possible to simulate interactions on ecommerce sites.
This could be beneficial for the product people optimizing ecommerce sites, and it could also be used to train models to predict things. In that scenario, one would need to proceed with caution, as the ecommerce use case more closely resembles the GAN generated data approach, so it’s closer to structured synthetic data than unstructured.
“I think that you’re not going to be creating new information. What you can do is make sure that there’s a privacy compliant version of the Black Friday data, for instance. The goal there would be for the data to represent the real-world data in the best way possible, without ruining the privacy of the customers. And then you can delete the real data at a certain point. So you would have a replacement for the real data, without having to track customers in a borderline ethical way,” Elbaz said.
The bottom line is that while synthetic data can be very useful in certain scenarios and is seeing increased adoption, its limitations should also be clear.
"
|
1,634 | 2,021 |
"Machine learning's rise, applications, and challenges | VentureBeat"
|
"https://venturebeat.com/2021/06/21/machine-learnings-rise-applications-and-challenges"
|
The terms “artificial intelligence” and “machine learning” are often used interchangeably, but there’s an important difference between the two. AI is an umbrella term for a range of techniques that allow computers to learn and act like humans. Put another way, AI is the computer being smart. Machine learning, however, accounts for how the computer becomes smart.
But there’s a reason the two are often conflated: The vast majority of AI today is based on machine learning. Enterprises across sectors are prioritizing it for various use cases across their organizations, and the subfield tops AI funding globally by a significant margin. In the first quarter of 2019 alone, a whopping $28.5 billion was allocated to machine learning research.
Overall, the machine learning market is expected to grow from around $1 billion in 2016 to $8.81 billion by 2022. When VentureBeat collected thoughts from the top minds across the field , they had a variety of predictions to share. But one takeaway was that machine learning is continuing to shape business and society at large.
Rise of machine learning While AI is ubiquitous today, there were times when the whole field was thought to be a dud. After initial advancements and a lot of hype in the mid-late 1950s and 1960s, breakthroughs stalled and expectations went unmet. There wasn’t enough computing power to bring the potential to life, and running such systems cost exorbitant amounts of money. This caused both interest and funding to dry up in what was dubbed the “AI winter.” The pursuit later picked up again in the 1980s, thanks to a boost in research funds and expansion of the algorithmic toolkit. But it didn’t last, and there was yet another decade-long AI winter.
Then two major changes occurred that directly enabled AI as we know it today. Artificial intelligence efforts shifted from rule-based systems to machine learning techniques that could use data to learn without being externally programmed. And at the same time, the World Wide Web became ubiquitous in the homes (and then hands) of millions (and eventually billions) of people around the world. This created the explosion of data and data sharing on which machine learning relies.
How machine learning works Machine learning enables a computer to “think” without being externally programmed. Instead of programming it by hand to accomplish specific tasks, as is the case with traditional computers, machine learning allows you to provide data and describe what you want the program to do.
The computer trains itself with that data, and then uses algorithms to carry out your desired task. It also collects more data as it goes, getting “smarter” over time. A key part of how this all works is data labeling. If you want a program to sort photos of ice cream and pepperoni pizza, for example, you first need to label some of the photos to give the algorithm an idea of what ice cream and pepperoni pizza each look like.
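As a minimal sketch of this labeled-data workflow, the example below trains a standard classifier on scikit-learn's built-in handwritten-digit dataset, standing in for the hypothetical food photos; the dataset and model choice are illustrative only.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Labeled examples: each image of a handwritten digit comes with its digit label,
# analogous to tagging photos as "ice cream" or "pepperoni pizza" by hand.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)            # the model "trains itself" on the labels
print(model.score(X_test, y_test))     # accuracy on images it has never seen
```

The pattern is the same regardless of domain: labeled examples in, a trained model out, with accuracy measured on examples the model has never seen.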
The need for labeled examples and hand-crafted features also marks a key difference between classical machine learning and a popular subset within the field, called deep learning.
Deep learning relies on neural networks, which are inspired by the human brain in both structure and name, and largely removes the need for hand-engineered features (supervised deep learning still needs labels, and typically a significantly larger set of photos). The computer puts the photos through several layers of processing — which make up the neural network — to distinguish the ice cream from the pepperoni pizza one step at a time. Earlier layers look at basic properties like lines or edges between light and dark parts of the images, while subsequent layers identify more complex features like shapes or even faces.
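The layered processing described above can be sketched in a few lines; the example below is a minimal, illustrative convolutional network (not any particular production architecture), and the input size and class count are assumptions.

```python
import torch
from torch import nn

# Minimal sketch of stacked layers: early convolutions respond to edges and
# simple textures, later layers combine them into higher-level shapes before
# a final layer picks between the two classes (e.g., ice cream vs. pizza).
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # assumes 64x64 RGB inputs and two classes
)

fake_batch = torch.randn(4, 3, 64, 64)   # stand-in for a batch of photos
print(model(fake_batch).shape)           # torch.Size([4, 2])
```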
Applications Machine learning and its subsets are useful for a wide range of problems, tasks, and applications. There’s computer vision , which allows computers to “see” and make sense of images and videos. Additionally, natural language processing (NLP) is a rising part of machine learning, which allows computers to extract the meaning of unstructured text. There’s also voice and speech recognition, which powers services like Amazon’s Alexa and Apple’s Siri and introduced many consumers to AI for the first time.
Across industries, enterprises are using machine learning in their products as well as internally within their organizations. Machine learning can simplify, streamline, and enhance supply chain operations , for example. It’s also widely used for business analytics, security, sales, and marketing. Machine learning has even been used to help fight COVID-19.
Facebook leans on machine learning to take down harmful content.
Google uses it to improve search. And American Express recently tapped NLP for its customer service chatbots and to run a predictive search capability inside its app. The list goes on and on.
Limitations and challenges While machine learning holds promise and is already benefiting enterprises around the globe, there are challenges and issues associated with the field. For example, machine learning is useful for recognizing patterns, but it doesn’t perform well when it comes to generalizing knowledge. For users, there’s also the issue of “algorithm fatigue.” Some of the issues related to machine learning have significant consequences that are already playing out today. The lack of explainability and interpretability — known as the “black box problem” — is one. Machine learning models create their own behaviors and decisions in ways that even their creators can’t understand. This makes it difficult to fix errors and ensure the information a model puts out is accurate and fair. When people noticed Apple’s algorithm for credit cards was offering women significantly smaller lines of credit than men, for example, the company couldn’t explain why and didn’t know how to fix the issue.
This is related to the most significant issue plaguing the field: data and algorithmic bias. Since the technology’s inception, machine learning models have been routinely and primarily built on data that was collected and labeled in biased ways , sometimes for specifically biased purposes. It’s been found that algorithms are often biased against women , Black people , and other ethnic groups.
Researchers at Google’s DeepMind, one of the world’s top AI labs, warned the technology poses a threat to individuals who identify as queer.
This issue is widespread and widely known, but there is resistance to taking the significant action many in the field are urging is necessary. Google itself fired the co-leads of its ethical AI team, Timnit Gebru and Margaret Mitchell , in what thousands of the company’s employees called a “retaliatory firing,” after Gebru refused to rescind research about the risks of deploying large language models. And in a survey of researchers, policy leaders, and activists, the majority said they worry the evolution of AI by 2030 will continue to be primarily focused on optimizing profits and social control, at the expense of ethics. Legislation regarding AI — especially immediately and obviously harmful uses, like facial recognition for policing — is being debated and adopted across the country.
These deliberations will likely continue. And the changing data privacy laws will soon affect data collection, and thus machine learning, as well.
"
|
1,635 | 2,022 |
"Fortinet focuses on ESG, unveils new energy-efficient firewall | VentureBeat"
|
"https://venturebeat.com/security/fortinet-esg-goals"
|
Traditional data centers may be the bedrock of enterprise insights, but they’re also extremely power intensive.
Research shows that traditional data centers have an energy demand of 32.61 terawatt-hours (TWh).
With such high-energy consumption, enterprises are under pressure to reduce the amount of power needed to support their data centers and on-premise environments to achieve their environmental social governance (ESG) goals.
In an attempt to help organizations do so, Fortinet today announced the release of FortiGate 1000F, a next-generation firewall (NGFW) that it claims consumes 83% fewer watts per Gbps of firewall throughput than competing products.
For enterprises, the product has the potential to offer a more energy-efficient option for securing ingoing and outgoing traffic.
Working to realize ESG goals More enterprises are recognizing the importance of the ESG journey, even in the realm of cybersecurity, where Gartner anticipates that 30% of large organizations will have publicly shared ESG goals focused on cybersecurity by 2026.
“With CIOs under growing pressure to reduce the carbon footprint of their IT infrastructure, FortiGate 1000F continues Fortinet’s legacy of delivering NGFWs that provide the scale, performance, and power savings to meet the security requirements of today’s enterprise data centers,” said Nirav Shah, VP of products at Fortinet.
Shah highlights that while this approach is energy efficient, it also has the potential to safeguard users, devices, and applications from unwanted cyberthreats.
“With a single FortiOS operating system covering networking and security, networking and security teams can deliver coordinated, automated threat protection while maintaining a superior user experience,” Shah said. “This is all accomplished in an eco-friendly way by consuming 80% less power than competitive solutions to help achieve corporate sustainability goals.” The new firewalls offer seven times higher IPsec VPN performance, and seven times higher SSL throughput than the industry average, to ensure that security teams have more visibility over encrypted traffic without adversely impacting user performance.
A brief look at the next-generation firewall market Fortinet falls within the next-generation firewall market, which researchers valued at $2,570.49 million in 2017 and project to reach $6,719.56 million by 2025.
The organization is competing against an array of established vendors, including Palo Alto Networks , which has a line of NGFWs that use machine learning to detect unknown zero-day attacks in real time. Palo Alto Networks also recently reported $1.6 billion in fiscal fourth-quarter revenue.
Another key competitor is Sophos , which offers its own next-generation firewall with transport layer security inspection, deep packet inspection, deep learning and sandboxing to protect against cyberthreats.
Thoma Bravo acquired Sophos for $3.8 billion in 2020.
At this stage, the key difference between FortiGate 1000F and other solutions is energy efficiency.
“FortiGate 1000F requires 83% fewer watts per Gbps of firewall throughput and 86% fewer watts per Gbps of IPsec VPN throughput,” Shah said. “FortiGate 1000F also requires less cooling than other solutions, generating only 15% of the BTU/h per Gbps of firewall throughput compared to competitive firewalls.”
"
|
1,636 | 2,022 |
"What is natural language processing (NLP)? Definition, examples, techniques and applications | VentureBeat"
|
"https://venturebeat.com/2022/06/15/what-is-natural-language-processing"
|
Table of contents: How are the algorithms designed? How do AI scientists build models? How is natural language processing evolving? What are the established players creating? What are the startups doing? Is there anything that natural language processing can’t do?
Teaching computers to make sense of human language has long been a goal of computer scientists. The natural language that people use when speaking to each other is complex and deeply dependent upon context. While humans may instinctively understand that different words are spoken at home, at work, at a school, at a store or in a religious building, none of these differences are apparent to a computer algorithm.
What is natural language processing (NLP)? Over the decades of research, artificial intelligence (AI) scientists created algorithms that begin to achieve some level of understanding. While the machines may not master some of the nuances and multiple layers of meaning that are common, they can grasp enough of the salient points to be practically useful.
Algorithms that fall under the label “ natural language processing (NLP) ” are deployed to roles in industry and homes. They’re now reliable enough to be a regular part of customer service, maintenance and domestic roles. Devices from companies like Google or Amazon routinely listen in and answer questions when addressed with the right trigger word.
How are the algorithms designed? The mathematical approaches are a mixture of rigid, rule-based structure and flexible probability. The structural approaches build models of phrases and sentences that are similar to the diagrams that are sometimes used to teach grammar to school-aged children. They follow many of the same rules found in textbooks, and they can reliably analyze the structure of large blocks of text.
These structural approaches start to fail when words have multiple meanings. The canonical example is the use of the word “flies” in the sentence: “Time flies like an arrow, but fruit flies like bananas.” AI scientists have found that statistical approaches can reliably distinguish between the different meanings. The word “flies” might form a compound noun 95% of the time that it follows the word “fruit.” How do AI scientists build models? AI scientists have analyzed large blocks of text that are easy to find on the internet to create elaborate statistical models that can understand how context shifts meanings. A book on farming, for instance, would be much more likely to use “flies” as a noun, while a text on airplanes would likely use it as a verb. A book on crop dusting, however, would be a challenge.
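A minimal illustration of this statistical disambiguation is a part-of-speech tagger, which assigns tags based on learned probabilities rather than fixed grammar rules. The sketch below uses NLTK, assuming its standard tokenizer and tagger resources can be downloaded (resource names vary slightly between NLTK versions); the tagger's output is a statistical best guess and may not be perfect on this deliberately tricky sentence.

```python
import nltk

# Download the standard tokenizer and tagger models (no-op if already present).
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

sentence = "Time flies like an arrow, but fruit flies like bananas."
tokens = nltk.word_tokenize(sentence)
print(nltk.pos_tag(tokens))
# The tags show the tagger's statistical guess for each "flies": a verb after
# "Time", part of the noun compound after "fruit" -- a distinction that rigid
# grammar rules alone struggle to make.
```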
Machine learning algorithms can build complex models and detect patterns that may escape human detection. It is now common, for instance, to use the complex statistics about word choices captured in these models to identify the author.
Some natural language processing algorithms focus on understanding spoken words captured by a microphone. These speech recognition algorithms also rely upon similar mixtures of statistics and grammar rules to make sense of the stream of phonemes.
[Related: How NLP is overcoming the document bottleneck in digital threads ] How is natural language processing evolving? Now that algorithms can provide useful assistance and demonstrate basic competency, AI scientists are concentrating on improving understanding and adding more ability to tackle sentences with greater complexity. Some of this insight comes from creating more complex collections of rules and subrules to better capture human grammar and diction. Lately, though, the emphasis is on using machine learning algorithms on large datasets to capture more statistical details on how words might be used.
AI scientists hope that bigger datasets culled from digitized books, articles and comments can yield more in-depth insights. For instance, Microsoft and Nvidia recently announced that they created Megatron-Turing NLG 530B , an immense natural language model that has 530 billion parameters arranged in 105 layers.
The training set includes a mixture of documents gathered from the open internet and some real news that’s been curated to exclude common misinformation and fake news. After deduplication and cleaning, they built a training set with 270 billion tokens made up of words and phrases.
The goal is now to improve reading comprehension, word sense disambiguation and inference. Beginning to display what humans call “common sense” is improving as the models capture more basic details about the world.
In many ways, the models and human language are beginning to co-evolve and even converge. As humans use more natural language products, they begin to intuitively predict what the AI may or may not understand and choose the best words. The AIs can adjust, and the language shifts.
What are the established players creating? Google offers an elaborate suite of APIs for decoding websites, spoken words and printed documents. Some tools are built to translate spoken or printed words into digital form, and others focus on finding some understanding of the digitized text. One cloud API, for instance, will perform optical character recognition, while another will convert speech to text.
Some, like the basic natural language API, are general tools with plenty of room for experimentation while others are narrowly focused on common tasks like form processing or medical knowledge.
The Document AI tool, for instance, is available in versions customized for the banking industry or the procurement team.
Amazon also offers a wide range of APIs as cloud services for finding salient information in text files, spoken word or scanned documents. The core is Comprehend , a tool that will identify important phrases, people and sentiment in text files. One version, Comprehend Medical , is focused on understanding medical information in doctors’ notes, clinical trial reports and other medical records. They also offer pre-trained machine learning models for translation and transcription.
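As a brief sketch of what calling such a service looks like, the snippet below sends a sentence to Amazon Comprehend through boto3. It assumes AWS credentials are configured; the region and sample text are illustrative.

```python
import boto3

# Assumes AWS credentials are configured; region and text are illustrative.
comprehend = boto3.client("comprehend", region_name="us-east-1")

text = "The delivery was late, but the support team resolved it quickly."
sentiment = comprehend.detect_sentiment(Text=text, LanguageCode="en")
phrases = comprehend.detect_key_phrases(Text=text, LanguageCode="en")

print(sentiment["Sentiment"])                       # e.g. MIXED or POSITIVE
print([p["Text"] for p in phrases["KeyPhrases"]])   # salient phrases in the text
```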
For some common use cases like running a chatbot for customer service, AWS offers tools like Lex to simplify adding an AI-based chatbot to a company’s web presence.
Microsoft also offers a wide range of tools as part of Azure Cognitive Services for making sense of all forms of language. Their Language Studio begins with basic models and lets you train new versions to be deployed with their Bot Framework.
Some APIs like Azure Cognitive Search integrate these models with other functions to simplify website curation. Some tools are more applied, such as Content Moderator for detecting inappropriate language or Personalizer for finding good recommendations.
What are the startups doing? Many of the startups are applying natural language processing to concrete problems with obvious revenue streams.
Grammarly, for instance, makes a tool that proofreads text documents to flag grammatical problems caused by issues like verb tense. The free version detects basic errors, while the $12 premium subscription offers access to more sophisticated error checking, like identifying plagiarism or helping users adopt a more confident and polite tone. The company is more than 11 years old, and it is integrated with most online environments where text might be edited.
SoundHound offers a “voice AI platform” that other manufacturers can add so their product might respond to voice commands triggered by a “wake word.” It offers “speech-to-meaning” abilities that parse the requests into data structures for integration with other software routines.
Shield wants to support managers that must police the text inside their office spaces. Their “communications compliance” software deploys models built with multiple languages for “behavioral communications surveillance” to spot infractions like insider trading or harassment.
Nori Health intends to help sick people manage chronic conditions with chatbots trained to counsel them to behave in the best way to mitigate the disease. They’re beginning with “digital therapies” for inflammatory conditions like Crohn’s disease and colitis.
Smartling is adapting natural language algorithms to do a better job automating translation, so companies can do a better job delivering software to people who speak different languages. They provide a managed pipeline to simplify the process of creating multilingual documentation and sales literature at a large, multinational scale.
Is there anything that natural language processing can’t do? The standard algorithms are often successful at answering basic questions but they rely heavily on connecting keywords with stock answers. Users of tools like Apple’s Siri or Amazon’s Alexa quickly learn which types of sentences will register correctly. They often fail, though, to grasp nuances or detect when a word is used with a secondary or tertiary meaning. Basic sentence structures can work, but not more elaborate or ornate ones with subordinate phrases.
The search engines have become adept at predicting or understanding whether the user wants a product, a definition, or a pointer into a document. This classification, though, is largely probabilistic, and the algorithms fail the user when the request doesn’t follow the standard statistical pattern.
Some algorithms are tackling the reverse problem of turning computerized information into human-readable language. Some common news jobs, like reporting on the movement of the stock market or describing the outcome of a game, can be largely automated. The algorithms can even deploy some nuance that can be useful, especially in areas with great statistical depth like baseball. The algorithms can search a box score, find unusual patterns like a no-hitter, and add them to the article. The texts, though, tend to have a mechanical tone, and readers quickly begin to anticipate the word choices that fall into predictable patterns and form clichés.
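A bare-bones sketch of this kind of template-driven generation is below; the team names, stats, and template wording are invented, and real systems use far richer templates and data feeds.

```python
def recap(box_score: dict) -> str:
    """Turn structured game stats into a sentence using a fixed template --
    the mechanical tone described above is a direct result of this approach."""
    line = "{home} beat {away} {home_runs}-{away_runs}; {star} drove in {rbi} runs."
    if box_score["away_hits"] == 0:
        line += " It was a no-hitter."   # flag the unusual pattern found in the box score
    return line.format(**box_score)

# Invented game data for illustration only.
game = {
    "home": "Springfield", "away": "Shelbyville",
    "home_runs": 4, "away_runs": 0,
    "away_hits": 0, "star": "J. Doe", "rbi": 3,
}
print(recap(game))
```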
[Read more: Data and AI are keys to digital transformation – how can you ensure their integrity? ]
"
|
1,637 | 2,022 |
"What is artificial intelligence (AI)? Definition, types, ethics, examples and future trends | VentureBeat"
|
"https://venturebeat.com/2022/06/15/what-is-artificial-intelligence"
|
Table of contents: What is artificial intelligence? How are the biggest companies approaching AI? How are startups approaching AI? Challenges to enterprise artificial intelligence
The words “artificial intelligence” (AI) have been used to describe the workings of computers for decades, but the precise meaning shifted with time. Today, AI describes efforts to teach computers to imitate a human’s ability to solve problems and make connections based on insight, understanding and intuition.
What is artificial intelligence? Artificial intelligence usually encompasses the growing body of cutting-edge work that aims to train computers to accurately imitate or, in some cases, exceed the capabilities of humans.
Older algorithms, when they grow commonplace, tend to be pushed out of the tent. For instance, transcribing human voices into words was once an active area of research for scientists exploring artificial intelligence. Now it is a common feature embedded in phones, cars and appliances and it isn’t described with the term as often.
Today, AI is often applied to several areas of research: Machine vision : Helping computers understand the position of objects in the world through lights and cameras.
Machine learning (ML) : The general problem of teaching computers about the world with a training set of examples.
Natural language processing (NLP) : Making sense of knowledge encoded in human languages.
Robotics : Designing machines that can work with some degree of independence to assist with tasks, especially work that humans can’t do because it may be repetitive, strenuous or dangerous.
A brief history of artificial intelligence There is a wide range of practical applicability to artificial intelligence work. Some chores are well-understood and the algorithms for solving them are already well-developed and rendered in software. They may be far from perfect, but the application is well-defined. Finding the best route for a trip, for instance, is now widely available via navigation applications in cars and on smartphones.
Other areas are more philosophical. Science fiction authors have been writing about computers developing human-like attitudes and emotions for decades, and some AI researchers have been exploring this possibility. While machines are increasingly able to work autonomously, general questions of sentience, awareness or self-awareness remain open and without a definite answer.
[Related: ‘Sentient’ artificial intelligence: Have we reached peak AI hype? ] AI researchers often speak of a hierarchy of capability and awareness. The directed tasks at the bottom are often called “narrow AI” or “reactive AI”. These algorithms can solve well-defined problems, sometimes without much direction from humans. Many of the applied AI packages fall into this category.
The notion of “general AI” or “self-directed AI” applies to software that could think like a human and initiate plans outside of a well-defined framework. There are no good examples of this level of AI at this time, although some developers sometimes like to suggest that their tools are beginning to exhibit some of this independence.
Beyond this is the idea of “super AI”, a package that can outperform humans in reasoning and initiative. These are largely discussed hypothetically by advanced researchers and science fiction authors.
In the last decade, many ideas from the AI laboratory have found homes in commercial products. As the AI industry has emerged, many of the leading technology companies have assembled AI products through a mixture of acquisitions and internal development. These products offer a wide range of solutions, and many businesses are experimenting with using them to solve problems for themselves and their customers.
How are the biggest companies approaching AI? Leading companies have invested heavily in AI and developed a wide range of products aimed at both developers and end users. Their product lines are increasingly diverse as the companies experiment with different tiers of solutions to a wide range of applied problems. Some are more polished and aimed at the casual computer user. Others are aimed at other programmers who will integrate the AI into their own software to enhance it. The largest companies all offer dozens of products now and it’s hard to summarize their increasingly varied options.
IBM has long been one of the leaders in AI research. Its AI-based competitor in the TV game show Jeopardy , Watson , helped ignite the recent interest in AI when it beat humans in 2011, demonstrating how adept the software could be at handling more general questions posed in human language.
Since then, IBM has built a broad collection of applied AI algorithms under the Watson brand name that can automate decisions in a wide range of business applications like risk management, compliance, business workflow and devops. These solutions rely upon a mixture of natural language processing and machine learning to create models that can either make production decisions or watch for anomalies. In one case study of its applications, for instance, the IBM Safer Payments product prevented $115 million worth of credit card fraud.
Another example is Microsoft’s AI platform, which offers a wide range of algorithms, both as products and as services available through Azure. The company also targets machine learning and computer vision applications and likes to highlight how its tools search for secrets inside extremely large data sets. The Megatron-Turing Natural Language Generation model (MT-NLG), developed with Nvidia, for instance, has 530 billion parameters to model the nuances of human communication. Microsoft is also working on helping business processes shift from being automated to becoming autonomous by adding more intelligence to handle decision-making. Its autonomous packages are, for instance, being applied to both the narrow problems of keeping assembly lines running smoothly and the wider challenges of navigating drones.
Google developed a strong collection of machine learning and computer vision algorithms that it uses for both internal projects indexing the web while also reselling the services through their cloud platform. It has pioneered some of the most popular open-source machine learning platforms like TensorFlow and also built custom hardware for speeding up training models on large data sets. Google’s Vertex AI product, for instance, automates much of the work of turning a data set into a working model that can then be deployed. The company also offers a number of pretrained models for common tasks like optical character recognition or conversational AI that might be used for an automated customer service agent.
In addition, Amazon uses a collection of AI routines internally in its retail website, while marketing the same backend tools to AWS users. Products like Personalize are optimized for offering customers personalized recommendations on products.
Rekognition offers predeveloped machine vision algorithms for content moderation, facial recognition and text detection and conversion. These algorithms also have a prebuilt collection of models of well-known celebrities, a useful tool for media companies. Developers who want to create and train their own models can also turn to products like SageMaker , which automates much of the workload for business analysts and data scientists.
Facebook also uses artificial intelligence to help manage the endless stream of images and text posts. Algorithms for computer vision classify uploaded images, and text algorithms analyze the words in status updates. While the company maintains a strong research team, it does not actively offer standalone products for others to use. It does share a number of open-source projects like NeuralProphet , a framework for time-series forecasting.
Additionally, Oracle is integrating some of the most popular open-source tools like PyTorch and TensorFlow into its data storage hierarchy to make it easier and faster to turn information stored in Oracle databases into working models. It also offers a collection of prebuilt AI tools with models for tackling common challenges like anomaly detection or natural language processing.
How are startups approaching AI? New AI companies tend to be focused on one particular task, where applied algorithms and a determined focus will produce something transformative. For instance, a wide-reaching current challenge is producing self-driving cars. Waymo , Pony AI , Cruise Automation and Argo are four major startups with significant funding that are building the software and sensor systems that will allow cars to navigate themselves through the streets. The algorithms involve a mixture of machine learning, computer vision, and planning.
Many startups are applying similar algorithms to more limited or predictable domains like warehouse or industrial plants. Companies like Nuro , Bright Machines and Fetch are just some of the many that want to automate warehouses and industrial spaces. Fetch also wants to apply machine vision and planning algorithms to take on repetitive tasks.
A substantial number of startups are also targeting jobs that are either dangerous for humans or impossible for them to do. Hydromea , for instance, is building autonomous underwater drones that can track submerged assets like oil rigs and mining equipment. Another company, Solinus , makes robots for inspecting narrow pipes.
Many startups are also working in purely digital domains, in part because the area is a natural habitat for algorithms: the data is already in digital form. There are dozens of companies, for instance, working to simplify and automate the routine tasks that make up companies' digital workflows. This area, sometimes called robotic process automation (RPA), rarely involves physical robots because it deals with digital paperwork and forms. It is, however, a popular way for companies to integrate basic AI routines into their software stacks. Good RPA platforms often use optical character recognition and natural language processing to make sense of uploaded forms and so lighten the office workload.
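As a rough illustration of the OCR step such a platform might perform, the sketch below pulls text out of a scanned form with the open-source pytesseract wrapper and then extracts one field with a simple pattern; the file name and the invoice-number format are invented assumptions, and production RPA tools layer far more validation and NLP on top.

import re
from PIL import Image
import pytesseract

# Extract raw text from a scanned form (hypothetical file name)
text = pytesseract.image_to_string(Image.open("scanned_invoice.png"))

# Pull out a single field with a simple pattern
match = re.search(r"Invoice\s*#?\s*(\d+)", text)
if match:
    print("Invoice number:", match.group(1))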
Many companies also depend upon open-source software projects with broad participation. Projects like TensorFlow and PyTorch are used throughout research and development organizations in universities and industrial laboratories. Some projects, like DeepDetect , an open-source deep learning server and API, are also spawning companies that offer mixtures of support and services.
There are also hundreds of effective and well-known open-source projects used by AI researchers.
OpenCV, for instance, offers a large collection of computer vision algorithms that can be adapted and integrated with other stacks. It is used frequently in robotics, medical projects, security applications and many other tasks that rely upon understanding the world through a camera image or video.
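As a quick example of how directly such a library can be used, the following sketch runs OpenCV's bundled Haar-cascade face detector on a single image; the input file name is a placeholder assumption.

import cv2

# Load OpenCV's bundled frontal-face Haar cascade
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("camera_frame.jpg")          # hypothetical input frame
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # cascades operate on grayscale images
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Detected {len(faces)} face(s)")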
Challenges to enterprise artificial intelligence There are some areas where AI finds more success than others. Statistical classification using machine learning is often quite accurate, but it is limited by the breadth of the training data. These algorithms tend to fail when they are asked to make decisions in new situations or after the environment has shifted substantially from the training corpus.
Much of the success or failure depends upon how much precision is demanded. AI tends to be more successful when occasional mistakes are tolerable. If users can filter out misclassifications or incorrect responses, AI algorithms are welcomed. Many photo storage sites, for instance, offer to apply facial recognition algorithms to sort photos by who appears in them. The results are good rather than perfect, but users can tolerate the mistakes. The field is largely a statistical game and succeeds when judged on a percentage basis.
A number of the most successful applications don't require especially clever or elaborate algorithms; they depend instead upon large, well-curated datasets that modern tools finally make manageable. Problems that once seemed impossible because of their sheer scope became tractable once large enough teams assembled the data. Navigation and mapping applications like Waze use fairly simple search algorithms to find the best path, but those apps could not succeed without a large, digitized model of the street layouts.
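To make the "simple search" point concrete, here is a minimal sketch of Dijkstra's shortest-path algorithm over a toy road graph; the intersections and travel times are invented for illustration, and real routing engines add live traffic, turn restrictions and hierarchical shortcuts on top.

import heapq

def shortest_path_cost(graph, start, goal):
    # Dijkstra's algorithm: graph maps node -> list of (neighbor, travel_time) pairs
    queue, seen = [(0, start)], set()
    while queue:
        cost, node = heapq.heappop(queue)
        if node == goal:
            return cost
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (cost + weight, neighbor))
    return float("inf")

# Toy street network: intersections A-D with travel times in minutes (illustrative only)
roads = {"A": [("B", 4), ("C", 2)], "B": [("D", 5)], "C": [("B", 1), ("D", 8)], "D": []}
print(shortest_path_cost(roads, "A", "D"))  # prints 8, via A -> C -> B -> D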
Natural language processing is also successful at making generalizations about the sentiment or basic meaning of a sentence, but it is frequently tripped up by neologisms, slang or nuance. As language changes and evolves, the algorithms can adapt, but only with pointed retraining. They also start to fail when the challenges fall outside a large training set.
Robotics and autonomous cars can be quite successful in limited areas or controlled spaces, but they run into trouble when new challenges or unexpected obstacles appear. For them, the political costs of failure can be significant, so developers are necessarily cautious about pushing beyond the envelope of what has been tested.
Indeed, whether an algorithm counts as capable or as a failure often depends upon criteria that are politically determined. If customers are happy enough with the responses, and if the results are predictable enough to be useful, then the algorithms succeed. Once they become taken for granted, they lose the appellation of AI.
If the term is generally applied to topics and goals that are just out of reach, and if AI is always redefined to exclude simple, well-understood solutions, then AI will always be receding toward the technological horizon. It may not be 100% successful at present, but when applied to specific cases it can come tantalizingly close.
[Read more: The quest for explainable AI ]
"
|
1,638 | 2,022 |
"What is a data lake? Definition, benefits, architecture and best practices | VentureBeat"
|
"https://venturebeat.com/2022/03/10/what-is-a-data-lake-definition-benefits-architecture-and-best-practices"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What is a data lake? Definition, benefits, architecture and best practices Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Table of contents What is a data lake? Key benefits of having a data lake Architecture of a data lake: Storage and analysis process Data lake challenges Data lake security: 6 best practices for enterprises in 2022
What is a data lake? A data lake is defined as a centralized and scalable storage repository that holds large volumes of raw big data from multiple sources and systems in its native format.
To understand what a data lake is, think of it as an actual lake: the water is raw data that flows in from multiple sources of data capture and can then flow out to be used for a range of internal and customer-facing purposes. This is much broader than a data warehouse , which is more like a household tank: it stores cleaned water (structured data), but only for the use of one particular house (function) and nothing else.
Data lakes can be executed using in-house-built tools or third-party vendor software and services. According to Markets and Markets, the global data lake software and services market is expected to grow from $7.9 billion in 2019 to $20.1 billion in 2024. A number of vendors are expected to drive this growth, including Databricks, AWS, Dremio, Qubole and MongoDB. Many organizations have even started providing the so-called lakehouse offering, combining the benefits of both data lakes and warehouses through a single product.
Data lakes work on the concept of load first and use later, which means the data stored in the repository doesn’t necessarily have to be used immediately for a specific purpose. It can be dumped as-is and used all together (or in parts) at a later stage as business needs arise. This flexibility, combined with the vast variety and amount of data stored, makes data lakes ideal for data experimentation as well as machine learning and advanced analytics applications within an enterprise.
Key benefits of having a data lake Unlike data warehouses, which only store processed structured data (organized in rows and columns) for some predefined business intelligence/reporting applications, data lakes bring the potential to store everything with no limits. This could be structured data, semi-structured data, or even unstructured data such as images (.jpg) and videos (.mp4).
The benefits of a data lake for enterprises include the following: Expanded data types for storage: Since data lakes bring the capability to store all data types, including those critical to advanced forms of analytics, organizations can leverage them to surface insights that help improve operational efficiency, increase revenue, cut costs and manage risk.
Revenue growth from expanded data analytics: According to an Aberdeen survey , organizations that implemented a data lake outperformed competitors by 9% in organic revenue growth. These companies were able to perform new types of analytics on previously unusable and siloed data — log files, data from click-streams, social media and IoT devices — now centrally stored in the data lake.
Unified data from silos: Data lakes can centralize information from disparate departmental silos, mainframes, and legacy systems, thereby offloading their individual capacity and preventing data duplication while increasing data usability when connected with the larger data structure. This helps formulate a 360-degree customer view for enterprises, which in turn helps improve customer targeting and marketing campaign orchestration. Unified data is also less expensive to store than siloed data.
Omnichannel data orchestration: An organization can implement a data lake to ingest data from across multiple sources, including IoT equipment sensors in factories and warehouses. These sources can be internal and/or customer-facing for a data lake of unified data. Customer-facing data enables marketing, sales and account management teams to orchestrate omni-channel campaigns using the most updated and unified information available for each customer, whereas internal data is used for holistic employee and finance management strategies.
Learn more: What is data orchestration? Architecture of a data lake: Storage and analysis process Data lakes use a flat architecture but can have many layers depending on technical and business requirements. No two data lakes are built exactly alike. However, there are some key zones through which the data generally flows: the ingestion zone, landing zone, processing zone, refined data zone and consumption zone.
1. Data ingestion This component, as the name suggests, connects a data lake to external relational and nonrelational sources — such as social media platforms and wearable devices — and loads raw structured, semi-structured and unstructured data into the platform. Ingestion is performed in batches or in real time, but it must be noted that a user may need different technologies to ingest different types of data.
Currently, all major cloud storage providers offer solutions for low-latency data ingestion. They include Amazon S3, Amazon Glue, Amazon Kinesis, Amazon Athena, Google Dataflow, Google BigQuery, Azure Data Factory, Azure Databricks and Azure Functions.
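As one illustrative example of a batch ingestion step (not a complete pipeline), the sketch below drops a raw file as-is into a landing prefix on S3 with boto3, attaching minimal metadata; the bucket, prefixes and file names are placeholder assumptions.

import uuid
import boto3

s3 = boto3.client("s3")

# Land the raw export untouched under a dated prefix, with a unique id and simple metadata tags
object_key = f"landing/clickstream/2022-03-10/{uuid.uuid4()}.json"
s3.upload_file(
    Filename="clickstream_batch.json",          # hypothetical local export
    Bucket="example-data-lake",                 # hypothetical bucket name
    Key=object_key,
    ExtraArgs={"Metadata": {"source": "web-clickstream", "ingest-mode": "batch"}},
)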
2. Data landing Once the ingestion completes, all the data is stored as-is with metadata tags and unique identifiers in the landing zone. As per Gartner, this is usually the largest zone in a data lake today (in terms of volume) and serves as an always-available repository of detailed source data, which can be used/reused for analytic and operational use cases as and when the need arises. The presence of raw source data also makes this zone an initial playground for data scientists and analysts, who experiment to define the purpose of the data.
3. Data processing When the purpose(s) of the data is known, copies of it move from landing to the processing stage, where refinement, optimization, aggregation and quality standardization take place by imposing schemas. This zone makes the data analysis-worthy for various business use cases and reporting needs.
Notably, data copies are moved into this stage to ensure that the original arrival state of the data is preserved in the landing zone for future use. For instance, if new business questions or use cases arise, the source data could be explored and repurposed in different ways, without the bias of previous optimizations.
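Below is a minimal sketch of what imposing a schema on a copy of the landed data might look like with pandas; the column names and file paths are illustrative assumptions, and real pipelines typically run this step on distributed engines such as Spark.

import pandas as pd

# Read a copy of the raw landing-zone file; the original stays untouched in the landing zone
raw = pd.read_json("landing/clickstream_batch.json", lines=True)

# Impose a schema: keep known columns, enforce types, standardize quality
processed = (
    raw[["user_id", "event", "timestamp"]]
    .dropna(subset=["user_id"])
    .assign(timestamp=lambda df: pd.to_datetime(df["timestamp"], errors="coerce"))
)

# Write the refined copy in a columnar format for the processing zone
processed.to_parquet("processed/clickstream/2022-03-10.parquet", index=False)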
4. Refined data zone When the data is processed, it moves into the refined data zone, where data scientists and analysts set up their own data science and staging zones to serve as sandboxes for specific analytic projects. Here, they control the processing of the data to repurpose raw data into structures and quality states that could enable analysis or feature engineering.
5. Consumption zone The consumption zone is the last stage of general data flow within a data lake architecture. In this layer, the results and business insights from analytic projects are made available to the targeted users, be it a technical decision-maker or a business analyst, through the analytic consumption tools and SQL and non-SQL query capabilities.
Data lake challenges Over the years, cloud data lake and warehousing architectures have helped enterprises scale their data management efforts while lowering costs. However, the current set-up has some challenges, such as: Lack of consistency with warehouses: Companies may often find it difficult to keep their data lake and data warehouse architecture consistent. It is not just a costly affair; teams also need to employ continuous data engineering tactics to ETL/ELT data between the two systems. Each step can introduce failures and unwanted bugs, affecting the overall data quality.
Vendor lock-in: Shifting large volumes of data into a centralized EDW becomes quite challenging for companies, not only because of the time and resources required to execute such a task, but also because this architecture creates a closed loop, causing vendor lock-in.
Data governance: Data in the data lake tends to be stored in a variety of file-based formats, while a data warehouse holds data in database format, and that mismatch adds complexity to data governance and lineage management across the two storage types.
Data copies and associated costs: Keeping data in both data lakes and data warehouses inevitably creates duplicate copies, with the storage costs that go with them. Moreover, commercial warehouses that hold data in proprietary formats increase the cost of migrating that data. A data lakehouse addresses these typical limitations of data lake and data warehouse architectures by combining the best elements of both to deliver significant value for organizations.
Data lake security: 6 best practices for enterprises in 2022 1. Identify data goals In order to prevent your data lake from becoming a data swamp, it is recommended to identify your organization's data goals — the business outcomes — and appoint an internal or external data curator who could assess new sources/datasets and govern what goes into the data lake based on those goals. Clarity on what type of data has to be collected can help an organization dodge the problem of data redundancy, which often skews analytics.
2. Document incoming data All incoming data should be documented as it is ingested into the lake. The documentation usually takes the form of technical metadata and business metadata, although new forms of documentation are also emerging. Without proper documentation, a data lake deteriorates into a data swamp that is difficult to use, govern, optimize and trust, and users fail to discover the data they need.
3. Maintain quick ingestion time The ingestion process should run as quickly as possible. Eliminating prior data improvements and transformations increases ingestion speed, as does adopting new data integration methods for pipelining and orchestration. This helps make the data available as soon as possible after data is created or updated, so that some forms of reporting and analytics can operate on it.
4. Process data in moderation The main goal of a data lake is to provide detailed source data for data exploration, discovery and analytics. If an enterprise processes the ingested data with heavy aggregation, standardization and transformation, then many of the details captured with the original data will get lost, defeating the whole purpose of the data lake. So, an enterprise should make sure to apply data quality remediations in moderation while processing.
5. Focus on subzones Individual data zones in the lake can be organized by creating internal subzones. For instance, a landing zone can have two or more subzones depending on the data source (batch/streaming). Similarly, the data science zone under the refined datasets layer can include subzones for analytics sandboxes, data laboratories, test datasets, learning data and training, while the staging zone for data warehousing may have subzones that map to data structures or subject areas in the target data warehouse (e.g., dimensions, metrics and rows for reporting tables and so on).
6. Prioritize data security Security has to be maintained across all zones of the data lake, from landing to consumption. To ensure this, connect with your vendors and see what they are doing in these four areas: user authentication, user authorization, data-in-motion encryption and data-at-rest encryption. With these elements, an enterprise can keep its data lake actively and securely managed, without the risk of external or internal leaks (due to misconfigured permissions and other factors).
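As one narrow, concrete illustration of the data-at-rest piece, the sketch below enables default server-side encryption on an S3 bucket with boto3; the bucket name is a placeholder, and authentication, authorization and in-transit encryption still need their own controls.

import boto3

s3 = boto3.client("s3")

# Require server-side encryption by default for every new object written to the bucket
s3.put_bucket_encryption(
    Bucket="example-data-lake",  # hypothetical bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)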
"
|
1,639 | 2,019 |
"Coursera raises $103 million to prepare online learners for the 'fourth industrial revolution' | VentureBeat"
|
"https://venturebeat.com/2019/04/25/coursera-raises-103-million-to-prepare-online-learners-for-the-fourth-industrial-revolution"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Coursera raises $103 million to prepare online learners for the ‘fourth industrial revolution’ Share on Facebook Share on X Share on LinkedIn Coursera cofounder Andrew Ng.
Online education giant Coursera has raised $103 million in a series E round of funding led by Seek Group, with participation from Future Fund and NEA.
A spokesperson confirmed to VentureBeat that the company is now valued at well over $1 billion, making Coursera the latest to join the unicorn brigade.
Founded in 2012, Coursera is one of a number of well-funded MOOCs — massive open online courses — to emerge. Coursera partners with universities and other educational institutions to deliver online courses to 40 million students, covering subjects like technology, business, science, and even autonomous cars.
Coursera also has specific product offerings for the corporate market , as well as for governments and nonprofits.
Above: Coursera for Business Coursera had previously raised $210 million, including its $64 million series D two years ago.
With its latest cash injection, it plans to grow its platform globally and further develop its offering in preparation for the “fourth industrial revolution” — often defined by developments across artificial intelligence (AI), robotics, quantum computing, and the internet of things (IoT), among other transformative technologies.
“The fourth industrial revolution, marked by advancements in automation and artificial intelligence, is dramatically reshaping our lives, businesses, and jobs,” noted Coursera CEO Jeff Maggioncalda. “Coursera is at the forefront of preparing individuals, companies, and governments to meet that challenge head-on and turn this disruption into opportunity. The additional funding gives us the resources and flexibility to further expand internationally and to accelerate the development of a learning platform that currently serves 40 million learners, 1,800 businesses, and over 150 top universities.” San Francisco-based Coursera was cofounded by Daphne Koller, a Stanford University computer science professor, and Andrew Ng, a fellow Stanford academic and renowned computer scientist, who created the Google Brain deep learning project before joining Chinese tech titan Baidu as chief scientist. More recently, Ng unveiled the AI Fund , a $175 million incubator that backs small teams looking to solve big problems with machine learning.
Coursera was inspired by Koller and Ng’s experiences of developing online courses at Stanford in 2011. This was similar to another heavily funded MOOC, called Udacity , which was launched around the same time by Sebastian Thrun, the Stanford professor and computer scientist who set up Google’s “moonshot” program, Google X. In 2011, he somehow found the time to launch Udacity, which has gone on to raise $160 million.
The scalability of online courses — anyone can join lessons from anywhere in the world — and the much-discussed impending workforce crisis makes MOOCs appealing to investors. Moreover, while the global AI talent pool may be growing, demand still exceeds supply, according to a recent report from Element AI — people, including established computer scientists, will likely need to retrain to fill these roles.
It’s against this backdrop that online education platforms are expected to grow from a $4 billion industry today to a hefty $21 billion market by 2023.
“This investment reflects our commitment to online education, which is enabling the up-skilling and re-skilling of people and is aligned with our purpose of helping people live fulfilling working lives,” added Seek cofounder and CEO Andrew Bassat. “We have been watching Coursera for many years. They have a great team of people doing terrific work. We are pleased to come on board to partner with them in their next phase of growth.”
"
|
1,640 | 2,019 |
"Automata wants to democratize industrial automation with a $6,600 desktop robotic arm | VentureBeat"
|
"https://venturebeat.com/2019/03/19/automata-wants-to-democratize-industrial-automation-with-a-6600-desktop-robotic-arm"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Automata wants to democratize industrial automation with a $6,600 desktop robotic arm Share on Facebook Share on X Share on LinkedIn Automata: A team of Eva robots Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Despite lingering fears that “robots” are coming to steal our jobs , automation is infiltrating just about every facet of society, from supermarkets to repetitive software-based jobs and even the creative industries.
And one London-based startup wants to capitalize on this growing trend by bringing an affordable desktop-based industrial robot to market.
Founded in 2015 by architects Mostafa ElSayed and Suryansh Chandra, Automata is setting out to help industries automate repetitive physical work, freeing humans to do more fulfilling jobs — more than 90 percent of tasks in manufacturing are still manual, according to Automata.
The company’s first product, a £4,990 ($6,600) robotic arm, dubbed Eva, can be mounted to any flat surface — such as a table or workbench — and configured and controlled via a standard computer browser.
Eva actually soft-launched at the tail end of 2018 when it opened for preorders, but today marks its official launch, and the robotic arm will soon start shipping across Europe and any country that accepts CE certification.
The company is also currently looking at certification requirements to enter the U.S. and Asian markets.
Automata also announced today that it has raised $7.4 million in a series A round of funding led by Belgium-based Hummingbird Ventures, with participation from Firstminute Capital, Hardware Club, LocalGlobe, ABB, and Entrepreneur First. The round actually closed last March, but the founders kept it under wraps to avoid some of the skepticism leveled at the robotics industry — in short, they wanted to wait until they had an actual product to show before revealing the funding.
Above: Automata: Eva As for real-world use cases, well, according to Automata the main limit is really your imagination. Eva could be paired with a computer vision-powered camera to inspect and test for faulty products or used to pack sandwiches in factories or to test touchscreen phones.
For more complex tasks, companies can put multiple Eva robots to work.
As demonstrated here, three robotic arms and a computer vision camera work in unison to identify the correct letters on wooden blocks, (eventually) putting them in order to spell out the word “Automata.” Above: Automata: Eva (teamwork) In terms of sourcing all the various components for Eva, Automata is working with U.K. robotics manufacturer Tharsus. However, Automata discovered early on that using existing gearbox providers would make the robot too expensive — so it effectively designed and developed its own patented gearbox from scratch to keep costs down.
“We grew a company around a gearbox,” ElSayed explained at a briefing in London earlier today.
Choreography Moreover, the company developed its own software system, called Choreograph, which allows companies to configure all the sequences they need their robotic arm to complete. It also integrates with third-party programs across design and 3D printing — this is very much about enabling manufacturers to manage their own small-scale production runs, which may be useful for customizing designs or catering to fluctuating seasonal demands.
Automating industry Robotics is now a major part of many industries, with the likes of Amazon leaning heavily on automated warehousing systems to cut costs and expedite deliveries. Internally, Amazon uses Kiva Systems, a company it bought for $775 million in 2012.
U.K grocery giant Ocado is dabbling in all manner of next-gen systems for picking and packing technology, including robotic arms that can handle delicate items such as fruit and computer vision to determine the best “grasp” for more awkward items.
Not surprisingly, a growing number of startups are building robotics for all manner of use cases, such as delivering food and other goods.
However, building physical robots is an expensive endeavor, which is why some companies have been developing novel ways to help smaller companies join the club — Los Angeles-based InVia Robotics, for example, offers a subscription-based warehouse automation platform that negates the need for substantial up-front investments.
Automata’s proposition is different, of course, but built on similar principles: It’s about making robotics more accessible, with a focus on price, user-friendliness, and genuine usefulness.
“Mostafa and I started Automata to democratize robotics and to ultimately allow anyone to seamlessly use a robot,” explained Chandra. “We are extremely proud to offer Eva at the price point we do. People can visit the Automata website and buy a piece of industrial quality equipment on their credit card — it doesn’t get much more accessible than that.” Prior to its series A round, Automata had raised around $2 million, and with another $7.4 million in the bank it is setting out to grow its existing team of 42, all of whom are based out of the company’s headquarters in London, as well as increasing production of its Eva robots.
“This recent funding allows us to bring Eva to an increased number of businesses across multiple sectors,” Chandra added.
"
|
1,641 | 2,018 |
"InVia Robotics raises $20 million to expand its subscription-based warehouse automation platform | VentureBeat"
|
"https://venturebeat.com/2018/08/01/invia-robotics-raises-20-million-to-expand-its-subscription-based-warehouse-automation-platform"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages InVia Robotics raises $20 million to expand its subscription-based warehouse automation platform Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
InVia Robotics , a startup that provides fulfillment centers with automated robotics technology, has raised $20 million in a series B round of funding led by Point72 Ventures, with participation from Upfront Ventures and Embark Ventures.
Los Angeles-based InVia Robotics is effectively a robotics-as-a-service platform providing ecommerce distribution hubs with next-gen warehouse automation technologies. The InVia Picker, for example, works in collaboration with humans to transfer goods from shelves to dispatch.
Above: InVia’s robotics Automated robotics technology has been infiltrating the retail realm as businesses search for ways to decrease operational costs and optimize productivity. French startup Exotec recently raised $17.7 million after debuting its autonomous Skypod system , which uses mobile robots that can move in three dimensions. Elsewhere, Bossa Nova last month raised $29 million for its store inventory robots , while grocery giant Ocado has showcased new picking and packing robots that use computer vision to figure out the best way to grasp a product.
Put simply, robotics are here to stay, and they are getting smarter.
InVia Robotics' core selling point is the accessibility it affords companies. Rather than requiring huge up-front hardware and software investments from retailers, InVia offers a subscription-based pricing model, enabling companies to get up-to-speed quickly and compete more effectively with warehouse automation stalwarts such as Amazon.
“To compete with behemoths like Amazon, warehouse automation is critical for ecommerce companies; but the overhead cost of purchasing a fleet of robots is often beyond reach,” noted InVia Robotics founder and CEO Lior Elazary.
A few months back, InVia announced it had secured ecommerce company Hollar as a customer, with plans to roll out 100 InVia Picker robots at Hollar’s Los Angeles warehouse.
Prior to now, InVia had raised around $9 million , and it said the additional $20 million in the bank will reinvigorate its efforts to deploy its robotics platform at scale, with plans to double its employee count by the end of 2018.
“Ecommerce industry growth is driving the need for more warehouse automation to fulfill demand, and AI-driven robots can deliver that automation with the flexibility to scale across varied workflows,” added Daniel Gwak, co-head for AI Investments at Point72 Ventures. “Our investment in inVia Robotics reflects our conviction in AI as a key enabler for the supply chain industry.”
"
|
1,642 | 2,017 |
"Fetch Robotics raises $25 million to automate warehouses | VentureBeat"
|
"https://venturebeat.com/2017/12/06/fetch-robotics-raises-25-million-to-help-automate-warehouses"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Fetch Robotics raises $25 million to automate warehouses Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Fetch Robotics , a Silicon Valley startup that’s bringing robots and a supporting cloud-based software system to industrial environments, has raised $25 million in a series B round of funding led by Sway Ventures, with participation from O’Reilly AlphaTech Ventures, Shasta Ventures, and SoftBank’s SB Group US.
Founded in 2014, San Jose-based Fetch Robotics develops autonomous robots that operate among humans in locations such as warehouses, factories, and distribution centers. CEO Melonee Wise previously headed up Unbounded Robotics, which was shuttered a few years ago due to its inability to raise VC money — apparently because of an agreement it had with Willow Garage , the robotics company it was spun out of.
But while both Unbounded and Willow Garage focused on personal robots, Wise has taken a sideways step to cater to the fast-growing on-demand ecommerce industries that require the speed and efficiency of automation.
The company unveiled its first two products , the Fetch and the Freight, back in 2015. These are designed to operate in a modern warehouse and help companies cut costs and save time. Robots, you see, don’t need regular breaks, and they don’t usually call in sick.
Above: Fetch Robotics Fetch Robotics had previously raised $23 million, and with its latest cash injection it plans to meet “accelerating worldwide customer demand” for Fetch Robotics software and robots.
“We are seeing first-hand that the growth in ecommerce and an expanding on-demand economy are contributing to unprecedented labor challenges faced by the $5 trillion global logistics industry,” said Wise. “With labor in short supply, our customers are still able to quickly realize significant, measurable productivity increases by deploying our autonomous mobile robots.” Robotics and automation technologies have seen a major spike in investment of late, particularly in industrial applications. Veo Robotics, for example, recently raised $12 million to help machines and humans collaborate more efficiently, while Bossa Nova raised $17.5 million for retail robots that monitor merchants’ in-store inventory. Elsewhere, Abundant Robotics raised $10 million to commercialize its apple-picking robot.
“The warehouse and logistics automation market is estimated at over $40 billion today, and is poised to double over the next five years,” added Brian Nugent, founding general partner at Sway Ventures.
As a result of Sway Ventures' investment, Nugent will join Fetch Robotics' board of directors.
"
|
1,643 | 2,017 |
"Grocery giant demos robotic arm that can pick and pack delicate items such as fruit | VentureBeat"
|
"https://venturebeat.com/2017/01/31/grocery-giant-demos-robotic-arm-that-can-pick-and-pack-delicate-items-such-as-fruit"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Grocery giant demos robotic arm that can pick and pack delicate items such as fruit Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
If 2016 was the year that robots really began to move from science fiction into the real world, then 2017 is shaping up to be more of the same.
U.K.-based supermarket giant Ocado, one of the world’s biggest online-only grocery retailers, has long embraced technology and automation to streamline its operations.
The company has been working for some time on a humanoid robot called SecondHands , one that understands human speech, has 3D vision, and uses artificial intelligence and machine learning to help the company carry out maintenance work within its premises.
Above: SecondHands While there are many advantages to using automation and fleets of robots in large warehouses, the machines aren’t quite up to human standards in some regards. Using robotic arms to cart heavy boxes is one thing, but delicate goods or odd-shaped items — such as loose fruit and vegetables — are a little trickier, as they can be easily damaged.
With that in mind, Ocado has been working to solve this problem through the SoMa (soft manipulation) project , a Horizon 2020 program for robotics research and innovation funded by the European Union. Through SoMa, the aim is to develop “simple, compliant, yet strong, robust, and easy-to-program manipulation systems” to enable “robust grasping” in dynamic environments.
Today, Ocado is lifting the lid on some of the progress it's made, in partnership with Technische Universität Berlin (TUB), using the university's RBO Hand 2.
They have developed a working robotic arm that can grasp grocery products, including individual apples and oranges, and pack them safely for delivery to customers. The hand is built with a spring-like mechanism, controlled by changing the air pressure, that automatically adjusts the grip based on the product it’s handling. This is important, given that Ocado has around 50,000 distinct items on its shelves, which means it has to be adaptable.
For testing, Ocado set up a production warehouse scenario to establish how the hand would work for Ocado’s specific use-cases.
It is still early days, and Ocado says it plans to carry out more testing throughout 2017, adding more complex scenarios to the mix.
“Robotic picking for grocery is much more complex than general merchandise because of varying form factors, temperatures, shapes and sizes, or gripping techniques,” explained Ocado chief technology officer (CTO) Paul Clarke, to VentureBeat. “Our new warehouses built using the Ocado Smart Platform enable us to have robotic picking and we’ve also automated other extremely repetitive and physically demanding tasks such as the bagging of our crates. We’re on a journey where some robotic picking can be done now for heavier items; SoMa will make robotic picking available at a large scale and deploying it is all about finding the right time.” A number of other players from the technology realm are also building products to help retailers improve operations through automation and robotics.
French startup Exotec is building a fleet of mobile robots to help warehouses prepare orders for delivery. The miniature robots promise to help cut employees’ daily distance covered in giant warehouses from 15km to 4km. Following a number of tests, the first proper robot is expected to launch in the wild shortly.
Then there’s Pittsburgh-based Bossa Nova Robotics , which raised $14 million last year to roll out its robotic technology to retailers.
The company builds robots that allow stores such as supermarkets to analyze their shelves and collect data to optimize inventory.
Above: Bossa Nova While it is fun observing the latest AI-infused robotics at work, and they are arguably designed to make humans’ lives easier, there is no ignoring all the jobs that will likely be lost to machines and automation.
As part of the “fourth industrial revolution,” millions of jobs will be lost over the next three years, according to a World Economic Forum report from last year. The net loss is expected to be as many as five million jobs, though of course a whole new line of jobs will be created due to automation, including jobs in IT and data analysis.
“The Ocado Group is a net employer of more than 11,000 people, none of whom (including myself) would have a job if it hadn’t been for the automation and robotics we’re developing in-house,” added Clarke. “On the other hand, Ocado is an online-only retailer without bricks and mortar stores so our two customer touch points are our drivers and the contact center. Therefore, we have to be very mindful of the importance of human interactions. This is why we’ve taken the approach to enhance our contact center with AI , rather than replace it with chatbots, for example.”
Last week, online education company Coursera launched a new program specifically for governments and nonprofits with a view toward closing the growing skills gap.
“The skills gap can no longer be ignored as a major force driving world events,” explained Coursera CEO Rick Levin. “Millions of people lack the skills needed for new and better jobs, and increasing automation will only widen the gap. Governments and nonprofits focused on workforce development are eager to work with us and our university partners to deliver skills education to populations at unprecedented scale.” This is a problem that will likely blight every industry. But stifling technological progress isn’t part of human nature, regardless of any negative consequences, which is why people will need to tool-up and prepare for the growing digital labor market.
"
|
1,644 | 2,015 |
"Amazon has doubled the number of robots in its warehouses to 30,000 | VentureBeat"
|
"https://venturebeat.com/2015/10/22/amazon-now-has-30000-robots-shipping-you-packages-from-its-warehouses"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Amazon has doubled the number of robots in its warehouses to 30,000 Share on Facebook Share on X Share on LinkedIn Kiva robots at Amazon.
Amazon today provided new information about robots working inside of its facilities. The ecommerce company now has 30,000 Kiva robots working in its fulfillment centers, an executive said during Amazon’s quarterly call with investors today.
That number is up from 15,000 at the end of 2014, said the executive, Phil Hardin, Amazon’s director of investor relations.
“Capital intensity is offset by their density and throughput, so it’s a bit of an investment that has implications for a lot of elements of our cost structure, but we’re happy with Kiva,” Hardin said. “We think it’s a great pairing of our associates with Kiva robots that do some of the hauling of products within the warehouses. It has been a great innovation for us, and we think it makes the warehouse jobs better, and we think it makes our warehouses more effective.” This is all kinds of interesting. It likely helps Amazon save money, because fewer people are on warehouse floors, and that also means a lower risk of injury.
But it also shows Amazon at the cutting edge of automation in the workplace.
Amazon bought Kiva Systems for $775 million in 2012.
"
|
1,645 | 2,022 |
"AI Weekly: As demand for chatbots increases, so does the need for better offerings | VentureBeat"
|
"https://venturebeat.com/2022/04/02/ai-weekly-as-demand-for-chatbots-increases-so-does-the-need-for-better-offerings"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: As demand for chatbots increases, so does the need for better offerings Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The market for engaging chatbots continues to heat up as consumers increasingly head online for purchases, creating a need for enterprises to offer a differentiated customer experience.
But good, differentiated chatbots are hard to build. With the influx of users and heightened demand, companies have had to turn to any number of places to get started with chatbot technologies. And those places can either be brittle or dominated by players that don’t offer much room for customization. For instance, companies might leverage the virtual assistants offered by Google Assistant, Amazon Alexa and Facebook Messenger. They also might turn to the offerings provided by chatbot providers like IBM Watson, Five9 or Gupshup.
Chatbots need to have it all These bots are blossoming in functionality as well as in other essential areas such as business sales, analytics, customer service and overall end-user satisfaction. But the architectures of these offerings can sometimes be limiting. Some of them are one-stop shops that force companies to build the backend and storage within the same system – seen in offerings like Intercom and Drift.
Or there's Zendesk's offering, which is popular among companies that use Zendesk customer support but doesn't have open endpoints to other ticketing systems or live chat services.
After several years of shake-out, one technology approach is becoming increasingly popular. Many companies are leveraging cloud backend services — the natural language understanding (NLU) of Dialogflow on Google Cloud Platform, for example — and then adding feature-rich, modularized frontend tools on top. These tools empower enterprises to integrate, deploy and train AI chat and messaging within their own custom web and mobile experiences.
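For a sense of how thin the glue between a custom frontend and such an NLU backend can be, here is a minimal sketch using the Dialogflow ES Python client; the project ID, session ID and sample utterance are placeholders, error handling is omitted, and credentials are assumed to be configured in the environment.

from google.cloud import dialogflow

def detect_intent(project_id: str, session_id: str, text: str) -> str:
    # Send one user utterance to a Dialogflow agent and return its fulfillment text
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)
    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=text, language_code="en-US")
    )
    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    return response.query_result.fulfillment_text

# Hypothetical IDs for illustration only
print(detect_intent("my-gcp-project", "web-visitor-123", "What payment options do you accept?"))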
One new offering seeking to fill a void here is Botcopy, a cloud-compatible custom user interface for web messaging systems, which has just secured late-seed funding.
“What Botcopy customers tend to have in common is they believe there’s value in controlling and managing their own cloud platform services. We couldn’t agree more,” Botcopy’s Head of Product Alexander Seegers said. The challenge, he explained, is that some of these organizations don’t want to compromise on having a best-in-class chat solution for their websites and mobile apps. “So from the beginning, our focus has been on making that frontend piece easy to create and manage.” Other examples of such offerings include FeathersJS and TalkJS — both of which require coding expertise.
AI chatbots are typically used to guide conversations along the best path. One way they do this is by collecting data and making suggestions to the end-user – a tight machine learning feedback loop that gets better with time. For example, let’s say the chatbot learns that new buyers are most likely to ask about payment options when they are looking at a product page. Once the chatbot has that insight, it can optimize the page by moving payment options, like PayPal, closer to the top. Ultimately, it boils down to lining up users’ preferences with what’s on offer.
The statistics surrounding AI messaging only help to further cement its place in the modern ecosystem. Chatbot adoption is estimated to save the healthcare, retail and banking sectors alone upwards of $11 billion annually.
Other predictions on the value-driven growth of AI-assisted virtual messaging indicate that 50% of all knowledge workers will utilize some form of a chatbot daily.
Modular vs. custom solutions
As the potential for disruption from AI-assisted messaging increases, so, too, do its methods of implementation and deployment across websites, mobile applications and services. Blending the backend AI services of GCP, Azure and AWS with modular frontend components has also led to greater flexibility and innovation from companies other than Facebook, Google, Microsoft and Amazon.
The SaaS market has a rich history of counterbalancing the need for speed to market on one hand with feature-rich customization on the other. Fast-to-market solutions and inexpensive user licenses often compete with the deep knowledge and expertise it takes to retrofit broad, generic components to specific business processes.
Thanks for reading. Please keep tips coming to AI Weekly by messaging us at [email protected]
"
|
1,646 | 2,022 |
"For true digital transformation, invest in game-changing initiatives | VentureBeat"
|
"https://venturebeat.com/2022/03/21/for-true-digital-transformation-invest-in-game-changing-initiatives"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community For true digital transformation, invest in game-changing initiatives Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Digital innovation calls for jumping over many hurdles, but the most challenging of these is to get the organization behind big, groundbreaking investments. If you take a dispassionate assessment of your portfolio of digital projects, what percentage of these investments is dedicated to a laundry list of small, incremental improvements versus truly impactful projects? Many investments billed as digital find a place in the portfolio, but they are typically business-as-usual, keep-the-lights-on or compliance projects. Realizing the full potential of digital innovation requires marshaling an organization-wide effort behind game-changing initiatives that either build new capabilities, solve a business or customer problem, or preferably, both.
Digital investments at this scale create new ways of working to better enable individuals to drive work forward or allow distributed teams to work together seamlessly. These large projects don't simply automate what companies are currently doing. They change the work and ultimately drive significant benefits in efficiency, speed, quality or value. Nearly all bold business improvements today are powered by digital.
Focusing investment on big-impact projects can be challenging
The benefits may be clear, but why do large companies find it difficult to focus investment on big-impact projects? Our experience has highlighted some of the reasons: Diffused decision-making: Business unit heads have profit and loss responsibility and budget control, IT has a dedicated budget for projects, and the head of Digital also has a budget but is typically not the sole decision maker regarding digital investments. The challenge: game-changer investments require the alignment of all these players, who typically have different interests at stake.
Misaligned, risk-averse incentives: Typical business unit leader compensation structures emphasize achieving in-year goals. These positions are sought-after springboards up to higher executive leadership roles. Investing for the future and taking large gambles even on "good" bets like digital are not on the standard strategic game plan for success. Again, spending the necessary time to align across business units and functions often proves challenging.
Lack of visibility: Understanding the portfolio of digital projects is a lot harder than it needs to be. We find that miscategorized investments and low transparency lead to difficulty in understanding the level and focus of digital investment. Without visibility, it's difficult to understand, and from there, to act.
A good example of the challenges in focusing investment on big-impact projects comes from a Fortune 50 company that is a recognized leader in digital innovation in its supply chain. Even for this digital-forward company, when the team sought to understand their digital investments, they weren't clear. Some projects were incorrectly tagged, while confusing language was used to identify game-changer investments. Plus, they had only just moved to a common portfolio management platform. It took serious work to create some visibility and move on changing the digital strategy: roughly 6% of the budget was applied to game-changing investments, while a further 6% was not making the sought-after impact.
This company is now on the right path to address the above challenges, with a clear mandate to change and evolve its organization and culture. What are other companies doing to solve this problem?
Digital transformation investment examples from leading companies
One of the largest hospitals in the U.S. recharged its digital path by bringing full visibility to its investments in digital, uncovering hidden pockets of investments in the process. Once new leadership achieved a clear understanding of the investment portfolio, the company set a goal to increase the amount dedicated to transformational investments from 10% to 30%. They were able to achieve 22%, which was a dramatic increase but still short of the goal.
In another example, a leading CPG company started on its digital journey without a clear strategy, having made initial efforts under the IT umbrella.
Challenges soon arose, especially around securing enough investment to implement large-scale digital projects. Change started when a digital strategy was created on top of the overall business strategy, clearly tying digital priorities to business goals. Once this clear link was established, the discussion shifted to digital enablement as the way to drive the business strategy. Gaining clarity on the importance of digital paved the way for an aligned set of big investments dedicated to impactful digital projects.
Will your organization succeed with focus, or go sideways with diffuse spend?
Here are three questions that will help a company understand whether it is on the path to digital transformation success:
Is bold leadership in place that is willing to advocate for a path and stand behind it?
Is the leadership team aligned to a company-first strategy, or to chasing their own individual goals?
Is there real visibility into digital investments, and can they be held accountable for progress?
We've seen that it appears to be safer and easier for individual executives to spread their digital investments across a broad set of projects that will work with minimal disruption and create only a minor amount of change to manage. However, this strategy almost guarantees that the organization won't make meaningful progress. In fact, this piecemeal approach is not as safe and easy as it might appear at first. Companies that take the "safe" approach and do digital incrementally forfeit the benefits that fundamental digital transformation affords. True change requires the bold leadership, alignment and visibility that we see from the best leaders. That means getting out of the comfort zone.
Jeff Hewitt is a partner in the health practice of global strategy and management consultancy Kearney.
Anshuman Jaiswal is a principal in Kearney’s Digital Transformation practice.
"
|
1,647 | 2,022 |
"Putting AI to work for CX and sales in 2022 | VentureBeat"
|
"https://venturebeat.com/2022/02/24/putting-ai-to-work-for-cx-and-sales-in-2022"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Putting AI to work for CX and sales in 2022 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
This article was contributed by Ramon Icasiano, chief customer officer at Pathlight.
As we enter a new year, it's natural for customer experience (CX), sales, and customer service leaders to ask: What went well? What didn't? And where can we improve? If you're the "futurist" type, you might even ask: should I adopt AI to improve business results next year? The problem with AI today is that it's hard to discern between fact and fiction.
In CX, we see companies using AI to augment human interactions, increase customer satisfaction, and respond to customer service issues faster — with minimal or zero need for a live customer service agent. In sales, managers and reps tout AI as the new way to forecast sales, boost sales engagement to drive bottom-line results, and orchestrate live calls with customers. But, while it sounds forward-looking, you can't throw AI at just any problem.
Take containment — a long-heralded and oft-misused metric — as a prime example of where AI can go wrong. High containment rates are often deemed good because it means that automated systems — think AI chatbots — resolve customer issues without the need for human interaction. This means the company saves on resources.
But keeping customers away from reps is not always good for the customer, nor for add-on sales. If the AI chatbot isn't properly designed or trained it can frustrate callers. Can a well-designed system work in these situations? Yes. But it has to be done very carefully. To be clear, it's not all bad.
Here are some examples where AI is showing promise.
Boosting sales & revenue with AI
Companies have realized AI has tangible and practical applications in sales and, if used correctly, can drive gains on the bottom line. The analyst community agrees. Gartner research predicts that 70% of customer experiences will involve some kind of machine-learning component in the next three years.
AI can force-multiply sales reps by automating repetitive tasks like CRM data entry and scheduling meetings. It helps reps prioritize, so they can focus on the high-probability, high-payoff deals, and customers that matter most. AI can also provide analytics on communications between sales reps and potential clients including emails, phone calls and chats. Other use case examples include sales forecasting, lead qualification, routing/matching, and coaching sales and CX reps.
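As a toy illustration of the prioritization idea (not any vendor's actual model), deals can be ranked by expected value — a win-probability score, which would normally come from a trained model, multiplied by deal size:

# Toy prioritization: rank open deals by expected value (win probability x deal size).
# The probabilities would normally come from a trained model; these are made up.
deals = [
    {"name": "Acme renewal", "win_prob": 0.8, "size": 20_000},
    {"name": "Globex upsell", "win_prob": 0.3, "size": 150_000},
    {"name": "Initech new logo", "win_prob": 0.1, "size": 40_000},
]

for deal in sorted(deals, key=lambda d: d["win_prob"] * d["size"], reverse=True):
    print(f'{deal["name"]}: expected value ${deal["win_prob"] * deal["size"]:,.0f}')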
Fueling contact center performance
Customer expectations often intensify after the holidays when consumers seek out deals. Now is a great time to find out if your contact centers are prepared to deal with demanding customers who may be stressed out as holiday bills catch up to them during tax season, or by slow turnaround on items they returned several weeks back – or sticker shock when services and renewals leap in price.
Enter predictive call routing, which helps improve CX by looking at past customer behavior and preferences to predict the skills and personality traits an agent needs to provide the best service possible. In this use case, AI cuts escalation time and improves satisfaction for both the caller and the agents.
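Conceptually, predictive routing scores each available agent against what is known about the caller and picks the best fit. The sketch below is illustrative only — the skill tags, satisfaction scores and scoring rule are invented, whereas a production system would learn them from historical interactions.

# Illustration of predictive routing: match a caller profile to the best-fitting agent.
def route_call(caller_needs, agents):
    def score(agent):
        skill_fit = len(caller_needs & agent["skills"])   # overlap between needs and skills
        return skill_fit + agent["csat"]                  # weight skills plus past satisfaction
    return max(agents, key=score)

agents = [
    {"name": "Dana", "skills": {"billing", "returns"}, "csat": 0.92},
    {"name": "Lee", "skills": {"technical", "setup"}, "csat": 0.88},
]
print(route_call({"returns", "refund_status"}, agents)["name"])   # Dana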
Today, there are many proven examples where AI is transforming the customer experience. AI customer service bots and virtual employee assistants like Cortana are becoming more commonplace and capable. According to Salesforce’s recent State of Service report, more than half of service providers will be adding chatbots to their lineup in the next year and a half.
Contact centers continually deal with the challenges of training agents at scale. Coaching and development for agents is a top priority, and AI can help by analyzing agent metrics and caller responses to recommend specific on-demand learning and reinforce great performance. AI-enabled performance intelligence helps managers identify issues with agent performance sooner, and in a survey we conducted, 85% of managers said this feedback allowed them to manage more people simultaneously.
AI will get better for sales and more
Going to the tried-and-true baseball analogy, we are probably only in the second inning of AI utilization. AI will become progressively more useful to CX, CS, and sales teams in the coming years because the source data it consumes will become richer as departmental data silos become further integrated and more broadly shared enterprise resources.
Ramon Icasiano is chief customer officer at Pathlight.
"
|
1,648 | 2,021 |
"Yellow.ai raises $78M to expand its AI chatbot platform globally | VentureBeat"
|
"https://venturebeat.com/2021/08/04/yellow-ai-raises-78m-to-expand-its-ai-chatbot-platform-globally"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Yellow.ai raises $78M to expand its AI chatbot platform globally Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Conversational AI platform provider Yellow.ai today announced the close of an over $78.15 million series C round led by WestBridge Capital with participation from Sapphire Ventures, Salesforce Ventures, and Lightspeed Venture Partners, bringing the company’s total funding to more than $102.15 million. Yellow.ai says it will use the funds to build on its existing technology and establish a presence in the U.S., adding 70 employees to its workforce of over 500.
With digitalization on the rise, customer experience has become a major competitive advantage for companies. There’s been a corresponding rise in the adoption of virtual agents as organizations look to more effectively handle greater call volumes with existing teams. According to analysts, some enterprises that embraced chatbots for customer service next deployed them internally, not only to automate customer interactions but to flag HR problems and potential sales opportunities.
Yellow.ai CEO Raghu Ravinutala grew up in South India and worked his way up as an engineer for tech companies that include Broadcom and Texas Instruments. In 2015, he cofounded Yellow.ai with Jaya Kishore Reddy, Raghu Kumar, Rashid Khan, and Anik Das. A chance discussion with friends about poor customer experiences became the germ for an idea to create a conversational AI-driven platform.
“Every year, over 400 billion calls are made for customer support globally, and out of those, less than 0.1% have any kind of automation solution implemented,” Ravinutala told VentureBeat via email. “Over the next 10 years, the scope for implementing AI-powered automation solutions across channels is huge, and as a company we aim to capitalize on the opportunity, with a focus on high quality customer experience. Any enterprise that has over 100,000 customers or over 20,000 calls per month coming to their call center can benefit by deploying a voice bot.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! AI-powered conversations Yellow.ai’s natural language understanding engine, which trains on billions of sales, marketing, and employee engagement conversations each month, leverages intent to guide customers to convert. Using its AI-powered virtual assistants, Yellow.ai says companies can resolve queries in over 100 languages and across more than 35 text and voice channels. The platform tracks trends and analyzes sentiment to help brands understand customer needs, offering resolutions to support user experiences and handing off to live agents when necessary.
“Yellow.ai’s platform allows brands to engage with their customers wherever they are, through conversational virtual assistants … for faster conversions [and] lasting loyalty,” Ravinutala added. “AI can’t replace humans, but automation helps self-serve 70% of queries and seamlessly loop in live agents when needed … it is not just a live agent handover; it is an active learning loop across bots and humans.” Yellow.ai claims it already has live deployments for voice bots across over 700 companies (up from 200 pre-pandemic) in telecom, banking, finance, insurance, and governments — including Amazon, Biogen, and Sephora. Revenue is up 4 times compared with last year, with roughly 40% coming from global clients and 60% from customers based in India.
“The pandemic brought unprecedented times for every organizations, and no one was ready for a lockdown and elongated work-from-home scenarios. Enterprises had to quickly adapt to ensure business continuity while minimizing redundancy across geographies, establishments, and teams,” Ravinutala said. “Our growth is driven by continuous product innovation [and the] latest implementations, such as deepening multilingual voice bot capabilities, expanding enterprise integrations, and launching a developer marketplace for virtual assistants. We have been rapidly expanding our footprint across the U.S., Europe, Latin America, Middle East, and Asia-Pacific regions.” Ravinutala said pizza chain Dominos moved 100% of its customer service to “Dom,” an omnichannel AI assistant Yellow.ai developed for India. Waste Connections, a waste management company in the U.S. and Canada, deployed a Yellow.ai chatbot called Trina during the early stages of the pandemic. And Bajaj Finserv, a bank based in India, used Yellow.ai to turn its insurance division’s customer support into an upselling stream that it said brought in $100 million in three years.
“Yellow.ai has broken out of the crowded virtual assistant market with its automation-first, human-assisted model to deliver a higher customer satisfaction and incremental revenue growth to its enterprise clients,” Ravinutala continued. “With our rapid client and revenue expansion across the world, we’re geared to becoming the global leader in the customer experience automation space and are bullish on building our product, partnerships, teams, and community to truly democratize AI in the near future.” Yellow.ai, which is based in Bangalore, plans to open a San Francisco, California-based headquarters and regional offices in the coming months.
"
|
1,649 | 2,022 |
"The current state of zero-trust cloud security | VentureBeat"
|
"https://venturebeat.com/security/the-current-state-of-zero-trust-cloud-security"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The current state of zero-trust cloud security Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Cloud adoption continues to grow and accelerate across a diverse range of environments.
But despite – or perhaps because of – this, IT and security leaders are not confident in their organization’s ability to ensure secure cloud access. Further compounding this is the fact that traditional tools are falling far behind increasingly complex and ever-evolving cybersecurity risks.
One solution to this confluence of factors: zero-trust network access (ZTNA).
This strategic approach to cybersecurity seeks to eliminate implicit trust by continuously validating every stage of digital interaction.
“Clearly what’s showing up time and again is that traditional legacy security tools are not working,” said Jawahar Sivasankaran, president and chief operating officer of Appgate, which today released the findings of a study examining pain points around securing cloud environments and the benefits of ZTNA.
"Traditional tools are no longer adequate to mitigate against modern threats that we are seeing," Sivasankaran said. "There's a clear need to move toward a zero-trust approach."
Cloud insecurity
A new study, "Global Study on Zero Trust Security for the Cloud," conducted by Ponemon Institute on behalf of Appgate, surveyed nearly 1,500 IT decision makers and security professionals worldwide. Respondents' organizations represented a diverse mix of public and private cloud and on-premises infrastructure, as well as varying container adoption rates and cloud IT and data processing.
Notably, the survey indicates that there are many motivators for cloud transformation, but organizations still face numerous barriers in securing cloud environments.
Top identified motivators include increasing efficiency (65%), reducing costs (53%), improving security (48%) and shortening deployment timelines (47%).
On the other hand, top barriers identified by respondents include: Network monitoring/visibility (48%).
In-house expertise (45%).
Increased attack vectors (38%).
Siloed security solutions (36%).
The survey also found that 60% of IT and security leaders are not confident in their organization’s ability to ensure secure cloud access. Furthermore, 62% of respondents said that traditional perimeter-based security solutions are no longer adequate to mitigate the risk of threats like ransomware, distributed denial of service (DDoS) attacks, insider threats and man-in-the-middle attacks.
And cloud-native development practices continue to grow: over the next three years, 90% of respondents say they will have adopted devops and 87% will have adopted containers – yet modern security practices aren't as widespread.
For instance, only 42% of respondents can confidently segment their environments and apply the principle of least privilege, while around a third of organizations have no collaboration between IT security and devops — ultimately presenting a significant risk, according to Sivasankaran.
“There are a plethora of security technologies for the cloud,” he said. “What this is highlighting is the low level of confidence that organizations have in these technologies.” Additionally: Just 33% of respondents are confident their IT organization knows all the cloud computing applications, platforms or infrastructure services that are currently in use.
More than half of respondents cite account takeover or credential theft (59%) and third-party access risks (58%) as top threats to their cloud infrastructure.
Security practices identified as being the most important to achieving secure cloud access are enforcing least privilege access (62%); evaluating identity, device posture and contextual risk as authentication criteria (56%); having a consistent view of all network traffic across IT environments (53%); and cloaking servers, workloads and data to prevent visibility and access until the user or resource is authenticated (51%).
Trusting in security
According to Markets and Markets, the global zero-trust security market size is expected to reach $60.7 billion by 2027, representing a compound annual growth rate (CAGR) of more than 17% from 2022 (when it was valued at $27.4 billion). There have also been many high-profile calls to action in the area – such as a mandate from the U.S. White House that federal agencies meet a series of zero-trust security requirements by 2024.
Still, the survey appears to indicate that zero-trust security may be dismissed by some as a buzzword or a trendy concept.
For instance, more than half (53%) of respondents that don’t plan to adopt zero trust said they believe that the term is “just about marketing.” Still, many of those same respondents highlight ZTNA capabilities as being essential to protecting cloud resources. This, Sivasankaran noted, points to confusion around what “zero trust” actually means.
At its simplest, zero trust works to secure organizations by eliminating implicit trust and continuously validating every stage of digital interaction. This applies to networks, people, devices, workloads and data, Sivasankaran explained.
He identified the key concepts of zero trust as secure access, identity-centricity, and least-privilege access models that only grant access to what users truly need (a minimal policy-check sketch follows the list below).
From a network perspective, this means: Evaluating identity rather than just IP addresses.
Dynamically adjusting entitlements and privileges in near real time.
Isolating critical systems with "fine-grained microsegmentation."
From a people perspective, it means: Verifying identity based on user context, device security posture and risk exposure.
Only permitting access to approved resources to reduce attack surface.
Streamlining onboarding.
Simplifying policy management and reducing complexity for admins.
From a device perspective: Using device security posture as criteria for access.
Keeping unmanaged and hard-to-patch devices isolated.
Enhancing secure access with endpoint-protection data.
Dynamically adjusting entitlements based on risk level.
From a workload perspective: Preventing lateral movement with the principle of least privilege.
Automating security to scale with elastic workloads.
Deploying multifactor authentication to legacy apps without refactoring.
Using available metadata to dynamically grant entitlements/auto-provision or deprovision access.
From a data perspective: Mitigating data loss via policy enforcement and device ring-fencing.
Establishing local and bidirectional firewalls that segment critical data across any IT environment.
Establishing granular policies to control access and ingress and egress traffic.
Segmenting data via microperimeters.
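To make those principles concrete, here is a deliberately simplified policy check in Python — the groups, resources and risk threshold are hypothetical, and a real ZTNA product would evaluate far richer signals continuously rather than once per request.

# Simplified zero-trust request check: no implicit trust; every request is evaluated
# against identity, device posture and context. All names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    groups: set
    device_compliant: bool   # e.g., patched, disk-encrypted, endpoint agent running
    risk_score: float        # 0.0 (low) to 1.0 (high), derived from context signals
    resource: str

ENTITLEMENTS = {             # least privilege: each group gets only what it truly needs
    "finance": {"billing-db"},
    "support": {"ticketing"},
}

def allow(req: Request, max_risk: float = 0.5) -> bool:
    entitled = any(req.resource in ENTITLEMENTS.get(g, set()) for g in req.groups)
    return entitled and req.device_compliant and req.risk_score <= max_risk

req = Request("alice", {"finance"}, device_compliant=True, risk_score=0.2, resource="billing-db")
print(allow(req))   # True; flip device_compliant or raise risk_score and access is denied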
Ultimately, Sivasankaran said, "the key for customers is to focus on zero trust as a framework, a principle; not as a product." It is essential, he added, to provide for remote access, enterprise access, cloud access, and IoT access. "You want to make sure customers and organizations are getting access to the right data so that they can make quick decisions."
Zero trust done right
As Sivasankaran said, adopting zero trust doesn't just help organizations safeguard their hybrid cloud environments, it actually enables – and even accelerates – cloud transformation initiatives.
Survey respondents identified the top benefits of adopting ZTNA as:
Increased productivity of the IT security team (65%).
Stronger authentication using identity and risk posture (61%).
Increased productivity for devops (58%).
Greater network visibility and automation capabilities (58%).
"When done right, zero trust can drive meaningful efficiency and innovation across the entire IT ecosystem for both the security and business sides of an organization," Sivasankaran said, "rather than just being an add-on security tool." Dr. Larry Ponemon, chairman and founder of the Ponemon Institute, agreed and described organizations as being at a crossroads: They understand that legacy security solutions "aren't cutting it in the cloud," but they also have growing needs when it comes to mitigating risk.
“Zero trust can help address such challenges,” he said, “while also offering benefits beyond cloud security, particularly around increased productivity and efficiency for IT teams and end users alike.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
"
|
1,650 | 2,022 |
"IriusRisk simplifies security for developers with new infrastructure-as-code capability | VentureBeat"
|
"https://venturebeat.com/security/iriusrisk-simplifies-security-for-developers-with-new-infrastructure-as-code-capability"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages IriusRisk simplifies security for developers with new infrastructure-as-code capability Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Infrastructure-as-code (IaC) support has been made available as a component of IriusRisk's automated threat-modeling platform for application security. With IaC, software-defined infrastructure can be automatically managed and provisioned by development or operations teams, eliminating the need for manual configuration.
Stephen De Vries, CEO and cofounder of IriusRisk, told VentureBeat in an email interview that the company provides automated threat modeling and secure design so that organizations can “start left” with cybersecurity in software, progressing the “shift left” movement. He noted that organizations gain visibility into potential threats in their software through the process of threat modeling within the IriusRisk platform, which then provides developers and security teams with detailed countermeasures to fix the threats and embeds security into existing developer workflows.
IriusRisk said this latest version of its threat-modeling platform is designed to make it easier for teams to generate threat models for cloud architectures. It added that customers can generate a threat model from an IaC descriptor from cloud orchestration tools, such as AWS CloudFormation and HashiCorp Terraform, as well as from diagramming tools such as Microsoft Visio, while also containing the applicable threats and prescriptive security controls.
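This is not IriusRisk's API, but the underlying idea — deriving threat-model components from an IaC descriptor — can be sketched by walking the Resources section of a CloudFormation template. The template snippet and the mapping of resource types to component categories below are invented for illustration.

# Illustrative only (not IriusRisk's API): derive threat-model components
# from the Resources section of a CloudFormation template.
import json

template = json.loads("""
{
  "Resources": {
    "WebBucket":  {"Type": "AWS::S3::Bucket"},
    "AppServer":  {"Type": "AWS::EC2::Instance"},
    "UsersTable": {"Type": "AWS::DynamoDB::Table"}
  }
}
""")

COMPONENT_MAP = {   # hypothetical mapping from resource types to component categories
    "AWS::S3::Bucket": "object storage (data at rest)",
    "AWS::EC2::Instance": "compute node (network-exposed)",
    "AWS::DynamoDB::Table": "managed database (data at rest)",
}

for name, resource in template["Resources"].items():
    category = COMPONENT_MAP.get(resource["Type"], "unclassified component")
    print(f"{name}: {resource['Type']} -> {category}")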
Automated threat modeling
Due to the rapid increase in cybersecurity risks, businesses that develop applications are now paying closer attention to security solutions built on careful design principles. According to Synopsys, these guidelines include threat modeling, which is now essential for hardening applications to withstand potential attacks in the future.
According to a Security Compass report, only 25% of firms polled perform threat modeling throughout the requirements-gathering and design stages of software development, which comes before moving on to application development. However, another study says one strategy to encourage excellent security engineering is to limit the necessity of manually creating system and threat models by using automation instead to lessen the workload and satisfy the demands of the company and the security team.
Less than 10% of those polled in the Synopsys study reported that their companies undertake threat modeling on 90% or more of the applications they create, while more than 50% of companies report difficulty automating and integrating their threat-modeling operations.
De Vries said IriusRisk’s automated approach takes threat modeling from a static, slow and manual process, conducted on whiteboards, to an easily implemented security practice that is baked into the development cycle from the very beginning. He noted that IriusRisk delivers time and cost savings by identifying potential security risks earlier during design, which speeds up time to deployment. Most importantly, he added, it ensures software isn’t launched with high-risk insecure design flaws that would need to be tested for and fixed in post-production, or that potentially couldn’t be identified at all through application security scanning, leaving software vulnerable.
According to IriusRisk, its most recent updates enable customers to build fully automated end-to-end processes using cloud-native designs. The company says that this straightforward procedure makes it simpler and more scalable to construct a threat model with built-in, usable countermeasures. An enterprise can use infrastructure-as-code to automatically generate threat models in IriusRisk if it uses AWS CloudFormation or HashiCorp Terraform.
Addressing the global shortage of talent
U.S. labor statistics estimate that as of December 2020, there were 40 million skilled workers globally who were in high demand. By 2030, businesses globally run the danger of losing $8.4 trillion in revenue due to a skills shortage, if this pattern continues. This has a number of effects, including a strong demand for developer talent and the pressure it places on security teams.
De Vries said that IriusRisk lessens the load on nonsecurity specialists, such as developers, through automation (like IaC) and its score system, which provides prioritized countermeasures and instruction as needed. De Vries noted that as security continues to move up the executive board’s list of priorities, this helps to foster a culture of secure development inside an organization and lessens the load on security specialists and bottlenecks caused by the rework needed during testing.
De Vries said, "IaC is a vital next step in our drive to continue pushing the boundaries of threat modeling and our mission to make it easier than ever to implement in more environments, and at scale. IaC makes further automation possible and will help to put threat modeling into the hands of more nonsecurity people." De Vries said that other threat modelers are major competitors in this space. However, he said the IriusRisk threat-modeling platform is differentiated by its open architecture and pattern-based approach, rather than sticking to a few methodologies such as STRIDE, PASTA or VAST. He added that it is this open approach that allows such methodologies to be incorporated but also allows organizations to define their own particular organizational threat-modeling requirements or industry-specific requirements and standards (such as OWASP or NIST recommendations).
"
|
1,651 | 2,022 |
"Ghost Security reinvents app security with unsupervised machine learning | VentureBeat"
|
"https://venturebeat.com/security/ghost-security-reinvents-app-security-with-unsupervised-machine-learning"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Ghost Security reinvents app security with unsupervised machine learning Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Software applications are at the core of organizations of all sizes across all industries. Using APIs and microservices creates an ecosystem between users and the information they need. Because of this, there has been an exponential expansion in the development and use of applications and APIs — often leaving them unaccounted for and unsecured, according to Ghost Security, an application security company.
The industry has been grappling with how to solve the security risks that cloud applications face. Several subcategories of products attempt to support that goal, from cloud security posture management (CSPM) to identity access management (IAM), web application firewalls (WAF), data-loss prevention (DLP), runtime protection tools, static analysis and dynamic analysis.
However, despite all of these point products, application compromises are on the rise, the company said.
Coming at AI with unsupervised machine learning
Ghost Security, which emerged from stealth mode today, says it's taking a different approach and using machine learning (ML) as a core component of its platform. The technology lets security pros profile normal behavior versus abnormal behavior and detect when something anomalous happens. "The great thing about that is you have capabilities to detect attacks no one has seen before," Ghost cofounder and CEO Greg Martin told VentureBeat.
The company claims its platform will help tech leaders continue rapid application development without disrupting existing processes — as well as providing detection and response teams with comprehensive and automated application protection.
“We’re trying to build a lot of innovation into creating the defense for not just today’s applications, but for the next decade or two,’’ Martin said. “In practice, that means using technology not available 10 or 12 years ago,’’ such as machine learning, artificial intelligence (AI) and horizontal cloud scale systems.
Many app security products use supervised machine learning, which is where algorithms are trained using good and bad data so the system understands what to look for, according to Martin. But Ghost is using an unsupervised machine learning approach, "where you don't have to feed it any training data; it's learning in a different way," he explained.
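The article does not disclose Ghost's model, but the general unsupervised pattern — fit on unlabeled traffic, then flag outliers — looks roughly like the scikit-learn sketch below, where the request features, their distributions and the contamination rate are placeholders.

# Generic unsupervised anomaly detection on API-traffic features (not Ghost's actual model).
# Features per request might be: payload size, endpoint rarity, auth failures, requests/min.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[500, 0.1, 0, 20], scale=[100, 0.05, 0.2, 5], size=(1000, 4))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

suspicious = np.array([[50_000, 0.9, 5, 400]])   # huge payload, rare endpoint, auth failures
print(model.predict(suspicious))                 # [-1] means flagged as anomalous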
Another differentiator is "we architect our software in a way that is compatible with whatever [cloud provider] the customer uses," Martin said. "So if [they use] Google or Amazon Web Services or Microsoft Azure — or something totally different — we're going to build compatibility for every customer." That includes customers running on-premises data centers, Martin added.
A better approach is needed to secure assets
"What's exciting about the Ghost platform is that it removes the complex and invasive processes required to protect applications and APIs, making this type of technology more accessible to organizations across the globe," said Florian Leibert, general partner and cofounder at 468 Capital, in a statement. "They're building a solution that scales without affecting productivity and harnesses the power of machine learning in a way that will identify unknown vulnerabilities and stop more threats." Ghost Security is backed by a combined $15 million investment from 468 Capital, DNX Ventures and Munich Re Ventures. In announcing the funding, the company said it will use this influx of capital to continue focusing on building "a world-class team with the experience and passion required for developing disruptive technologies." "The surge in adoption of applications, APIs, and microservices represents great growth potential for businesses, but also introduces many new attack surfaces," said Hiro Rio Maeda, managing partner at DNX Ventures, in a statement. "A better approach to securing these assets is needed, and Ghost is well-positioned to address that challenge." Ghost is competing against companies including Imperva, F5 and Akamai, Martin said. "The space we're disrupting has traditionally been called 'web application firewalls,' but the tools are so simplistic we think with what we're doing, we won't be the only ones jumping in and doing this," Martin said.
"
|
1,652 | 2,022 |
"DDR: Comprehensive enterprise data security made easy | VentureBeat"
|
"https://venturebeat.com/security/ddr-comprehensive-enterprise-data-security-made-easy"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages DDR: Comprehensive enterprise data security made easy Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Data is precious to any organization, serving as the foundation of day-to-day operations.
And it’s also highly coveted by outsiders.
Data is the target of most cyberattacks and is one of the easiest ways to profit from hacking. And hackers don’t discriminate; no organization is immune, as evidenced by numerous recent high-profile breaches and insider threats – from the Supreme Court to Facebook to TikTok.
This has led many to question the effectiveness of existing cybersecurity tools, particularly with the proliferation of cloud computing and multicloud environments, and the complexity and decreased transparency that ensue. But a new model is emerging, and some say it is set to reinvent the cybersecurity space: data detection and response (DDR).
This new data-centric approach, according to companies specializing in it, provides instant visibility into data stores and real-time protection and response capabilities.
“DDR is a new form of enterprise data protection, a radically different approach to protecting enterprise data,” said Howard Ting, CEO of DDR platform company Cyberhaven.
“It offers more comprehensive coverage of data, is much more accurate in classification and risk identification and is much simpler to deploy and manage.” Breaches at an all-time high According to research from the Ponemon Institute , the cost of a data breach is at an all-time high – averaging $4.2 million in 2021. This reflects a 10% year-over-year increase from 2020 ($3.86 million), due in large part to the near-overnight shift to remote work and digital transformation amidst the pandemic. Costs are also amplified by system complexity and compliance failures, according to the Institute.
The “most common initial attack vector” was compromised credentials. These accounted for 20% of breaches. The second most common was phishing (17%); the third, cloud misconfiguration (15%). The highest average breach costs were due to business email compromise and malicious insider threat, the Institute reports.
Organizations that were able to successfully mitigate breaches were those with strong security AI tools and those that observed a zero-trust approach. What’s more, organizations further along in their cloud modernization contained breaches on average 77 days faster, according to Ponemon.
As risks and threats increase, the cybersecurity and cloud security markets continue to expand.
Fortune Business Insights, for instance, forecasts that the overall cybersecurity market will grow to more than $376 billion by 2029, representing a compound annual growth rate (CAGR) of 13.4%. The global cloud security market, meanwhile, is anticipated to grow to $36.43 billion by 2028, as reported by Fior Markets – up from $8.33 billion in 2020 and representing a CAGR of 20.25%.
DDR, specifically, is a young enough category that statistics are not yet available, but its leading companies include Cyberhaven and Dig.
Data: The only thing that matters
Cyberhaven was founded in 2014 and calls itself the inventor of the industry's first DDR platform. It raised $33 million in an oversubscribed series B funding round in December.
As Ting explained, Cyberhaven endpoint sensors monitor various events on a user's machine, recording and tracking every time a user acts on data — for instance, when they upload or download something or attach a file to an email. The platform captures these events, then correlates and "stitches them together" with graph analytics for analysis and risk identification.
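That stitching can be pictured as building a directed graph of data-handling events and then asking where a sensitive file has traveled. The sketch below uses networkx with invented events and is not Cyberhaven's implementation.

# Illustration of event "stitching" (not Cyberhaven's implementation): build a graph of
# data-handling events, then trace everywhere a sensitive document has flowed.
import networkx as nx

events = [   # (source, destination) pairs recorded by a hypothetical endpoint sensor
    ("finance_report.xlsx", "alice:laptop"),
    ("alice:laptop", "bob@example.com"),      # emailed as an attachment
    ("alice:laptop", "usb:drive-7"),
    ("bob@example.com", "personal:dropbox"),
]

graph = nx.DiGraph()
graph.add_edges_from(events)

# Every downstream location the report has reached, however many hops away.
print(nx.descendants(graph, "finance_report.xlsx"))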
“At the end of the day, it’s the data that matters – it’s the only thing that really matters,” said Ting. Existing tools “are not doing a very good job securing that asset, as you can tell from all the breaches that you read about all the time.” Dig, which emerged from stealth and announced its raise of $11 million in seed funding in May, also identifies itself as the industry’s first DDR solution.
The company discovers all data assets stored in platform-as-a-service (PaaS), infrastructure-as-a-service (IaaS) and database-as-a-service (DBaaS) environments. It classifies structured and unstructured data and provides real-time protection and response, while helping organizations understand how data is being used, according to CEO and cofounder Dan Benjamin.
The company says that it moves “beyond posture solutions” by helping organizations discover, monitor, detect, protect and govern cloud data. As noted by Benjamin, Dig’s engine “responds instantly” to cloud data threats, triggering alerts on suspicious or anomalous activity, helping thwart attacks, exfiltrations and employee data misuse.
It also tracks whether data sources support compliance, ensures that data assets have assigned owners and that access is regularly reviewed, and generates data security and compliance reports to "keep key stakeholders informed and auditors happy."
Where DLP and DDR diverge
In the report "Getting DLP right: 4 elements of a successful DLP program," Gartner analyst Andrew Bales acknowledges that DLP (data loss prevention) strategies that are developed independently of business initiatives fail to correctly identify sensitive data, thus exposing organizations to excessive risk of data loss and noncompliance.
Immature DLP programs are “systemically inundated” with recurrent violations and repeat offenders, and many are implemented as a “set and forget” technology without continuous development, he writes. Security and risk management leaders can miss key points in DLP vendor consideration due in part to misidentification of their business’ data-handling use cases and outstanding architectural gaps.
“Many organizations struggle to develop an effective data loss prevention program, viewing success as unattainable,” Bales writes.
A successful DLP program comes about when leaders focus on business objectives, identify data risk factors, decrease DLP violations and take heed of stakeholder frustration, he says.
But DDR providers say it’s still not enough.
“DLP is an ugly four-letter word,” Ting said. “Because it’s caused so much pain.” Historically, according to Ting, DLP tools have looked just in specific areas. But DDR looks “at all data, all the time, wherever it goes,” he said. “We act on all the data that users interact with.” The main advantage of DDR is that it’s much more comprehensive and accurate, he said. The solution “can protect any type of file, any type of data, regardless of the file type, regardless of whether it has a well-formed pattern to it,” Ting said.
Traditional DLP tools, by contrast, are narrowly defined to well-formed patterns. But there are a lot of “crown jewels” that enterprises have to protect today that have no patterns, he said. For example, source code, machine learning (ML) models and clinical research data.
Platforms solely basing classification on patterns and specific content result in “high noise,” false positives and user frustration. As a result, organizations will turn off enforcement tools or block them altogether.
“The Achilles heel today is accuracy,” Ting said. In nearly all cases, Cyberhaven’s platform replaces DLP tools. Customers are coming to understand that DDR is a “transformative approach” and “much richer and accurate” when it comes to classifying and securing data.
Data-centricity As Benjamin pointed out, the number and variety of data assets per organization is exploding. And in the cloud, data is fragmented across multiple clouds and services – a typical enterprise stores its data on more than 20 types of services and thousands of instances. This hampers an organization’s visibility into, context around and control over its cloud data, Benjamin said, while also limiting its enforcement capabilities.
Lack of security and control over these assets leads to shadow data assets, ransomware, data misuse, data exfiltration and compliance breaches, he said.
And ultimately, existing data security tools weren’t built to protect data in the cloud, he contended.
“I’ve spoken to more than a hundred CISOs and hear the same complaints over and over,” said Benjamin. “Companies don’t know what data they hold in the cloud, where it is, or most importantly how to protect it. They have tools to protect endpoints, networks, APIs, but nothing to actively secure their data in public clouds.” Ting agreed, noting that existing categories have not solved the problem of enterprise data protection, “not even their slice of the problem.” In the case of insider threat, they are also intrusive upon a user’s personal data.
“Our approach is to really focus on the data, as opposed to the user,” he said. With insider threat and insider risk becoming ever more prevalent and significant, this provides a “much more narrowly scoped investigation” and “much more fidelity” into whether a user will become an insider threat.
Overall, Ting contended, people have “kind of given up” on the cybersecurity category.
“There’s a lot of pent-up demand in the market, a lot of pain,” he said.
But he predicted a “resurrection,” saying that data-centric security models will result in a major shift in the cybersecurity industry over the next decade.
As he put it: “DDR is a category that’s ready to explode.”
"
|
1,653 | 2,022 |
"Confidential computing: A quarantine for the digital age | VentureBeat"
|
"https://venturebeat.com/security/confidential-computing-a-quarantine-for-the-digital-age"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Confidential computing: A quarantine for the digital age Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Undoubtedly, cloud computing is a mainstay in the enterprise.
Still, the increased adoption of hybrid and public clouds, combined with continued security breaches from both inside and outside forces, leave many with lingering concerns about cloud security. And rightly so.
This makes it all the more critical to have advanced, 21st-century privacy safeguards in place – even as this has often proved problematic in the security space.
“At a high level, cybersecurity has largely taken an incremental form, leveraging existing traditional tools in response to new attacks,” said Eyal Moshe, CEO of HUB Security.
But this is a “costly and unwinnable” endeavor, he pointed out, given the “determination and resources of malicious players” who can reap massive profits. Therefore, a “security paradigm shift is needed that incorporates traditional defenses but also simultaneously assumes they will not work and that every system is always vulnerable.” The solution, he and others say: Confidential computing, an emerging cloud computing technology that can isolate and protect data while it is being processed.
Closing the security gap Before an app can process data, that data must be decrypted in memory. This leaves it briefly unencrypted – and therefore exposed – just before, during, and just after processing. Hackers can access it, encryption-free, and it is also vulnerable to root-user compromise (when administrative privileges are given to the wrong person).
“While there have been technologies to protect data in transit or stored data, maintaining security while data is in use has been a particular challenge,” explained Justin Lam, data security research analyst with S&P Global Market Intelligence.
Confidential computing seeks to close this gap, providing protection for highly sensitive information while it is in use. The process “helps to ensure that data remains confidential at all times in trusted environments that isolate data from internal and external threats,” Lam explained.
How confidential computing works Confidential computing isolates data within a protected central processing unit (CPU) during processing, where it is accessible only to specially authorized programming code and invisible to “everything and anyone else.” As a result, the data is undiscoverable by human users as well as cloud providers, other computer resources, hypervisors, virtual machines and the operating system itself.
This process is enabled through the use of a hardware-based architecture known as a trusted execution environment (TEE). Unauthorized entities cannot view, add, remove or otherwise alter data when it is within the TEE, which denies access attempts and cancels a computation if the system comes under attack.
As Moshe explained, even if computer infrastructure is compromised, “data should still be safe.” “This involves a number of techniques of encryption, decryption and access controls so information is available only at the time needed, only for the specific user who has the necessary permissions within that secure enclave,” Moshe said.
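As a conceptual sketch of that pattern only: ciphertext crosses the untrusted boundary, and the decryption key is released only after the enclave’s identity checks out. The attestation check, the “enclave” dictionary and the workload below are hypothetical placeholders rather than any vendor’s TEE API, and Fernet from the third-party cryptography package stands in for the sealing mechanism.

# Conceptual sketch of the confidential-computing pattern: data stays encrypted
# outside the trusted execution environment (TEE) and is decrypted only inside
# an attested enclave. Every name here is an illustrative placeholder, not the
# API of SGX, SEV, Nitro Enclaves or any other product.
from cryptography.fernet import Fernet

EXPECTED_MEASUREMENT = "expected-hash"   # known-good enclave build hash (placeholder)

def attest(enclave: dict) -> bool:
    # Release secrets only to an enclave whose measurement matches a known-good value.
    return enclave.get("measurement") == EXPECTED_MEASUREMENT

def run_confidentially(enclave: dict, sealed_data: bytes, key: bytes) -> int:
    if not attest(enclave):
        raise RuntimeError("attestation failed: key is never released")
    plaintext = Fernet(key).decrypt(sealed_data)   # would happen inside protected memory
    return len(plaintext)                          # stand-in for the real workload

key = Fernet.generate_key()
sealed = Fernet(key).encrypt(b"patient-record-123")
print(run_confidentially({"measurement": "expected-hash"}, sealed, key))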
Still, these enclaves are “not the only weapon in the arsenal.” “Ultra-secure firewalls” that monitor messages coming in and going out are combined with secure remote management, hardware security modules and multifactor authentication.
Platforms embed access and approval policies in their own enclaves, including CPUs and/or GPUs for apps, Moshe said.
All told, this creates an accessibility and governance system that can be seamlessly customized without impeding performance, he said. And confidential computing protects against a wide range of threats, particularly software attacks, protocol attacks, cryptographic attacks, basic physical attacks and memory dump attacks.
“Enterprises need to demonstrate maximum trustworthiness even when the data is in use,” said Lam, underscoring that this is particularly important when enterprises process sensitive data for another entity. “All parties benefit because the data is handled safely and remains confidential.” Evolving concept, adoption The concept is rapidly gaining traction. As predicted by Everest Group, a “best-case scenario” is that confidential computing will achieve a market value of around $54 billion by 2026, representing a compound annual growth rate (CAGR) of 90% to 95%. The global research firm emphasizes that “it is, of course, a nascent market, so big growth figures are to be expected.” According to an Everest Group report , all segments – including hardware, software and services – are expected to grow. This exponential expansion is being fueled by enterprise cloud and security initiatives and increasing regulation, particularly in privacy-sensitive industries including banking, finance and healthcare.
Confidential computing is a concept that has “moved quickly from research projects into fully deployed offerings across the industry,” said Rohit Badlaney, vice president of IBM Z Hybrid Cloud, and Hillery Hunter, vice president and CTO of IBM Cloud, in a blog post.
These include deployments from chipmakers AMD and Intel and from cloud providers Google Cloud, Microsoft Azure, Amazon Web Services, Red Hat and IBM. Cybersecurity companies including Fortinet, Anjuna Security, Gradient Flow and HUB Security also specialize in confidential computing solutions.
Everest Group points to several use cases for confidential computing, including collaborative analytics for anti-money laundering and fraud detection, research and analytics on patient data and drug discovery, and treatment modeling and security for IoT devices.
“Data protection is only as strong as the weakest link in end-to-end defense – meaning that data protection should be holistic,” said Badlaney and Hunter of IBM, which in 2018 released its tools IBM Hyper Protect Services and IBM Cloud Data Shield. “Companies of all sizes require a dynamic and evolving approach to security focused on the long-term protection of data.” Furthermore, to help facilitate widespread use, the Linux Foundation announced the Confidential Computing Consortium in December 2019. The project community is dedicated to defining and accelerating confidential computing adoption and establishing technologies and open standards for TEE. The project brings together hardware vendors, developers and cloud hosts and includes commitments and contributions from member organizations and open-source projects, according to its website.
“One of the most exciting things about Confidential Computing is that although in early stages, some of the biggest names in technology are already working in the space,” lauds a report from Futurum Research. “Even better, they are partnering and working to use their powers for good.” Confidential confidence Enterprises always want to ensure the security of their data, particularly before transitioning it to a cloud environment. Or, as a blog post from cybersecurity company Fortinet describes it, essentially “trusting in an unseen technology.” “Confidential computing aims to give a level of security that acknowledges the fact that organizations are no longer in a position to move freely within their own space,” said Moshe.
Company data centers can be breached by external parties, and are also susceptible to insider threat (whether through maliciousness or negligence). With public clouds, meanwhile, common standards can’t always be assured or verified against sophisticated attacks.
Perimeters that provide protection are increasingly easy to breach, Moshe pointed out, especially when web services serve so many clients all at once. Then there’s the increased use of edge computing, which brings with it “massive real-time data processing requirements,” particularly in highly dispersed verticals such as retail and manufacturing.
Lam agreed that confidential computing will be increasingly important going forward to demonstrate regulatory compliance and security best practices. It “creates and attests” trusted environments for programs to execute securely and for data to remain isolated.
“These trusted environments have more tangible importance, as overall cloud computing is increasingly abstracted in virtualized or serverless platforms,” Lam said.
Still, enterprises should not consider confidential computing an end-all-be-all.
Given the growing dynamics and prevalence of the cloud, IoT, edge and 5G, “confidential computing environments will have to be resilient to rapid changes in trust and demand,” he said.
Confidential computing may require future hardware availability and improvements at “significant scale,” he said. And, as is the case with all other security tools, care must be taken to secure other components, policies, identities and processes.
Ultimately, Lam pointed out, like any other security tool, “it’s not a complete or foolproof solution.”
"
|
1,654 | 2,022 |
"Building a business case for zero-trust, multicloud security | VentureBeat"
|
"https://venturebeat.com/security/building-a-business-case-for-zero-trust-multicloud-security"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Building a business case for zero-trust, multicloud security Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Bottom Line: Building a business case for securing multicloud configurations needs to go beyond weighing costs and benefits, recognizing that public clouds lack advanced zero-trust features and unified reporting.
The pace at which enterprises want to pursue digital transformation often outpaces the security of their infrastructure. It’s especially the case when they’re relying on multicloud configurations. For example, each public cloud provider has its own version of Identity Access Management (IAM), Privileged Access Management (PAM), policy management, admin and user access controls and more.
The typical enterprise needs domain experts for each public cloud it integrates with. That’s why investing heavily in training needs to be one of the costs enterprises get right when creating a business case for multicloud security. Another reason for prioritizing training is that data integration in multicloud configurations often increases the complexity of the data itself, making data consumption, security and compliance harder to manage. The greater the data complexity, the greater the risk of misconfiguration breaches.
Invest in people first Cyberattacks on multicloud configurations succeed more due to human error than other factors. For instance, 82% of data breaches involve mistakes configuring databases and administrator options and accidentally exposing entire networks to cybercriminals.
What makes multicloud so challenging to get right from a security standpoint is its dependence on training people and keeping them current on new integration and security techniques. In addition, the more manual the hybrid cloud integration process, the easier it is to make an error and expose applications, network segments and storage.
Multicloud security business cases need to start with intensive cloud security training, including offering to pay for security certifications for members of the IT and security teams. A core part of any business case for multicloud security needs to budget enough time and funding to turn training and configuration knowledge into a strength.
Defining multicloud security’s benefits Building a business case for multicloud security needs to start by auditing all cloud configurations. Making auditing the first step helps immediately identify configuration gaps. It’s a good idea to build the business case for multicloud security on core zero-trust principles and the data obtained from auditing multicloud configurations first. The Shared Responsibility Model is a commonly used framework to explain which areas of multicloud security are owned by the cloud provider versus the enterprise customer. It’s a useful framework for communicating to senior management why zero trust needs to anchor multicloud integrations.
The following are the benefits that need to be included in creating a business case for investing in multicloud security: Reducing gaps in Identity Access Management (IAM) and Privileged Access Management (PAM) across cloud platforms reduces the risks of recurring breaches.
Like all public cloud platforms, AWS delivers a free baseline IAM module that organizations can use to get started. In addition, Microsoft Azure, Google Cloud Platform (GCP) and others offer similar IAM and PAM modules tailored for their specific platforms. They don’t cross-integrate to provide enterprise-wide IAM and PAM security, however.
Enterprises need to consider whether running dedicated IAM and PAM modules in each public cloud instance without securing the integration points is worth the risk. The majority decide to secure the entire cloud infrastructure as part of their zero-trust initiative. They’re opting for cloud-based IAM and PAM platforms that can protect an entire multicloud configuration at the infrastructure level. By 2025, 70% of new access management, governance, administration and privileged access deployments will be on converged identity and access management platforms, according to Gartner.
Reduce the complexity, cost and need for emergency security projects to fix weak multicloud configuration points.
Fixing complex cloud configurations, security misconfigurations and hacked connections burns millions of dollars a year and thousands of hours in lost productivity. Defining a business case budget for securing each integration point and removing any implicit trust across multicloud integration points is key. Assuming the 4,000 hours security teams spend on emergency cloud integration security problems could be reduced, organizations could save approximately $400,000 a year.
Reducing the risk of data exfiltration while having better visibility into why multicloud costs were so high saved one organization over $300,000 a year – and averted a malware attack.
Taking an audit-based approach to identifying the gaps in multicloud configurations helped one company identify how to fine-tune each public cloud configuration and improve the performance of their multicloud networking software. Not only did their AWS and Azure bill go down, but they also discovered their configuration changes helped thwart a malware attack that would have easily propagated fileless payloads to users and critical systems if they hadn’t done the audit.
Discovered how much budget was wasted maintaining the first cloud integrations to legacy systems.
One IT department found that the first cloud integrations they had done over a decade ago were for systems that only delivered a few data elements on a report that hardly anyone was using. The multicloud security audit found the legacy integration was over two years overdue for an upgrade, and the data elements weren’t as important to the business unit that had requested them years before. So, IT pulled the plug on the integration and re-allocated the budget to the zero-trust initiative. Cost savings amounted to approximately $25,000 a year.
Closing multicloud integration gaps reduces compliance costs and the risk of regulatory fines.
The more regulated the business, the more audits scrutinize how well data is secured, especially in multicloud configurations. The Health Insurance Portability and Accountability Act (HIPAA), General Data Protection Regulation (GDPR) and the Payment Card Industry Data Security Standard (PCI DSS) all require ongoing audits, for example. Producing the reporting and audit histories these and other regulations require, specific to how data is stored, is more efficient when multicloud security integration is in place. The time and cost savings organizations see from automating audits vary significantly; it’s a reasonable assumption to budget at least $75,000 in savings per year in audit preparation costs alone.
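Taken together, the annual benefit estimates cited above can be tallied for a rough sense of scale; the figures are the illustrative numbers quoted in this section, not benchmarks.

# Quick tally of the annual benefit estimates cited above (illustrative figures only).
benefits = {
    "reduced emergency integration work": 400_000,
    "lower exfiltration risk / cloud spend": 300_000,
    "retired legacy integration": 25_000,
    "audit preparation savings": 75_000,
}
total = sum(benefits.values())
print(f"estimated annual benefit: ${total:,}")   # -> $800,000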
Evaluating multicloud security costs The following are the most significant multicloud security costs that need to be included in the business case: Annual, often multi-year licensing costs for IAM are minimal, with PAM also offered as part of a suite on large enterprise deals.
IAM providers vary significantly in their pricing models, costs and fees, depending on the size of the organization and the number of devices. Vendors have been known to bundle in PAM modules for no charge on large-scale enterprise deals. TrustRadius finds that vendors sell tiers of functionality with enterprise-level pricing.
As IAM is a cornerstone of zero trust, it’s a good idea to begin early on in an organization’s zero-trust roadmap. AWS offers its IAM for free, which is why so many enterprises stick with it despite its lack of multicloud security coverage.
Evaluate if multicloud network software (MCNA) is a good fit for your organization, as it’s proving valuable for addressing network weaknesses in organizations today.
Enterprises often select MCNA software to compensate for the lack of advanced features and consistent management of multi-cloud configurations. Organizations rely on MCNA deployments to achieve a consistent network operations model across all public cloud deployments. Consider using consumption-based pricing on a one- to three-year contract, and renegotiate based on results. As an example, Arrcus Multi-Cloud Networking (MCN) is available on the AWS Marketplace and is $400,000 a year running on a t2.medium EC2 instance.
Double down on education and change management costs.
Change management, implementation and integration costs increase with the complexity of multicloud security integration. Expect to pay at least $6 for every $10 spent on software toward education, implementation, integration and change management. For example, if total software costs are $100,000, expect to pay at least an additional $60,000 for all aspects of training, implementation, integration and change management.
Creating a compelling business case for multicloud security The best multicloud security business cases provide a 360-degree view of costs, benefits and why acting now is needed.
Knowing the initial software and services costs to acquire and integrate multiple clouds across your organization, along with training, change management and ongoing support costs, is essential. Many include the following equation to provide an ROI estimate in their business cases. The return on investment (ROI) for a multicloud security initiative is calculated as follows: ROI = (initiative benefits – initiative costs) / initiative costs x 100.
A financial services company recently calculated the annual benefits of multicloud integration at $800,000 against costs of $421,840, for an ROI of roughly 90%, or about $0.90 of net return for every $1 invested.
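Applying the stated formula to those two figures is a one-line calculation; the dollar amounts are the ones quoted above.

# ROI per the formula above, using the figures quoted for the financial services firm.
benefits = 800_000
costs = 421_840
roi_pct = (benefits - costs) / costs * 100
print(f"ROI = {roi_pct:.1f}%")                                        # ~89.6%
print(f"net return per $1 invested = ${(benefits - costs) / costs:.2f}")  # ~$0.90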
Additional factors to keep in mind when building a business case for multicloud security: Multicloud ROI estimates fluctuate, so it’s best to get started with a pilot to capture live data, using budget that becomes available at the end of a quarter.
Typically, organizations will allocate the remaining amounts of IT security budgets at the end of a quarter to multicloud initiatives.
Succinctly define the benefits and costs and gain C-level support to streamline the funding process.
It’s often CISOs who are driven to achieve greater multicloud security as quickly as they can. Today, with so many businesses running largely virtual workforces, there’s added urgency to get multicloud security right.
Define and measure multicloud initiatives’ progress using a digitally enabled dashboard that can be shared across any device, anytime.
Everyone supporting and involved in multicloud security initiatives must know what success looks like. A digitally enabled dashboard that clearly shows each goal or objective and the company’s progress toward them is critical to success.
Zero trust needs to be designed in Multicloud security needs to be included in any zero-trust framework and roadmap, focusing on quick wins in the areas of IAM, PAM and secured identity access for humans and machines across the network infrastructure. In addition, IT and security teams creating the zero-trust roadmap must target those multicloud integration points that rely on implicit trust. They’re everywhere in legacy system integration points. Going after those first will help remove a major risk to the network and future zero-trust progress.
"
|
1,655 | 2,022 |
"VMware introduces cloud workload protection for AWS | VentureBeat"
|
"https://venturebeat.com/security/aws-cloud-workload-protection"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VMware introduces cloud workload protection for AWS Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Few technologies have had the transformative influence of cloud services, laying the groundwork for the next generation of apps and insights, but it’s come at a cost for security teams.
Today’s cloud-native, public cloud and hybrid cloud environments are more complex than ever before, to the point where security teams are struggling to secure them.
For instance, in the State of Cloud Security 2021 report , researchers found that 98% of organizations reported having at least one cloud data breach in the past 18 months, with 67% reporting three or more incidents.
In an attempt to offer organizations greater support in maintaining their cloud security, VMware today has announced the release of advanced workload protection for AWS to help offer AWS customers better visibility over on-premises and cloud environments.
The new release expands VMware Carbon Black Workload so it can now automatically gather and list vulnerabilities in AWS environments, provide next-generation antivirus (NGAV) to analyze attackers’ behavior patterns, and deliver endpoint detection and response capabilities.
Deepening visibility into the cloud As adoption of cloud services has increased during the COVID-19 pandemic, more organizations have found that they lack the internal skills to secure their cloud-facing environments.
Research from Ermetic released today found that 80% of companies report they lack a dedicated security team for protecting cloud resources from threats.
When combined with previous findings that 86% of companies experience a skills gap for implementing cloud technologies, it’s unsurprising that organizations lack the internal resources necessary to secure these complex environments.
The best way to remedy this situation is to give security teams greater visibility over what’s going on.
“By enabling security teams to see workloads that are ephemeral and transient in nature, VMware Carbon Black Workload for AWS provides authoritative context to help AWS customers better secure cloud workloads,” said Jason Rolleston, vice president of product management and co-general manager for VMware’s security business unit.
“Automatic gathering and listing of vulnerabilities helps identify risk and harden workloads to shrink the attack surface, while CI/CD packages for sensor deployment further simplify sensor lifecycle management,” Rolleston said.
By automatically generating a list of vulnerabilities, security teams can better understand their exposure to threat actors and discover ways to optimize their defenses.
The global cloud workload protection market The announcement comes as the global cloud workload protection market is expected to grow from a value of $4.79 billion in 2021 to reach $28.39 billion by 2029 at a compound annual growth rate (CAGR) of 24.9% as more organizations look to enhance their cloud security posture.
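As a quick sanity check, the two endpoints and the quoted growth rate are mutually consistent; the figures below are simply the ones reported above.

# Compound-growth check on the quoted market figures (2021 -> 2029, 8 years).
start, rate, years = 4.79, 0.249, 2029 - 2021
projected = start * (1 + rate) ** years
print(f"projected 2029 market: ${projected:.2f}B")   # ~$28.4B, in line with the cited $28.39 billion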
VMware is one of the leading desktop-as-a-service and virtual machine providers in the market, and its workload protection release for AWS has the potential to assist a wide range of enterprises looking to secure their cloud environments.
However, VMware is competing against other cloud workload protection providers like Trend Micro with Trend Micro Cloud One, which offers a workload security solution that automatically protects new and existing workloads. Trend Micro also recently announced earning annual recurring revenue of over $550 million last year.
Another significant competitor is Palo Alto Networks ‘ Prisma Cloud, a cloud workload protection solution designed to protect hosts, containers and serverless applications and manage vulnerabilities through a dashboard view. Palo Alto Networks recently announced raising third-quarter revenue of $1.4 billion.
The main differentiator between VMware’s solution and competitors is its use of VMware Contexta, the organization’s security threat intelligence cloud.
“With enterprise threat-hunting for workloads that includes behavioral EDR, AWS customers can turn threat intelligence into a prevention policy to avoid hunting for the same threat twice. This telemetry feeds into VMware Contexta to shrink the gap between attackers and defenders while enabling greater visibility, control and anomaly detection for workloads,” Rolleston said.
"
|
1,656 | 2,022 |
"API security firm Impart Security promises solutions, not more alarms, for overwhelmed security staff | VentureBeat"
|
"https://venturebeat.com/security/api-security-firm-impart-security-promises-solutions-not-more-alarms-for-overwhelmed-security-staff"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages API security firm Impart Security promises solutions, not more alarms, for overwhelmed security staff Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Application programming interface (API) attacks are a common attack vector for hackers, resulting in data breaches for enterprise web applications. While today’s interconnected business ecosystem allows for innovation and growth, it also comes with security risks. Because APIs enable access to fundamental software functions as well as data, they are becoming a primary network entry point for cyberattacks.
With an increase in the use of APIs comes an increase in risk, so naturally there is increased demand for security.
Impart Security has announced today it has secured $6 million in seed funding led by CRV. Other investors include Haystack, 8-bit Capital and O’Reilly AlphaTech Ventures.
Founded by Signal Sciences alumni The team behind Impart includes several former colleagues from Signal Sciences – a web application security company that sold to Fastly for $775 million in 2020. “From our experience at Signal Sciences, we became familiar with the challenges of API security as an extension of the growth and disruption of the WAF [web application firewall] market,” said CEO and cofounder of Impart, Jonathan DiVincenzo.
“Talent is the heart and the engine of every successful business,” said Murat Bicer, general partner at CRV. “Impart Security’s leadership team is chock-full of high-caliber product, engineering and security DNA and we all worked closely together at Signal Sciences, so this is a bit of a homecoming for all of us.” About Impart Security’s technology, Bicer said, “This team’s approach to API security is a game changer for enterprises because it provides them with monitoring and observability capabilities as well as tangible security benefits CISOs and their teams can measure.” Protecting your APIs without adding a lot of noise There are more than a dozen products on the market for testing APIs. Choosing the one that’s best for you depends on your current network and your needs. Testing can be done in two main ways: manually and automatically.
Postman and Katalon are popular options for manual API testing. Manual testing is time-consuming; you can script automated tests in these tools, but that requires extra upfront time and effort.
For maximum efficiency, you might consider a tool that automates the process of testing your APIs. Automated API security tests can save money by reducing the need for manual testers, allowing your staff to focus on other tasks. Impart says its focus is on providing a tool for the mostly overwhelmed security practitioners who are tackling security concerns in areas such as Kubernetes, microservices and cloud computing.
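For a sense of what such automation looks like in practice, a minimal automated API security check can be written with pytest-style tests and the requests library. The base URL, endpoints and expected behavior below are hypothetical and used only to show the shape of the test; this is not Impart’s product.

# Minimal automated API security tests using pytest-style functions + requests.
# The base URL and endpoints are hypothetical placeholders for illustration.
import requests

BASE_URL = "https://api.example.com"

def test_rejects_unauthenticated_access():
    # A protected endpoint should refuse requests without credentials.
    resp = requests.get(f"{BASE_URL}/v1/accounts", timeout=5)
    assert resp.status_code in (401, 403)

def test_does_not_leak_server_details():
    # Responses should not advertise server/framework versions attackers can use.
    resp = requests.get(f"{BASE_URL}/v1/health", timeout=5)
    assert "/" not in resp.headers.get("Server", "")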
Impart Security promises to detect security threats and vulnerabilities to any API tech stack, and then resolve the problem automatically without human involvement. DiVincenzo said, “While other platforms and technologies can detect security threats, Impart is the only one to also provide out-of-the-box solutions to those vulnerabilities. The last thing we want to do is be a noisy alerting system – we aim to solve problems for security practitioners, not add to their workload.” Impart is currently in closed beta with a select group of commerce and enterprise customers. Based on current beta feedback, the timeline for commercial launch is in the fourth quarter of 2022.
"
|
1,657 | 2,022 |
"Nvidia reveals QODA platform for quantum, classical computing | VentureBeat"
|
"https://venturebeat.com/quantum-computing/nvidia-reveals-qoda-platform-for-quantum-classical-computing"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nvidia reveals QODA platform for quantum, classical computing Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Today, at the Q2B conference in Tokyo, GPU and AI kingpin Nvidia is announcing QODA — its Quantum Optimized Device Architecture, designed to create a single programming environment for hybrid classical-quantum computing.
Similar in overall aim (and name) to Nvidia’s CUDA (Compute Unified Device Architecture) platform for parallel computing development, QODA takes the highly specialized quantum development discipline and makes it accessible to a broader range of software developers. But the plotline for Nvidia GPUs in the quantum world is more nuanced than it is even in AI, and QODA’s goal is to make it straightforward.
Brave new quantum world “It’s a very different world than it was a decade ago,” Timothy Costa, Nvidia’s director of HPC and quantum computing products, told VentureBeat. Costa explained what’s behind the progress the quantum industry has made: “What we see is the industry going from one- or two-qubit systems, most of them in academia, up to today, to systems with 200+ qubits based in the cloud.” Qubits are the rough equivalent of bits in classical computing, but while a bit reads as either zero or one, a qubit can exist in a superposition of both states at once, making qubits and the hardware that instantiates them the essence of quantum computers.
QODA welcomes all developers aboard QODA’s credo is helping nonquantum-specialized developers take advantage of this industry progress. Specifically, it’s aimed at developers focused on particular domains, including drug discovery, chemistry, finance and optimization (as a general computing technique), where quantum can accelerate things and make it feasible to attack problems that would otherwise be computationally impractical to address. These areas benefit best from a combination of classical computing (albeit in the powerful form of HPC — high performance computing) and quantum.
Nvidia’s GPU technology is already a dominant platform in the HPC world, of course. But it turns out to have specific applicability on the quantum side as well. That’s because, while GPUs aren’t quantum hardware, they may serve as a more effective medium for quantum circuit emulation than CPUs, since GPUs can implement state vector and tensor network methods, which accelerate quantum circuit simulations.
In effect, this means a big GPU system, like Nvidia’s DGX platform , may be able to handle hybrid scenarios especially well, since it offers one physical infrastructure layer that can service both classical and quantum computing workloads.
QODA addresses this new “split personality” potential of GPUs by offering a single platform for hybrid development. Underlying this is Nvidia’s cuQuantum SDK and its DGX Quantum Appliance. The cuQuantum SDK allows developers to simulate quantum circuits on GPUs. It includes integration with quantum computing frameworks Cirq , Qiskit and Pennylane.
The DGX Quantum Appliance is a software container that integrates the frameworks with cuQuantum and runs on any Nvidia hardware.
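For developers coming from the Python frameworks named above, GPU-backed simulation looks much like ordinary circuit simulation. The sketch below assumes a GPU-enabled build of Qiskit Aer (for example, one backed by cuQuantum); on a CPU-only install the device="GPU" option would not be available.

# Sketch of GPU-accelerated statevector simulation via Qiskit Aer.
# Assumes a GPU-enabled build of qiskit-aer; otherwise device="GPU" is unavailable.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(20)
qc.h(0)                                  # put qubit 0 in superposition
for q in range(19):
    qc.cx(q, q + 1)                      # entangle a 20-qubit GHZ chain
qc.measure_all()

sim = AerSimulator(method="statevector", device="GPU")
result = sim.run(qc, shots=1024).result()
print(result.get_counts())               # expect roughly half all-zeros, half all-ones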
With these technologies underlying it, QODA provides two things to help make quantum computing more accessible to conventional developers: A kernel-based programming model for quantum computing development with interfaces for common programming languages, such as C++ and Python, A compiler that can accommodate quantum and classical computing-oriented instructions comingled in the same source code, as seen in the figure below.
(Figure: hybrid coding example with a block of quantum code on top and GPU-oriented code below. Credit: Nvidia)
Combining virtual and physical QODA and cuQuantum work with emulated QPUs (quantum processor units) on GPU hardware, but they work with physical QPUs as well, so code written on the platform is portable between emulated and physical environments.
In fact, QODA and cuQuantum were developed in partnership with numerous vendors in the quantum space, including hardware partners like IQM Quantum Computers, Pasqal, Quantinuum, Quantum Brilliance and Xanadu; software/algorithm partners like QC Ware and Zapata Computing; and supercomputing centers including Forschungszentrum Julich, NERSC/Lawrence Berkeley National Laboratory, and Oak Ridge National Laboratory.
The diversity of hardware partners involved means QODA also works across a variety of qubit “modalities,” including superconducting, neutral atom, trapped ion, diamond processors and photonics.
What’s ahead for Nvidia and quantum Costa told VentureBeat that with QODA, Nvidia hopes to provide developers with access to disruptive compute technology and allow domain scientists to leverage quantum acceleration, tightly coupled with the best of GPU supercomputing.
Nvidia sees QODA’s mission as getting developers who are focused on a class of applications (rather than on quantum computing itself) to use quantum and to see it as a technology that can accelerate what they’re already doing. This is a pragmatic approach to adoption of quantum computing, which could be the biggest change in computing since the introduction of the microcomputer — or maybe even the mainframe.
Nvidia’s goal with its partnering strategy with QODA is to bring together many startups with the likely effect of promoting cohesion and an ecosystem in the quantum arena. Doing so is key to helping the space mature and be more attractive for adoption by enterprise customers.
Just as Nvidia has helped make AI and autonomous cars actionable to large customers, the QODA announcement should help make quantum computing more industrialized and commercially viable.
"
|
1,658 | 2,022 |
"Why cloud-native observability is key to delivering first-class digital experiences | VentureBeat"
|
"https://venturebeat.com/programming-development/why-cloud-native-observability-is-key-to-delivering-first-class-digital-experiences"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Why cloud-native observability is key to delivering first-class digital experiences Share on Facebook Share on X Share on LinkedIn Presented by Cisco AppDynamics Across industries, and in the wake of the pandemic, companies report that they’ve been able to slash digital development time from years to months, weeks, and even days, making the ability to deliver best-in-class user experiences a bigger competitive advantage than ever.
“While many organizations still run their mission-critical and revenue-generating systems with traditional applications, the pandemic and hybrid work have accelerated the move toward devops initiatives for modern business apps,” Joe Byrne, Executive CTO, Cisco AppDynamics says. “This trend has made it easier for IT teams to deliver exceptional digital service, spawning an end-to-end experience revolution among consumers and end users.” According to the recent AppDynamics report, “ The Journey to Observability ,” 96% of technologists say being able to monitor their entire IT stack, and directly link performance-to-business outcomes, is the key to ensuring first-class digital experiences. And 79% understand that the technology decisions they make directly impact the performance of the business.
Unfortunately, deep, end-to-end visibility across an application ecosystem is tremendously complicated — in today’s modern business, there’s never been more going on under the hood.
The modern IT (hay) stack “To deliver the consistent, reliable digital experiences that consumers and end users now demand, IT teams must monitor and manage a dynamic set of application dependencies across a mix of infrastructure, microservices, containers and APIs using home-grown IT stacks, multiple clouds, SaaS services and security solutions,” says Byrne. “Some traditional monitoring approaches break down in this vastly complex and dynamic ecosystem.” To scale, companies are leaning on cloud-native technology, as well as distributed infrastructures with microservices and containerized components. Technologists are leveraging more third-party services, like SaaS applications and public internet gateways, to optimize the end-user experience for their applications.
At the same time, potential security vulnerabilities are a constant, ongoing issue for highly distributed and cloud-based solutions, making managing the application landscape even more complex. And because the modern application stack is distributed, narrowing down the underlying cause of application performance issues can be like searching for a needle in a haystack. Cutting through noise to identify what’s going on is a significant challenge, 85% of technologists say.
Achieving full-stack observability Full-stack observability is the secret to digital transformation, enabling teams to pivot quickly in the face of IT issues, and deliver flawless customer experiences. It enables real-time observability across the modern technology stack, from applications and software-defined compute to storage, services, network and more. It offers in-depth visibility into the behavior, performance and health of the app and supporting infrastructure via high-fidelity telemetry collected from the entire IT estate. IT can pinpoint underlying issues in real time, from third-party APIs down to the code level.
With deep, end-to-end visibility across an application ecosystem, operational silos can be broken down. And with aligned IT teams, technologists can proactively address problems before they ever affect performance, prioritizing issues based on user and business impact. It also lets companies take a good hard look at infrastructure cost and performance alongside critical business metrics like conversions, to resolve issues before they impact the bottom line.
“By centralizing and correlating application performance analytics across the full stack, IT teams can better collaborate to isolate issues and optimize application experiences,” Byrne says. “The ability to monitor all technical areas across their IT stack and directly link performance-to-business outcomes is now essential to delivering first-class digital experiences.” Leveraging cloud-native observability With the complexity of distributed architectures and underlying services growing in leaps and bounds, observability platforms need to keep pace. In fact, Cisco AppDynamics recently launched AppDynamics Cloud , a cloud-native observability platform designed to optimize cloud-native applications.
AppDynamics Cloud ingests telemetry data generated across the entire IT stack to deliver actionable insights into application performance and security. It offers flexibility, choice and agility to develop and deploy applications and enhance the digital experience.
The platform enables collaboration across teams including devops, site reliability engineers (SREs) and other business stakeholders to achieve common benchmarks like service-level objectives (SLOs) and organizational KPIs. AppDynamics Cloud users can use supported services from AWS and Azure to develop and deploy API-first applications that enhance and expand the digital experience.
AppDynamics Cloud ingests the deluge of metrics, events, logs and traces (MELT) generated in this environment — including network, databases, storage, containers, security and cloud services — to make sense of the current state of the entire IT stack all the way to the end user. Actions can then be taken to optimize costs, maximize transaction revenue and secure user and organizational data.
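On the application side, that telemetry is typically emitted through an open standard such as OpenTelemetry. The sketch below uses the OpenTelemetry Python SDK with a console exporter standing in for whatever collector or backend an observability platform actually ingests from; the service and span names are invented for the example.

# Minimal sketch of emitting trace telemetry with the OpenTelemetry Python SDK.
# A console exporter stands in for the collector/backend an observability
# platform would actually ingest from.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")

with tracer.start_as_current_span("process-order") as span:
    span.set_attribute("order.value", 42.50)   # business metric attached to the trace
    # ... application work happens here ...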
Current AppDynamics customers can upgrade to AppDynamics Cloud and leverage their existing application performance monitoring (APM) agents, or feed both solutions concurrently. The platform supports cloud-native, managed Kubernetes environments on Amazon Web Services (AWS), with future expansion to Microsoft Azure, Google Cloud Platform and other cloud providers.
To register your interest, contact Cisco AppDynamics sales here.
For further information about AppDynamics Cloud, go here.
"
|
1,659 | 2,022 |
"The 3 key strategies to slash time-to-market in any industry | VentureBeat"
|
"https://venturebeat.com/programming-development/the-3-key-strategies-to-slash-time-to-market-in-any-industry%ef%bf%bc"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored The 3 key strategies to slash time-to-market in any industry Share on Facebook Share on X Share on LinkedIn Presented by PMI In a rapidly changing market driven by megatrends , from labor shortages and digital disruption to the climate crisis and global economic shifts, businesses need to find smarter strategies to get to market fast. The need for speed is the most frequently cited reason organizations are prioritizing innovation and growth, McKinsey found — and fast companies outperform in the market.
“Speed and efficiency have become new benchmarks — and organizations that are too slow, or too comfortable, could soon be left behind,” says Sierra Hampton-Simmons, Vice President of Products at Project Management Institute (PMI). “But gaining speed should not mean breaking the rules. It needs to be done strategically based on the needs of your customer.” Slashing time to market in a sustainable, transparent and efficient way requires a few things, including revisiting policies to root out rigidity, and finding ways to boost strategic clarity. PMI research shows businesses must also prioritize flexibility and agility, as well as encourage constant innovation for project professionals.
There are three movements in the business world that are helping organizations work smarter and more efficiently: a focus on the collaborative citizen developer; a turn toward more effective agile development; and an emphasis on innovative problem-solving to swiftly tackle challenges of every size.
Here’s a closer look at these three strategies which can help transform a company’s way of working, approach to innovation and speed to market without jeopardizing strategy.
1. Low-code and no-code and the citizen developer
The deficit of skilled software developers will hit 500,000 by 2024, IT departments are overworked and the overall tech talent shortage is painting a grim picture. That's where the citizen developer comes in. Not only does an investment in citizen developers help bridge the tech talent gap, but it also supercharges internal collaboration and a company's ability to deliver real value to customers and stakeholders, offering greater operational efficiency and productivity across the board.
Low-code and no-code solutions are the key. These platforms replace hand-coding with intuitive drag-and-drop interfaces and pre-coded workflows that can give anyone, regardless of tech expertise or prior coding experience, the ability to build complex interactions, transactions and processes that can be easily automated.
Citizen development eliminates the middleman, taking the pressure off IT, and putting the power to build solutions that solve pain points directly into the hands of the project professionals who are right there in the mix. They’re intimately familiar with the context of each problem they’re solving for, and have the expertise to conceive the practical, innovative solutions necessary. Further, they have the tools at hand to develop, test and iterate, fast and efficiently, to drive digital transformation as a strategic business partner.
An effective citizen developer strategy requires appropriate guardrails to keep these projects in line with the organization’s broader IT strategy and governance policies , from data privacy to security, costs and quality. The PMI® Citizen Developer suite of resources guides organizations in developing a secure, effective low-code and no-code technology strategy that optimizes the power of the citizen developer by overcoming challenges and pitfalls, leading to faster value delivery.
2. The transformative power of agility
Agile is a powerful tool for driving successful projects fast and efficiently. But many organizations look at agile as a one-size-fits-all approach, which limits the full benefits of the technique. Far from a cookie-cutter approach, agile offers a variety of methodologies and frameworks, including Scrum, Lean, Kanban and much more. Each approach is specially suited to help address a variety of project objectives — but locking your team into one specific framework limits innovation, and therefore limits team speed.
The best way of working is specific to every team, and agile frameworks are a good starting point — but a truly agile approach means borrowing the best thinking across the strategies available to you, from agile to lean and traditional sources. PMI’s hybrid tool kit harnesses hundreds of agile practices to guide you in the best way of working for your team or organization in a tailorable and scalable manner.
PMI’s tool kit is architected into four views and four layers. The “Mindset” layer builds on the foundations of agile and lean to address enterprise realities. “People” is about giving any person one or more roles to help create a truly adaptable team. “Flow” is a streamlined way to adopt process in a context-sensitive manner. And the “Practices” layer is about scaling, whether that’s at the team level (tactical agile) or at the organizational level (strategic agility).
PMI also offers both instructor-led and self-paced training , with interactive courses, micro-credentials and certifications that include simulations, activities and supplemental reading to reinforce the toolkit’s guidance. Those who master the toolkit will obtain the skills needed to tailor their way of working, leading to optimized organizational and team effectiveness and faster speed to market.
3. Solving complex problems with Wicked Problem Solving
Traditional problem-solving techniques are not standing up to the increasingly complex array of issues businesses face as economic, technological, environmental and political landscapes keep shifting. Problems of every size, from the major challenges that impact an organization's future to smaller, everyday obstacles, have become resistant to tried-and-true strategies.
For these “wicked problems,” PMI, alongside TED speaker, entrepreneur and technology pioneer Tom Wujec, has developed PMI Wicked Problem Solving, rooted in cognitive science and incorporating elements of design thinking and lean and agile practices. It is a shared operating system for solving problems and boosting collaboration, designed to enhance traditional or agile project management approaches.
Tasks are organized into a series of plays, the basic building block — time-bound periods in which a team clearly articulates the problem, creates a visual model of the issue and uses visualizations to make ideas concrete and engaging as the team works collaboratively to build a solution. It’s helpful not only for complex challenges, but for helping teams run more successful meetings and making conversations more productive.
There are many types of plays, from simple ones that apply to most situations to more sophisticated plays used to break down complex issues. More basic plays can be incorporated into a professional’s toolkit almost immediately and, once the system is mastered, any number of different plays can be assembled to break down and solve any issue. Finding innovative ways to diagnose problems and visualizing and identifying solutions more quickly can help teams bring value to customers faster and more consistently.
PMI and Wujec’s Wicked Problem Solving course and tool kit is integrated with Miro , an online whiteboard collaboration tool. The course consists of 20 core video lessons that outline the principles and practical techniques, a workbook, a playbook and three decks of Wicked Problem Solving Principle cards for configuring plays.
Upskilling for success
The foundation of all these strategies is an environment where employees have the mindset, skills, knowledge, tools and customer understanding they need to make their work more efficient and to realize positive organizational, environmental and societal impact. But the number-one barrier to developing those capabilities is a lack of strategic prioritization of learning and development (L&D).
“While businesses understand the necessity of L&D, it is important that executives do not offer trainings just for trainings’ sake,” Hampton-Simmons added. “Businesses need to strategically evaluate all options and consider which programs not only help your business improve, but also provide your team members with the skills that they value most. PMI can work with you to understand these needs and find a solution that is customized and impactful.” Organizations that gain speed through strategic upskilling can better prepare their workforce to provide value to customers more quickly, adapt nimbly to new technology, and weather the storm of the next big event — without sacrificing quality.
Learn more about how PMI’s thought leadership, training and tools are helping companies equip their talent with the knowledge and opportunities they need to thrive.
"
|
1,660 | 2,022 |
"The case for financial operations (finops) in a cloud-first world | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/the-case-for-financial-operations-finops-in-a-cloud-first-world"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community The case for financial operations (finops) in a cloud-first world Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Accenture research found that a value gap is emerging between planned and actual value realized: only one in three companies (35%) report that they have achieved the expected cloud benefits, and cost is cited as a key barrier.
As cloud increasingly becomes the foundation of every organization’s digital core, they often encounter common problems that can lead to cloud overspend. From complex pricing and billing to a lack of accountability and transparency to reviewing supplier costs in isolation, overspend is quite common.
Furthermore, technology leaders in organizations are increasingly asked to demonstrate how their spend on cloud is supporting the business strategy and how it is aligned to the associated targets. How can they solve this? Let’s dig deeper.
Showing ROI for cloud investments
Investment in cloud and usage across industries is pervasive, growing and constantly evolving. In fact, global spending on cloud services is expected to reach nearly $500 billion this year. Although companies are executing on cloud-migration strategies, many are not yet achieving the benefits they originally imagined.
The answer lies, in part, in the rapidly advancing domain of cloud financial operations (aka finops), a methodology that advocates for a collaborative working relationship between devops, finance and business teams to mitigate the cost overruns and close the value gap.
Finops principles
The fundamental principles of finops include:
Teams need to collaborate
Everyone takes ownership for their cloud usage
Reports should be accessible and timely
Decisions need to be driven by the business value of cloud
Everyone should take advantage of the variable cost model of cloud
Deploying finops capabilities in an organization typically has the immediately measurable benefit of reducing cloud spend by 20-30% while enabling better alignment of cloud spend to business metrics and supporting strategic decision-making.
To be successful, finops requires a change in behaviors and culture that fosters collaboration between devops, finance and business teams. By building financial control, transparency and accountability into the cloud operating model, companies can assign the true financial cost of cloud to each relevant part of the organization. This transparency is vital in optimizing the use of cloud and ensuring individual business units and application owners take responsibility for their own cloud usage and cloud costs, aligning spending decisions with the business value being provided.
In short, the whole organization is better aligned around the total cost of ownership of the cloud estate. What would you say if your cloud costs suddenly doubled? Well, if revenues quadrupled as a result, twice the cloud spend is great news. Finops enables this level of business visibility.
Adopting the finops model
How can leaders put finops in place? It requires internal alignment, with IT and the business working together to manage and optimize cloud. We recommend companies take the following actions:
Create the ability to accurately estimate, forecast and allocate the costs of cloud back to the consuming business units (non-shared and shared costs). For example, at one tech hardware company, simply showing cloud consumers where the money was going resulted in decommissioning several abandoned sandbox environments.
Enable real-time monitoring, tracking and reporting of cloud costs in line with forecasts to quickly detect and resolve issues. One financial services team saw daily spend on a serverless function go from $0.12 to over $14,000 due to a misconfiguration that got pushed to production. Catching mistakes like these early is crucial (see the sketch after this list).
Continuously optimize cloud usage through reduction of unnecessary spend, as well as purchase of commitments, where suitable to lower the unit costs, and report on achieved savings. The 2021 State of FinOps survey uncovered that the average finops team size at “Walk” level maturity was seven full-time people. Tracking savings is how this team shows measurable value in addition to the soft benefits of improved visibility, accountability and tech value realization.
Leverage the continuous innovation of cloud services to evolve and re-imagine workloads to increase speed, improve value and lower cost. Collectively, the hyperscale cloud providers are investing $10 billion each month into capabilities for their customers.
Get started with the carbon footprint tools now available from the cloud providers. Cloud use has the potential to be a powerful force for good or for ill for sustainability, and more and more companies are setting public carbon goals and reporting them to the stock market and stakeholders.
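To make the real-time monitoring recommendation above concrete, here is a minimal, illustrative Python sketch of the kind of daily spend check a finops team might run. It assumes per-service daily costs have already been exported from the cloud provider's billing data; the input structure, threshold values and alert output are hypothetical placeholders, not part of any finops standard.

# Minimal sketch: flag services whose daily spend jumps far above their recent average.
# Assumes `daily_costs` maps each service to a list of daily costs in dollars,
# oldest first, exported from the provider's billing data (hypothetical input).

def find_cost_anomalies(daily_costs, lookback_days=7, spike_factor=5.0, min_dollars=50.0):
    anomalies = []
    for service, costs in daily_costs.items():
        if len(costs) < lookback_days + 1:
            continue  # not enough history to compare against
        history = costs[-(lookback_days + 1):-1]
        today = costs[-1]
        baseline = sum(history) / len(history)
        # Flag only meaningful spikes: well above the trailing average and above a floor.
        if today > max(baseline * spike_factor, min_dollars):
            anomalies.append((service, baseline, today))
    return anomalies

if __name__ == "__main__":
    sample = {
        "serverless-fn": [0.12, 0.11, 0.13, 0.12, 0.10, 0.12, 0.11, 14000.00],
        "object-storage": [40.0, 41.0, 39.5, 40.2, 40.1, 39.9, 40.0, 41.3],
    }
    for service, baseline, today in find_cost_anomalies(sample):
        print(f"ALERT: {service} spent ${today:,.2f} today vs ~${baseline:,.2f}/day average")

Run against the sample data, this flags only the misconfigured serverless function, which is the kind of early warning the article describes.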
All in all, finops is an increasingly urgent business imperative across industries. Its value is proven continuously by enabling the organization to instantly mitigate unnecessary costs and increase business value.
Mike Eisenstein is the cloud optimization practice lead for Accenture and Dean Oliver is the cloud finops lead for Accenture Technology Strategy & Advisory.
"
|
1,661 | 2,022 |
"Software is finally eating the physical world, and that may save us | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/software-is-finally-eating-the-physical-world-and-that-may-save-us"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Software is finally eating the physical world, and that may save us Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
In 2011, Mosaic browser inventor Marc Andreessen predicted that software was eating the world. He was correct, except that software mostly ate the “digital” world. Now we are seeing it start to eat the physical world, shifting from bits to atoms. We are starting to solve much harder problems.
We’re at a historic inflection point where compute becomes nearly unconstrained. Just as cloud revolutionized delivery of software services in the digital world, so will cloud transform our physical world. We can create digital twins of the most complex physical things, like our planet and our bodies, and change them for the better.
Imagine a world where a model of the Earth allows us to quickly address extinction-level threats like climate change. Imagine a world of medicine where drug development costs are so low that we can receive truly personalized treatments, with therapies targeted to our specific health problem and our individual DNA.
Innovation at the speed of light
Today the pace of innovation advances at the rate of computation. Unconstrained compute promises benefits that we can't yet even imagine. In the physical world, compute was so expensive historically that it required government-level spending and very long-term plans to try and overcome the challenges – like putting a man on the moon.
Now is the time to raise the scope of our ambitions and think much bigger. Human ingenuity has risen to the needs of the moment in the past, and innovation has the potential to solve the technological challenges ahead of us.
What if we pointed unconstrained compute at our planet and our own bodies? What might we accomplish? The concept of digital twins takes us to this future.
We've come a long way in computing
Back in 1964, when the CDC 6600 was the one and only leader in supercomputing, it was humbly equipped with a single processor capable of completing 3 million calculations per second. While this may sound impressive, the modern smartphone is at least a million times faster. Even leading into the 1990s, high-performance computers dished out slower processing speeds than the latest iPhone today.
The cloud changes that. By pooling vast computing resources, researchers are able to reproduce the physical world in limited and costly ways inside data centers, in a manner we called “high performance computing.”
Digital twins: A way to discover and test solutions digitally
As we create digital copies of our physical selves and our planet, we can begin building potential solutions to problems faster while testing them on our digital twins first. This will have profound implications for the quality of our lives – far beyond the changes wrought by software eating the digital world and giving us social media, “likes,” simpler travel and easier banking.
An NFT of a bored digital ape can’t begin to compare to creating new ways to lower CO2 emissions to cool a warming planet.
Today's climate models currently rely on statistical workarounds that can assess the climate at a global scale but make it hard to understand local effects. Increasing the resolution of models will be crucial for predicting the regional impacts of climate change and addressing them directly.
Healthcare shows even more promise.
AI predictions might help us get ahead of health problems
AI can now predict the shape of proteins in the human genome and other organisms when they fold. Predicting protein folding could help researchers more quickly develop drugs, raising hopes that AI will revolutionize healthcare.
Researchers are only beginning to understand how deep learning could accelerate drug discovery, and the interest is so high that a cottage industry of startups specializing in AI-powered drug discovery has emerged.
Digital twins show tremendous promise in making it easier to customize medical treatments to individuals based on their unique genetic makeup, anatomy, behavior, and other factors. The Alan Turing Institute recently called on the medical community to collaborate on scaling shared digital twins.
One of the often-misunderstood facts in software is that marginal costs eventually go to almost zero. People want to live. So drug discovery will be one of the most exciting markets disrupted in this new world. Since it costs on average $2 billion to $4 billion today to bring a new drug to market, large employers like an Amazon or Walmart could go into the pharma business to cut healthcare spending, one of their largest expense line items.
With all this promise, we should also remain vigilant. Who should have this technology? Technology is neutral. In the end, we’re just loosely organized humans. As we enter a world without computing restraints, we’re going to have to figure out ways to respond as best we can to take advantage of this new power.
Joris Poort is CEO at Rescale.
"
|
1,662 | 2,022 |
"How hybrid cloud can be valuable to the retail and ecommerce industries | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/how-hybrid-cloud-can-be-valuable-to-the-retail-and-ecommerce-industries"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How hybrid cloud can be valuable to the retail and ecommerce industries Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Migrating to the cloud is no longer a decision that only forward-thinkers and risk-takers base their business on. It’s common practice. In fact, the cloud migration services market was valued at $119.13 billion in 2020 and is expected to reach $448.34 billion by 2026. Most sectors — including retail and ecommerce — are migrating to the cloud quickly, and for good reasons.
Online shopping grew so fast during 2020 due to the COVID-19 pandemic that the market hit $4 trillion.
69% of Americans have shopped online at least once, and 25% shop online at least once per month. Retail services of all types were forced to embrace digitization to stay in business. Brick-and-mortar shops were no longer preferred — or even an option; the only way to engage with customers was online. While many of our habits have returned to pre-COVID norms, online shopping is here to stay.
For most retailers, their technology strategy is now their primary business strategy. This starts with cloud implementation. Scalability and agility are key benefits in pursuing a public, private or hybrid cloud solution. The unlimited size of the public cloud means businesses can scale capacity and computing power up or down in just minutes — usually critical for processing external shopping traffic. The private model, however, offers a more customized setup, is dedicated to a particular business and can be vital for internal processes. The key is knowing which functions, for your business, should reside in each or in both.
Many ecommerce organizations have found the hybrid cloud solution the best way to stitch together their most essential solutions. While most organizations aren’t structured for change, a hybrid cloud solution allows businesses to scale up and down, only paying for what is used. These benefits are vital to serving customers wherever they are and providing them with an enjoyable online experience.
Here are three ways the cloud can help retail and ecommerce businesses manage, store and analyze their data for real-time insights:
Keeping data secure: Security on the web has always been a concern. Consumers are aware of the risk they're taking — but it's not just the risk they're taking; it's also the risk the company is undertaking. The FBI reported upwards of 4,000 security complaints per day during the pandemic, and another study found the costs of ransomware attacks in the United States increased by $137,000.
Consumers' personal information can be securely shared via cloud-based solutions, reducing the risk for all involved. Data encryption renders most data completely unusable for anyone without a key (see the sketch after this list). Additionally, cloud providers can often stop cyberattacks before they even begin with advanced alerting and security.
Updating inventory in real-time: For most retail establishments, the way to increase market share is to list products on multiple channels. In order to do so, however, it’s important to keep inventory updated. Customers can grow frustrated quickly when promised items are out of stock, so finding ways to minimize this is key. Cloud-based inventory solutions are the only real way to keep a true understanding of inventory — no matter how big or small. With supply-chain issues showing no signs of easing up, this is more important than ever.
Personalizing the customer experience: One of the most underutilized aspects of ecommerce has been data analytics.
Every customer has a digital thumbprint, and by using this information, businesses can help anticipate what customers may want and need. With the rise of cloud data storage options, retailers can fully utilize analytics for predictive purchases and beyond. Amazon is one of the best examples of a retailer doing this—and because the majority of online shoppers have shopped on Amazon, they expect this same type of service with every retail experience.
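To illustrate the encryption point above, here is a minimal, hypothetical Python sketch using the widely available cryptography package's Fernet symmetric encryption. It is not tied to any particular cloud provider's service, and the record fields are invented placeholders; in practice the key would live in a managed key store.

# Minimal sketch of symmetric encryption for a customer record before it is
# stored or shared. Requires the third-party "cryptography" package.
from cryptography.fernet import Fernet

# In production the key would come from a managed key store, not be generated inline.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b'{"customer_id": "12345", "card_last4": "4242"}'  # hypothetical payload
token = cipher.encrypt(record)    # ciphertext is useless without the key
restored = cipher.decrypt(token)  # only key holders can read the data

assert restored == record
print(token[:16], "...")  # opaque bytes, safe to store or transmit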
Research suggests that over 60% of the world's population is online in some fashion, typically spending 40% of their waking life connected to some type of technology. Retailers have to move online if they want to keep up. For example, one ecommerce business suffered a critical outage during the Super Bowl Sunday of retail — Black Friday. When it's game time in the world of ecommerce, it is important to be prepared in case outages occur on a high-volume purchasing day. Hybrid cloud computing offers quick recovery, and all the required data is already stored, eliminating the need for secondary data centers.
A hybrid cloud approach has the resources to scale for massive numbers of customers, regardless of the organization's sector, so creating an agile infrastructure from devops down is vital. Global consumers increased their online spending by $900 billion in 2020 compared to 2019, and with the right cloud strategy, retailers are primed to handle this trend as it continues to grow.
Michael Norring is CEO of GCSIT.
"
|
1,663 | 2,022 |
"3 reasons the centralized cloud is failing your data-driven business | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/3-reasons-the-centralized-cloud-is-failing-your-data-driven-business"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community 3 reasons the centralized cloud is failing your data-driven business Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
I recently heard the phrase, “One second to a human is fine – to a machine, it’s an eternity.” It made me reflect on the profound importance of data speed.
Not just from a philosophical standpoint but a practical one. Users don't much care how far data has to travel, just that it gets there fast. In event processing, the time in which data must be ingested, processed and analyzed is almost imperceptible. Data speed also affects data quality.
Data comes from everywhere. We're already living in a new age of data decentralization, powered by next-gen devices and technology, 5G, computer vision, IoT and AI/ML, not to mention the current geopolitical trends around data privacy. The amount of data generated is enormous, 90% of it being noise, but all that data still has to be analyzed. The data matters, it's geo-distributed, and we must make sense of it.
For businesses to gain valuable insights into their data, they must move on from the cloud-native approach and embrace the new edge native approach. Below, I'll discuss the limitations of the centralized cloud and three reasons it is failing data-driven businesses.
The downside of centralized cloud
In the context of enterprises, data has to meet three criteria: fast, actionable and available. For more and more enterprises that work on a global scale, the centralized cloud cannot meet these demands in a cost-effective way — bringing us to our first reason.
It's too damn expensive
The cloud was designed to collect all the data in one place so that we could do something useful with it. But moving data takes time, energy, and money — time is latency, energy is bandwidth, and the cost is storage, consumption, etc. The world generates nearly 2.5 quintillion bytes of data every single day. Depending on whom you ask, there could be more than 75 billion IoT devices in the world — all generating enormous amounts of data and needing real-time analysis. Aside from the largest enterprises, the rest of the world will essentially be priced out of the centralized cloud.
It can't scale
For the past two decades, the world has adapted to the new data-driven world by building giant data centers. And within these clouds, the database is essentially “overclocked” to run globally across immense distances. The hope is that the current iteration of connected distributed databases and data centers will overcome the laws of space and time and become geo-distributed, multi-master databases.
The trillion-dollar question becomes: How do you coordinate and synchronize data across multiple regions or nodes while maintaining consistency? Without consistency guarantees, apps, devices, and users see different versions of data. That, in turn, leads to unreliable data, data corruption, and data loss. The level of coordination needed in this centralized architecture makes scaling a Herculean task. And only afterward can businesses even consider analysis and insights from this data, assuming it's not already out of date by the time they're finished, bringing us to the next point.
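Before moving on, here is a minimal, illustrative Python sketch of the consistency problem just described. It is not any particular database's replication protocol, just two hypothetical replicas accepting the same writes independently and applying them in different orders.

# Minimal sketch: two replicas accept the same writes without coordination.
# Each applies them in the order it happens to receive them, so they diverge.

def apply_writes(initial, writes):
    state = dict(initial)
    for key, value in writes:
        state[key] = value  # the last write applied locally wins
    return state

writes_seen_by_us_east = [("cart_total", 120), ("cart_total", 95)]
writes_seen_by_eu_west = [("cart_total", 95), ("cart_total", 120)]  # same writes, reordered by latency

replica_a = apply_writes({}, writes_seen_by_us_east)
replica_b = apply_writes({}, writes_seen_by_eu_west)

print(replica_a)  # {'cart_total': 95}
print(replica_b)  # {'cart_total': 120}; users now see different versions of the data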
It's slow
Unbearably slow at times.
For businesses that don't depend on real-time insights for business decisions, and as long as the resources stay within the same data center and the same region, everything scales just as designed. If you have no need for real-time or geo-distribution, you have permission to stop reading. But on a global scale, distance creates latency, latency decreases timeliness, and a lack of timeliness means that businesses aren't acting on the newest data. In areas like IoT, fraud detection and other time-sensitive workloads, hundreds of milliseconds are not acceptable.
One second to a human is fine – to a machine, it’s an eternity.
Edge native is the answer
Edge native, in comparison to cloud native, is built for decentralization.
It is designed to ingest, process, and analyze data closer to where it’s generated. For business use cases requiring real-time insight, edge computing helps businesses get the insight they need from their data without the prohibitive write costs of centralizing data. Additionally, these edge native databases won’t need app designers and architects to re-architect or redesign their applications. Edge native databases provide multi-region data orchestration without requiring specialized knowledge to build these databases.
The value of data for business
Data decays in value if not acted on. When you consider data and move it to a centralized cloud model, it's not hard to see the contradiction. The data becomes less valuable by the time it's transferred and stored, it loses much-needed context by being moved, it can't be modified as quickly because of all the moving from source to central, and by the time you finally act on it — there is already new data in the queue.
The edge is an exciting space for new ideas and breakthrough business models. And, inevitably, every on-prem system vendor will claim to be edge and build more data centers and create more PowerPoint slides about “Now serving the Edge!” — but that’s not how it works. Sure, you can piece together a centralized cloud to make fast data decisions, but it will come at exorbitant costs in the form of writes, storage, and expertise. It’s only a matter of time before global, data-driven businesses won’t be able to afford the cloud.
This global economy requires a new cloud — one that is distributed rather than centralized. The cloud native approaches of yesteryear that worked well in centralized architectures are now a barrier for global, data-driven business. In a world of dispersion and decentralization, companies need to look to the edge.
Chetan Venkatesh is the cofounder and CEO of Macrometa.
"
|
1,664 | 2,022 |
"Top 10 data lake solution vendors in 2022 | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/top-10-data-lake-solution-vendors-in-2022"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Top 10 data lake solution vendors in 2022 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Table of contents
What is a data lake solution?
5 must-have features of a data lake solution
Top 10 data lake solution vendors in 2022
The importance of choosing the right data lake solution vendor
As the world becomes increasingly data-driven, businesses must find suitable solutions to help them achieve their desired outcomes.
Data lake storage has garnered the attention of many organizations that need to store large amounts of unstructured, raw information until it can be used in analytics applications.
The data lake solution market is expected to grow rapidly in the coming years and is driven by vendors that offer cost-effective, scalable solutions for their customers.
Learn more about data lake solutions, what key features they should have and some of the top vendors to consider this year.
What is a data lake solution?
A data lake is defined as a single, centralized repository that can store massive amounts of unstructured and semi-structured information in its native, raw form.
It's common for an organization to store unstructured data in a data lake if it hasn't decided how that information will be used. Some examples of unstructured data include images, documents, videos and audio. These data types are useful in today's advanced machine learning (ML) and advanced analytics applications.
Data lakes differ from data warehouses, which store structured, filtered information for specific purposes in files or folders. Data lakes were created in response to some of the limitations of data warehouses.
For example, data warehouses are expensive and proprietary, cannot handle certain business use cases an organization must address, and may lead to unwanted information homogeneity.
On-premise data lake solutions were commonly used before the widespread adoption of the cloud. Now, it's understood that some of the best hosts for data lakes are cloud-based platforms on the edge because of their inherent scalability and highly modular services.
A 2019 report from the Government Accountability Office (GAO) highlights several business benefits of using the cloud, including better customer service and the acquisition of cost-effective options for IT management services.
Cloud data lakes and on-premise data lakes have pros and cons. Businesses should consider cost, scale and available technical resources to decide which type is best.
Read more about data lakes: What is a data lake? Definition, benefits, architecture and best practices
5 must-have features of a data lake solution
It's critical to understand what features a data lake offers. Most solutions come with the same core components, but each vendor may have specific offerings or unique selling points (USPs) that could influence a business's decision.
Below are five key features every data lake should have:
1. Various interfaces, APIs and endpoints
Data lakes that offer diverse interfaces, APIs and endpoints make it much easier to upload, access and move information. These capabilities are important because they allow unstructured data to serve a wide range of use cases, depending on a business's desired outcome.
2. Support for or connection to processing and analytics layers
ML engineers, data scientists, decision-makers and analysts benefit most from a centralized data lake solution that stores information for easy access and availability. This characteristic can help data professionals and IT managers work with data more seamlessly and efficiently, thus improving productivity and helping companies reach their goals.
3. Robust search and cataloging features
Imagine a data lake with large amounts of information but no sense of organization. A viable data lake solution must incorporate generic organizational methods and search capabilities, which provide the most value for its users. Other features might include key-value storage, tagging, metadata, or tools to classify and collect subsets of information (see the sketch after this list).
4. Security and access control
Security and access control are two must-have features with any digital tool. The current cybersecurity landscape is expanding, making it easier for threat actors to exploit a company's data and cause irreparable damage. Only certain users should have access to a data lake, and the solution must have strong security to protect sensitive information.
5. Flexibility and scalability
More organizations are growing larger and operating at a much faster rate. Data lake solutions must be flexible and scalable to meet the ever-changing needs of modern businesses working with information.
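As a concrete illustration of the tagging and cataloging idea in feature 3, here is a minimal, hypothetical Python sketch that tags objects as they land in an Amazon S3 bucket using the boto3 SDK. The bucket name, keys and tag values are placeholders, and a production catalog would typically live in a dedicated metastore rather than in object tags alone.

# Minimal sketch: attach catalog-style tags to raw files as they are added to a
# data lake bucket, then read the tags back when searching for a dataset.
# Assumes AWS credentials are configured; bucket and key names are hypothetical.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-data-lake-raw"  # placeholder bucket name

def put_raw_object(key, body, source, domain):
    # Object tags act as lightweight catalog metadata (S3 allows up to 10 tags per object).
    s3.put_object(
        Bucket=BUCKET,
        Key=key,
        Body=body,
        Tagging=f"source={source}&domain={domain}",
    )

def get_object_tags(key):
    response = s3.get_object_tagging(Bucket=BUCKET, Key=key)
    return {tag["Key"]: tag["Value"] for tag in response["TagSet"]}

put_raw_object("clickstream/2022/07/01/events.json", b"{}", source="web", domain="marketing")
print(get_object_tags("clickstream/2022/07/01/events.json"))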
Also read: Unlocking analytics with data lake and graph analysis
Top 10 data lake solution vendors in 2022
Some data lake solutions are best suited for businesses in certain industries. In contrast, others may work well for a company of a particular size or with a specific number of employees or customers. This can make choosing a potential data lake solution vendor challenging.
Companies considering investing in a data lake solution this year should check out some of the vendors below.
1. Amazon Web Services (AWS)
The AWS Cloud provides many essential tools and services that allow companies to build a data lake that meets their needs. The AWS data lake solution is widely used, cost-effective and user-friendly. It leverages the security, durability, flexibility and scalability that Amazon S3 object storage offers to its users.
The data lake also uses Amazon DynamoDB to handle and manage metadata. The AWS data lake offers an intuitive, web-based console user interface (UI) to manage the data lake easily. It also lets users set data lake policies, remove or add data packages, create manifests of datasets for analytics purposes, and search data packages.
2. Cloudera
Cloudera is another top data lake vendor that will create and maintain safe, secure storage for all data types. Some of Cloudera SDX's Data Lake Service capabilities include:
Data schema/metadata information
Metadata management and governance
Compliance-ready access auditing
Data access authorization and authentication for improved security
Other benefits of Cloudera's data lake include product support, downloads, community and documentation. GSK and Toyota leveraged Cloudera's data lake to garner critical business intelligence (BI) insights and manage data analytics processes.
3. Databricks
Databricks is another viable vendor, and it also offers a handful of data lake alternatives. The Databricks Lakehouse Platform combines the best elements of data lakes and warehouses to provide reliability, governance, security and performance.
Databricks’ platform helps break down silos that normally separate and complicate data, which frustrates data scientists, ML engineers and other IT professionals. Aside from the platform, Databricks also offers its Delta Lake solution, an open-format storage layer that can improve data lake management processes.
4. Domo
Domo is a cloud-based software company that can provide big data solutions to all companies. Users have the freedom to choose a cloud architecture that works for their business. Domo is an open platform that can augment existing data lakes, whether in the cloud or on-premise. Users can use combined cloud options, including:
Choosing Domo's cloud
Connecting to any cloud data
Selecting a cloud data platform
Domo offers advanced security features, such as BYOK (bring your own key) encryption, data access controls and governance capabilities. Well-known corporations such as Nestle, DHL, Cisco and Comcast leverage the Domo Cloud to better manage their needs.
5. Google Cloud
Google is another big tech player offering customers data lake solutions. Companies can use Google Cloud's data lake to analyze any data securely and cost-effectively. It can handle large volumes of information and IT professionals' various processing tasks. Companies that don't want to rebuild their on-premise data lakes in the cloud can easily lift and shift their information to Google Cloud.
Some key features of Google's data lakes include fully managed Apache Spark and Hadoop migration, integrated data science and analytics, and cost management tools. Major companies like Twitter, Vodafone, Pandora and Metro have benefited from Google Cloud's data lakes.
6. HP Enterprise
Hewlett Packard Enterprise (HPE) is another data lake solution vendor that can help businesses harness the power of their big data. HPE's solution is called GreenLake — it offers organizations a truly scalable, cloud-based solution that simplifies their Hadoop experiences.
HPE GreenLake is an end-to-end solution that includes software, hardware and HPE Pointnext Services. These services can help businesses overcome IT challenges and spend more time on meaningful tasks.
7. IBM
Business technology leader IBM also offers data lake solutions for companies. IBM is well-known for its cloud computing and data analytics solutions. It's a great choice if an operation is looking for a suitable data lake solution. IBM's cloud-based approach operates on three key principles: embedded governance, automated integration and virtualization.
These are some data lake solutions from IBM:
IBM Db2
IBM Db2 BigSQL
IBM Netezza
IBM Watson Query
IBM Watson Knowledge Catalog
IBM Cloud Pak for Data
With so many data lakes available, there's surely one to fit a company's unique needs. Financial services, healthcare and communications businesses often use IBM data lakes for various purposes.
8. Microsoft Azure
Microsoft offers its Azure Data Lake solution, which features easy storage methods, processing, and analytics using various languages and platforms. Azure Data Lake also works with a company's existing IT investments and infrastructure to make IT management seamless.
The Azure Data Lake solution is affordable, comprehensive, secure and supported by Microsoft. Companies benefit from 24/7 support and expertise to help them overcome any big data challenges they may face. Microsoft is a leader in business analytics and tech solutions, making it a popular choice for many organizations.
9. Oracle
Companies can use Oracle's Big Data Service to build data lakes to manage the influx of information needed to power their business decisions. The Big Data Service is automated and will provide users with an affordable and comprehensive Hadoop data lake platform based on Cloudera Enterprise.
This solution can be used as a data lake or an ML platform. Another important feature of Oracle is that it is one of the best open-source data lakes available. It also comes with Oracle-based tools to add even more value. Oracle's Big Data Service is scalable, flexible, secure and will meet data storage requirements at a low cost.
10. Snowflake
Snowflake's data lake solution is secure, reliable and accessible and helps businesses break down silos to improve their strategies. The top features of Snowflake's data lake include a central platform for all information, fast querying and secure collaboration.
Siemens and Devon Energy are two companies that provide testimonials regarding Snowflake’s data lake solutions and offer positive feedback. Another benefit of Snowflake is its extensive partner ecosystem, including AWS, Microsoft Azure, Accenture, Deloitte and Google Cloud.
The importance of choosing the right data lake solution vendor
Companies that spend extra time researching which vendors will offer the best enterprise data lake solutions for them can manage their information better. Rather than choose any vendor, it's best to consider all options available and determine which solutions will meet the specific needs of an organization.
Every business uses information, some more than others. However, the world is becoming highly data-driven — therefore, leveraging the right data solutions will only grow more important in the coming years. This list will help companies decide which data lake solution vendor is right for their operations.
Read next: Get the most value from your data with data lakehouse architecture
"
|
1,665 | 2,022 |
"Rising cloud spending may not signal the end of traditional infrastructure | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/rising-cloud-spending-may-not-signal-the-end-of-traditional-infrastructure"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Rising cloud spending may not signal the end of traditional infrastructure Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
As cloud investment continues to rise , it’s fair to ask if traditional infrastructure has hit its shelf life.
There’s been a mass exodus of mainframe talent from the workforce due to IT professionals aging out, coupled with new generations of entry-level talent raised on an app-based, cloud-driven culture. It’s time to put the old technology out to pasture and commit to a cloud future, right? For some of us, this is an all too common refrain. In the 1980s, Sun Microsystems was going to be the death of the mainframe. PCs and client/server computing would also reportedly be the demise of the mainframe, if not in the 90s then in the early 2000s.
But yet, here we are. While cloud investments have been increasing year-over-year for a decade and investment in data center systems will continue to grow in 2023 by a projected 4.7 percent, according to Gartner, the mainframe lives on.
The continued growth in cloud services reflects organizations' appetite to have greater command of their data. The addition of cloud resources to augment existing systems — rather than replace them altogether — marries cloud with traditional infrastructure for a more hybrid approach.
Data management challenges ensure the mainframe won't die
The biggest challenge facing large businesses is how to get the most value out of their data as it becomes ever more sprawled across multiple systems, as well as in a hybrid cloud environment. Ensuring that data is accessible and secure across multiple environments — legacy, on-premises, and data center applications running in the cloud — is an increasing headache.
For these companies, large on-prem systems are still the glue to mission-critical applications and processes. But the cloud holds tremendous value. Organizations leverage cloud technologies for analytics and other functions, and it’s critical that they are able to integrate them. Doing so securely, seamlessly and with simplicity, while remaining compliant, can be a daunting task.
In a survey of respondents using mainframe technology, 80% of IT professionals said mainframe technology remains critical to business operations. Enterprise organizations have layer upon layer of technology that has accumulated over time, in an intricate web of applications and processes that support their business.
Enterprises must marry the innovations and tools of today’s world with legacy technology. Ripping and replacing existing technology is disruptive to business, putting a drain on both employee and financial resources — neither of which are in great supply.
As enterprises struggle with this new reality, VC-funded startups and smaller companies may think this hybrid approach to infrastructure has no impact on them.
They would be wrong.
Opportunities of hybrid environments
Venture-backed startups are likely never going to have an IBM mainframe. That may come along in a later growth phase — but this hybrid approach presents an opportunity.
Any startup writing an enterprise solution running in the cloud must anticipate the value of that application to their largest customers. So, even if an organization doesn’t use traditional infrastructure, they need to be able to speak the language of the enterprise. This includes facing legacy challenges, modernization and cost challenges associated with creating a hybrid cloud environment where cloud and legacy infrastructure live in harmony.
These cloud-native companies can take a page from the "embrace and extend" playbook, finding ways to welcome the data and integrations of on-premise critical systems into their ecosystems. These hybrid environments are decades from disappearing, and the vendors that can tap the tremendous value baked into the data, processes and efficiencies of existing systems will be best positioned to capture enterprise markets.
I had the opportunity recently to speak with a startup that had created a payments app for the restaurant industry, a terrific concept with founders that really understand the financial side of the restaurant industry. What they didn’t understand, however, was the technology. Most restaurants are still reliant on old-school ERP systems.
It's not just restaurants, either. Dental and medical offices, distributors and financial services firms are all broadly dependent on legacy systems, whether enterprise resource planning (ERP) or customer relationship management (CRM). Startups need experts who sit between the new and old worlds and can translate both. Modern APIs are a wonder, but not if they can't integrate with older legacy systems.
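In practice, that translation layer is often a thin, modern API in front of a legacy database. The sketch below is a hypothetical illustration of the pattern, assuming a FastAPI service and an ODBC connection to an existing ERP database; the connection string, table and column names are invented for the example rather than taken from any real system.

```python
# Hypothetical sketch: exposing one legacy ERP table through a modern REST API.
# Assumes the fastapi and pyodbc packages and a valid ODBC DSN for the legacy
# database; every name below is illustrative.
import pyodbc
from fastapi import FastAPI, HTTPException

app = FastAPI()
LEGACY_DSN = "DSN=legacy_erp;UID=reporting;PWD=********"  # placeholder

@app.get("/invoices/{invoice_id}")
def get_invoice(invoice_id: int):
    # Open a short-lived connection to the legacy system and translate the
    # row into JSON that cloud-native services can consume.
    conn = pyodbc.connect(LEGACY_DSN)
    try:
        row = conn.cursor().execute(
            "SELECT invoice_id, customer, total FROM invoices WHERE invoice_id = ?",
            invoice_id,
        ).fetchone()
    finally:
        conn.close()
    if row is None:
        raise HTTPException(status_code=404, detail="invoice not found")
    return {"invoice_id": row.invoice_id, "customer": row.customer, "total": float(row.total)}
```

The point is not the specific framework but the role: a small service that lets modern, API-first products meet legacy systems where they already are.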
Founders that don’t understand these systems will dazzle in their promise but fail to deliver against real market needs.
Will the day come when traditional infrastructure meets its demise? Never say never, but it will be a long time before cloud technology fully replaces traditional infrastructure. Enterprise organizations will continue to embrace the cloud and the benefits it enables but remain reliant on the core systems that have run their businesses — and that will continue to have far-reaching ramifications across the technology industry.
Chris Wey is President of the Power Systems Business Unit at Rocket Software.
"
|
1,666 | 2,022 |
"Red Hat’s new CEO to focus on Linux growth in the hybrid cloud, AI and the edge | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/red-hats-new-ceo-to-focus-on-linux-growth-in-the-hybrid-cloud-ai-and-the-edge"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Red Hat’s new CEO to focus on Linux growth in the hybrid cloud, AI and the edge Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
When IBM acquired Linux vendor Red Hat for $34 billion in 2019, Paul Cormier took the reins as Red Hat CEO. After three years, Cormier is now handing those reins over to a new leader.
Yesterday, Red Hat announced that long-time engineering leader at the company, Matt Hicks, will now be the company’s president and CEO. Cormier will move to the chairman role, where he will continue to be an active participant in the company’s activities.
In an interview with VentureBeat, both Cormier and Hicks emphasized that now is the right time for a CEO transition.
“When IBM acquired us three years ago, I thought for me personally, I’ve been here 21 years, maybe that was the right time, but there was a lot of unfinished business for me to complete,” Cormier said.
The "unfinished business" was establishing Red Hat as a standalone operating entity within IBM, while continuing to grow the business. As chairman, Cormier will remain active as he helps lead a strategic customer advisory board and looks at potential acquisition opportunities.
During his tenure at Red Hat, Cormier helped oversee over 20 acquisitions and he expects more in the future, as the company continues to build out its application development, security and hybrid cloud capabilities.
Red Hat’s new CEO is technical to the core While Cormier has long had a product focus, Matt Hicks joined Red Hat in 2005 as an engineer.
“I’m a long-term, open source believer,” Hick told VentureBeat. “I got started in my career on Linux.” Hicks rose to prominence at Red Hat in 2012 as the director of OpenShift Engineering. Red Hat acquired a platform as a service vendor called Makara in 2010 and had rebranded the technology as OpenShift. The original Makara code didn’t quite work out, and Red Hat rebuilt and refocused OpenShift as a container and Kubernetes-based system.
OpenShift is now at the core of Red Hat’s overall strategy, to enable hybrid and multicloud application workloads for enterprises.
Hicks explained that Red Hat Enterprise Linux, a product that Cormier helped to bring to market nearly two decades ago, created that foundational platform for enterprises to run applications. With OpenShift, which runs on Red Hat Enterprise Linux, the platform becomes broader, supporting distributed workloads that can run on-premises and across multiple cloud providers.
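That "many machines behaving as one platform" idea is easiest to see at the API level. The sketch below is a generic Kubernetes example (OpenShift builds on Kubernetes) using the official kubernetes Python client; it assumes a configured kubeconfig, and the deployment name and replica count are invented for illustration.

```python
# Generic Kubernetes sketch: the cluster is addressed as a single control
# plane, no matter how many nodes sit behind it. Assumes the official
# `kubernetes` Python client and a working kubeconfig; the "web" deployment
# and replica count are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()                      # read credentials from ~/.kube/config

nodes = client.CoreV1Api().list_node().items
print(f"One API endpoint fronting {len(nodes)} nodes")

# Scaling a workload is a request to the cluster, not to any particular machine;
# the scheduler decides which nodes actually run the pods.
client.AppsV1Api().patch_namespaced_deployment_scale(
    name="web", namespace="default", body={"spec": {"replicas": 5}}
)
```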
“Red Hat Enterprise Linux brings a ton of value for single machines and customers have done an amazing job of building complex architectures with it,” Hicks said. “OpenShift lets you take hundreds or thousands of those machines and make them act as one thing for distributed computing.” While Hicks will now be responsible for the overall performance of Red Hat as the company’s CEO, he is hopeful that he won’t stray far from his engineering roots. Over the last three years he noted that he has taken on an increasing business role as executive vice president for products and technologies, which is a role that Cormier held before he became CEO.
“I’ve got a few years of practice with the shift to really focusing on the business and the nice part is I can fall back on my intuition for engineering for open source because I’ve done that for a long time,” Hicks said.
Red Hat's roadmap leads to the edge and more AI Red Hat faces no shortage of competition across multiple market segments.
In the core Linux market, Red Hat competes against Suse Linux, Canonical and its Ubuntu Linux.
There are also multiple vendors that have Linux distributions based on Red Hat, including Oracle, Rocky Linux and Alma Linux. On the OpenShift side, Red Hat competes against VMware, Docker and Mirantis, among other vendors that all provide Kubernetes container orchestration capabilities.
While Hicks is well aware of the competition, he noted that in his view the challenge is in enabling a wide set of capabilities from the hybrid cloud out to the edge and supporting those technologies for the long term. Hicks pointed to Red Hat's recent announcement with General Motors, which will see Red Hat technologies embedded into cars, as a prime example of his company's value proposition.
“The lifecycle of a car is a really long time and when they look for a partner to collaborate with it’s a 10-year bet for that,” Hicks said. “That’s something we’ve shown we can do in the data center.” Looking forward, a key area of innovation for Red Hat will be in the AI space as organizations of all sizes look to benefit from machine learning.
“We’re investing a lot in the MLops space because our role has always been about how we help you get code from a developer’s fingertips to production,” Hicks said.
"
|
1,667 | 2,022 |
"DeltaStream emerges from stealth to simplify real-time streaming apps | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/deltastream-emerges-from-stealth-to-simplify-real-time-streaming-applications"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages DeltaStream emerges from stealth to simplify real-time streaming apps Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Back in the day when data infrastructure was just picking up, businesses were heavily reliant on batch data processing.
They used to leverage at-rest data, stored across systems over a period of time, to drive insights for improving business outcomes. Now, data volumes have exploded at all levels, driving the need to take action on data as it flows through systems – known as streaming data processing.
In a bid to power use cases such as real-time product recommendations and fraud detection, enterprises around the world are using cloud-based streaming storage services such as Amazon Kinesis and Apache Kafka.
They use these platforms to continuously capture gigabytes of data per second from hundreds of thousands of sources, from which developers build the desired real-time streaming applications – capable of processing and reacting to events with sub-second latency.
While the process sounds simple, implementing it has long been a challenging endeavor. First, a company needs developers highly skilled in distributed systems and data management to build real-time applications. Then those engineers have to provision servers or clusters and work around the clock to ensure not only delivery guarantees, fault tolerance, elasticity and security in the product, but also smooth, reliable 24/7 operation at scale.
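To make that gap concrete, here is a deliberately naive sketch of the kind of consumer loop teams end up writing by hand, using the kafka-python package. The topic name, broker address and event shape are assumptions for illustration; the delivery guarantees, fault tolerance and elastic scaling described above are all still missing and left to whoever operates it.

```python
# A deliberately naive hand-rolled stream processor built on kafka-python.
# Topic, brokers and the JSON event shape are illustrative assumptions; state
# lives in memory, so a crash or restart loses it.
import json
from collections import Counter
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "payments",                                    # assumed topic name
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="latest",
)

totals = Counter()                                 # in-memory aggregate
for message in consumer:
    merchant = message.value.get("merchant", "unknown")
    totals[merchant] += 1
    if totals[merchant] % 1000 == 0:
        print(f"{merchant}: {totals[merchant]} payments seen")
```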
DeltaStream’s serverless database To simplify this aspect, DeltaStream offers a serverless database that manages, secures and processes all the streams – connected via streaming storage platforms – for various use cases. The company, founded by Hojjat Jafarpour, announced it has emerged from stealth with $10 million in seed funding.
"DeltaStream sits above the streaming storage services such as Apache Kafka and enables users to build real-time streaming applications and pipelines in familiar SQL language," Jafarpour told VentureBeat. "The solution is serverless: meaning users just need to focus on building their applications and pipelines, and DeltaStream takes care of running them, complete with scaling up and down, fault tolerance and isolation." With serverless, users can simply assume the resources their applications need will be there, paying only for what is used. Meanwhile, SQL's simplicity ensures users have a familiar way to manage, secure and query their data-in-motion. DeltaStream also organizes the data in schemas and databases, and provides role-based access controls for restricting who can access the flowing information and what they can do with it.
“DeltaStream’s model of providing the compute layer on top of users’ streaming storage systems … eliminates the need for data duplication and doesn’t add unnecessary latency to real-time applications and pipelines,” the company said in its blog post.
Competitors Other offerings that tackle the same challenge are Confluent's ksqlDB, Azure Stream Analytics and GCP DataFlow. However, according to Jafarpour, these are all restricted to certain streaming-storage platforms. In contrast, DeltaStream is platform-agnostic and can work with major streaming data stores like Apache Kafka, AWS Kinesis and Apache Pulsar.
“Also, in addition to processing, DeltaStream enables users to organize and secure their streaming data similar to the relational databases, which the other systems don’t,” the CEO added.
Currently, a limited set of customers on AWS has access to DeltaStream in private beta. The company plans to use the seed round, led by New Enterprise Associates (NEA), to build on the core offering before heading toward general availability. Jafarpour did not share an exact timeline, but confirmed plans to expand the product to GCP and Azure soon.
Once available, DeltaStream could be accessed through a REST API, CLI application or a web app.
"
|
1,668 | 2,022 |
"Datadog strengthens API observability with Seekret acquisition | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/datadog-strengthens-api-observability-with-seekret-acquisition"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Datadog strengthens API observability with Seekret acquisition Share on Facebook Share on X Share on LinkedIn Datadog Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
New York-based Datadog, which provides a security-focused cloud monitoring platform for enterprise applications and infrastructure, has announced the acquisition of Seekret – an Israeli company specializing in API (application programming interface) observability. The terms of the deal were not disclosed.
Over the years, APIs have evolved into a core part of modern cloud applications. They sit as an intermediary and enable applications to talk to servers or other applications, building experiences and capabilities (like Google Maps’ support on Uber) that would otherwise be difficult to build.
Even today, when it comes to managing and monitoring a growing number of APIs, many organizations struggle with technical challenges. Their manual monitoring methods often fail to keep up with the explosive growth. This can result in an inaccurate understanding of interdependencies, health, availability and security of APIs and how they affect the experiences of application users.
Seekret’s automated solution With this acquisition, Datadog is looking to finally resolve this problem. According to the company, the Israeli organization provides a platform that automates the complete discovery process of both private and public APIs, visualizes dependencies between them and keeps up-to-date documentation at the rate of change. This gives customers an easy way to discover and manage APIs across their environments.
While Datadog has not shared specific details of the plan, it has confirmed that the company will bring key capabilities from Seekret into its security and observability offerings. Beyond this, it also plans to use the deal to develop features that could help developers and operations and security engineers better manage the health, availability and security of their APIs.
Since 2010, Datadog has been offering developers and security teams tools to monitor everything in their stack, aggregating metrics and events across their servers, apps, and databases and presenting the data in a single unified view. API observability has also been a part of the effort. However, beyond API implementations that were explicitly developer-instrumented, API monitoring with Datadog has largely been limited to synthetic tests – the initiation of pre-programmed requests or transactions to API endpoints to measure availability and performance.
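A synthetic test of that sort is conceptually simple. The sketch below is a minimal, hypothetical availability-and-latency probe using the requests library against a placeholder endpoint; a real monitor would run it on a schedule from multiple locations and ship the results to a backend.

```python
# Minimal synthetic API check: issue a pre-programmed request, record
# availability, status code and latency. The endpoint URL, timeout and
# threshold are placeholders for illustration.
import time
import requests

ENDPOINT = "https://api.example.com/v1/health"    # placeholder endpoint

def synthetic_check(timeout_s: float = 2.0) -> dict:
    start = time.monotonic()
    try:
        response = requests.get(ENDPOINT, timeout=timeout_s)
        latency_ms = (time.monotonic() - start) * 1000
        return {
            "available": response.status_code == 200,
            "status_code": response.status_code,
            "latency_ms": round(latency_ms, 1),
        }
    except requests.RequestException as exc:
        return {"available": False, "error": str(exc)}

print(synthetic_check())
```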
“Although synthetic monitoring can be valuable, the addition of Seekret offers a mechanism to non-invasively collect the equivalent of real-user monitoring (RUM) [in APM parlance] telemetry for APIs and substantially broadens API monitoring to include health, performance and security use cases that were previously unavailable,” Gregg Siegfried, VP, analyst on the IT monitoring team within Gartner, told Venturebeat.
“In addition, the product also includes some developer-friendly capabilities that line up well with Datadog’s CI visibility product,” he added.
Previous deals This is not the first time Datadog has moved to improve its offering through an acquisition. A few months ago, the company acquired a real-time collaboration tool called CoScreen.
Last year it also acquired a live-debugging solution (Ozcode), a cybersecurity startup (Sqreen) that helps developers monitor and protect their web apps from vulnerabilities and attacks, and the developer of a tool for building observability pipelines.
In all, the total number of acquisitions made by Datadog (including Seekret) stands at ten. It competes with players like Dynatrace and Splunk.
“Other products compete with portions of the Datadog platform – some examples are Dynatrace in application and infrastructure monitoring, Splunk in log, application and infrastructure monitoring, Sysdig in container infrastructure and security monitoring and the hyperscale cloud providers with respect to workloads running in their respective clouds. In a crowded space like modern monitoring, there is something for everyone. Other solutions may be faster, affordable or more suitable for specific use cases. Still, it is a challenge to find another monitoring platform that matches Datadog’s breadth,” Siegfried emphasized.
"
|
1,669 | 2,022 |
"Data chess game: Databricks, MongoDB and Snowflake make moves for the enterprise, part 2 | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/data-game-databricks-mongodb-and-snowflake-make-plays-for-the-enterprise-part-2"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Data chess game: Databricks, MongoDB and Snowflake make moves for the enterprise, part 2 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
This is the second of a two-part series. Read part 1 dissecting how Databricks and Snowflake are approaching head-to-head competition.
As we noted yesterday, June was quite a month by post-lockdown standards, as back to back, MongoDB, Snowflake and Databricks each held their annual events in rapid succession. Historically, each of these vendors might have crossed paths in the same enterprises, but typically with different constituencies. So, they didn't directly compete against each other.
Recent declines in financial markets notwithstanding, each of these companies are considered among the hottest growth players on the cloud data platform side, with valuations (private or market) ranging into the tens of billions of dollars. While Databricks is still private, MongoDB and Snowflake have their IPOs well behind them.
Player positions They are each positioning themselves as default destination platforms for the enterprise. Databricks and Snowflake at this point are on each other’s competitive radars and yesterday, we gave our take on the chess game that they are playing. In this installment, we look at what each player must do to appeal to the broader enterprise. While there are differences in target markets, especially with MongoDB , there is a common thread for all three: to grow further, they are going to have to spread beyond their comfort zones.
So, what are those comfort zones? Databricks and Snowflake come from different parts of the analytics worlds, while MongoDB has focused on operational use cases. Historically, they each appealed to different audiences. Databricks to data engineers and data scientists, Snowflake to business and data analysts, and MongoDB to app developers.
But recent moves from all three providers are starting to breach those silos. Let’s start with deployment. Of the three, MongoDB is the only one with on-premises presence (the other two are cloud pure plays), but barely five years into its Atlas cloud database service, the company’s revenues are now mostly cloud-driven. While MongoDB will likely never be a cloud pure play, the cloud is distinctly driving its future.
Next is operations. With Snowflake adding a lightweight transaction processing engine and MongoDB making early moves to start addressing analytics beyond visualization, we were prompted to ask a few weeks back whether they are on a collision course.
Our take? In the short term, they are still in separate universes, but in the long run, never say never.
As for analytics, we noted yesterday, Databricks and Snowflake are more vocal about expanding into each other’s turfs.
Nonetheless, while MongoDB remains the most vocal about sticking to its knitting as an operational database, beneath the surface it’s making the first moves to come to terms with the relational database folks and dip its toes into analytics.
The starting points Let’s look at the messages coming out of each of the summits last month. MongoDB’s was about doubling down on developers. In CTO Mark Porter’s keynote , he spoke of the mounting volume of new applications that would be coming forth over the next few years and, with it, the need for expedient approaches enabling developers to overcome the hurdles to getting apps into production. At Snowflake, it was all about reinforcing the “data cloud” as a destination by expanding its reach, both into transaction processing and machine learning. And for Databricks, it was all about benchmarks, governance and lineage capabilities showing that the data lakehouse is ready for prime time and capitalizing on their open-source strategy.
The starting points for each player places their ambitions into perspective. MongoDB’s official mission is enabling businesses to operate as “software companies.” That reflects the fact that MongoDB’s constituency has traditionally been software developers, and that they must be able to be productive if their organizations are to operate at software company velocity. A recurring message of that strategy is that traditional databases have proven to be hurdles, owing to the rigid nature of relational schema and the inability to scale them out.
For Snowflake, it is about targeting business and data analysts who rely on data warehouses with a cloud-native reinvention tackling the barriers of ease of use, scaling and data sharing.
And for Databricks, it is about harnessing the breadth and scale of the data lake with a soup-to-nuts development and execution environment powered by Apache Spark, Photon and Delta Lake.
The next steps This is where getting outside the comfort zone becomes critical. Let’s examine each provider individually.
MongoDB For MongoDB, it’s not just about app developers, but also the database folks, as we outlined in our piece last month. For MongoDB to become the default operational data platform for new applications, it must go beyond being a developer company to also becoming a data company.
MongoDB has made some early moves in this direction, such as upping its security game and writing a bona fide SQL query engine. The company needs to make deeper cultural shifts, such as pivoting away from the message denigrating SQL and obsolete database practices. MongoDB responds that relational database developers should also pivot, or at least accept the fact that the document model doesn’t mean walking away from the skillsets and disciplines that they’ve developed. The MongoDB platform does support schema validation. But schema tends to be variable in most MongoDB implementations, so we would like to see more focused efforts in the future for developing data lineage capabilities that could track schema evolution.
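Schema validation in MongoDB is opt-in and applied per collection, which is part of why schemas tend to stay variable in practice. A minimal sketch with pymongo, using an invented collection and rules, might look like this:

```python
# Minimal sketch of MongoDB's opt-in schema validation via pymongo. The
# connection string, database/collection names and field rules are
# illustrative assumptions.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # placeholder URI
db = client["crm"]

db.create_collection(
    "customers",
    validator={
        "$jsonSchema": {
            "bsonType": "object",
            "required": ["email", "created_at"],
            "properties": {
                "email": {"bsonType": "string"},
                "created_at": {"bsonType": "date"},
            },
        }
    },
    validationAction="error",   # reject inserts that fail the schema
)
```

Tracking how such schemas evolve over time is the lineage gap flagged above.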
Either way, our message to MongoDB remains: Don’t alienate a key constituency (SQL database developers) that you will need to extend your enterprise footprint. We would like to see more positive outreach in the future.
Snowflake For Snowflake, it’s convincing data scientists that Snowpark should be an effective execution environment for their models. The company has a new partnership with Anaconda, which curates Python libraries, to optimize them for execution in Snowpark. But doubters remain; for instance, H2O.ai contends that it is more efficient to bite the bullet and run machine learning models in their clusters that can multithread processes, then feed results back to Snowflake.
Since introducing Snowpark a couple of years ago, Snowflake has improved its ability to optimally scale resources for user-defined functions (UDFs) written in languages such as Java or Python.
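For a sense of what that UDF path looks like, here is a minimal sketch using the Snowpark Python API (the snowflake-snowpark-python package). The connection parameters, function and table names are assumptions, and actual performance will depend on how the underlying warehouse is sized and scheduled.

```python
# Minimal sketch of registering a Python UDF with Snowpark so it runs inside
# Snowflake as a SQL function. Connection parameters and object names are
# illustrative placeholders.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import udf
from snowflake.snowpark.types import FloatType

connection_parameters = {
    "account": "<account>", "user": "<user>", "password": "<password>",
    "warehouse": "<warehouse>", "database": "<db>", "schema": "<schema>",
}
session = Session.builder.configs(connection_parameters).create()

@udf(name="normalize_amount", return_type=FloatType(),
     input_types=[FloatType()], replace=True)
def normalize_amount(amount_cents: float) -> float:
    # Executes inside Snowflake, so the data never leaves the platform.
    return round(amount_cents / 100.0, 2)

# Once registered, the UDF is callable from plain SQL (table name assumed).
session.sql("SELECT normalize_amount(amount_cents) FROM payments LIMIT 10").show()
```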
Of course, the recent announcement of Unistore places operational analytics within Snowflake’s sights. However, we don’t view this as a vast land grab for a new constituency as the company is not going after the SQL Servers, Oracles or MongoDBs of the world.
Databricks For Databricks, it’s about making the data lakehouse more business- and database analyst-friendly. These folks work with data modeling and BI tools, not notebooks; there needs to be another entry path providing a view that makes Delta Lake look more like a data warehouse.
And business analysts expect consistent performance for both interactive queries and batch reporting. The TPC-DS benchmarks are designed around analytics/decision support workloads, but as with EPA gas mileage ratings, your results will vary. Significantly, the next stage for Photon is reducing latencies under more typical query conditions, along with broadening support of table and file formats beyond Delta Lake and Parquet, respectively.
Bringing it all together for a game-winning strategy The common thread is that, coming from different starting points, each provider must connect to new constituencies. The key won’t be technology alone, but culture and structuring of the core business. Go-to-market, field and support teams must be recruited who can talk to the different constituencies. Debates over purity must go out the door.
Can MongoDB talk to relational database people as well as developers? Will Snowflake talk the language of data scientists, and can Databricks cultivate the BI crowd? These are not talking points that you’ll see on a press release.
"
|
1,670 | 2,022 |
"Data chess game: Databricks vs. Snowflake, part 1 | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/data-chess-game-databricks-vs-snowflake-part-1"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Data chess game: Databricks vs. Snowflake, part 1 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
This is the first of a two-part series. Read part 2 , which looks at Databricks, MongoDB and Snowflake are making moves for the enterprise Editor’s note : A previous version of this article incorrectly stated that Databricks, unlike Snowflake, “runs within a single region and cloud, as the Databricks service does not currently have cross-region or cross-cloud replication features.” This statement has been removed.
June was quite a month by post-lockdown standards. Not only did live events return with a vengeance after a couple years of endless Zoom marathons, but the start of summer saw a confluence of events from arguably the data world’s hottest trio: in sequential order, MongoDB , Snowflake and Databricks.
There may be stark and subtle differences in each of their trajectories, but the common thread is that each is aspiring to become the next-generation default enterprise cloud data platform (CDP). And that sets up the next act for all three: Each of them will have to reach outside their core constituencies to broaden their enterprise appeal.
Because we've got a lot to say from our June trip report with the trio of data hotshots, we're going to split our analysis into two parts. Today, we'll focus on the chess game between Databricks and Snowflake.
Tomorrow, in part 2, we’ll make the case for why all three companies must step outside their comfort zones if they are to become the next-generation go-to data platforms for the enterprise.
The data lakehouse sets the agenda We noted that with analytics and transaction processing, respectively, MongoDB and Snowflake may eventually be on a collision course.
But for now, it’s all about the forthcoming battle for hearts and minds in analytics between Databricks and Snowflake, and that’s where we’ll confine our discussion here.
The grand context is the convergence of data warehouse and data lake.
About five years ago, Databricks coined the term "data lakehouse," which subsequently touched a nerve. Almost everyone in the data world, from Oracle, Teradata, Cloudera, Talend, Google, HPE, Fivetran, AWS and Dremio to even Snowflake, has had to chime in with a response. Databricks and Snowflake came from the data lake and data warehousing worlds, respectively, and both are now running into each other with the lakehouse. They're not the only ones, but both arguably have the fastest growing bases.
The lakehouse is simply the means to the end for both Databricks and Snowflake as they seek to become the data and analytics destination for the enterprise.
To oversimplify, Snowflake invites the Databricks crowd with Snowpark, as long as they are willing to have their Java, Python or Scala routines execute as SQL functions. The key to Snowpark is that data scientists and engineers don’t have to change their code.
Meanwhile, Databricks is inviting the Snowflake crowd with a new SQL query engine that’s far more functional and performant than the original Spark SQL. Ironically, in these scuffles, Spark is currently on the sidelines: Snowpark doesn’t (yet) support Spark execution, while the new Databricks SQL, built on the Photon query engine, doesn’t use Spark.
The trick question for both companies is how to draw the Python programmer. For Snowflake, the question is whether user-defined functions (UDFs) are the most performant path, and here, the company is investing in Anaconda , which is optimizing its libraries to run in Snowpark. Databricks faces the same question, given that Spark was written in Scala, which has traditionally had the performance edge. But with Python, the differences may be narrowing.
We believe that Snowflake will eventually add capability for native execution in-database of Python and perhaps Spark workloads, but that will require significant engineering and won’t happen overnight.
Meanwhile, Databricks is rounding out the data lakehouse, broadening the capabilities of its new query engine while adding a Unity Catalog as the foundation for governance, with fine-grained access controls, data lineage and auditing, and leveraging partner integrations for advanced governance and policy management. Andrew Brust provided the deep dive on the new capabilities for Delta Lake and related projects such as Project Lightspeed in his coverage of the Databricks event last month.
Who's more open, and does it matter? Databricks and Snowflake also differ on open source. This can be a subjective concept, which we've documented here, here, here, here and here, and we're not about to revisit the debate again. Been there, done that.
Suffice it to say that Databricks claims that it’s far more open than Snowflake, given its roots with the Apache Spark project. It points to enterprises that run Presto, Trino, DIY Apache Spark or commercial data warehouses directly on Delta without paying Databricks. And it extends the same argument to data sharing, as we’ll note below. To settle the argument on openness, Databricks announced that remaining features of Delta Lake are now open source.
Meanwhile, Snowflake makes no apologies for adhering to the traditional proprietary mode, as it maintains that's the most effective way to make its cloud platform performant. But Snowpark's APIs are open to all comers, and if you don't want to store data in Snowflake tables, it's just opened support for Parquet files managed by open-source Apache Iceberg as the data lake table format. Of course, that leads to more debates as to which open-source data lake table storage is the most open: Delta Lake or Iceberg (OK, don't forget Apache Hudi). Here's an outside opinion, even if it isn't truly unbiased.
Databricks makes open source a key part of its differentiation. But excluding companies like Percona (which makes its business delivering support for open source), it’s rare for any platform to be 100% open source. And for Databricks, features such as its notebooks and the Photon engine powering Databricks SQL are strictly proprietary. As if there’s anything wrong with that.
Now the hand-to-hand combat Data warehouses have been known for delivering predictable performance, while data lakes are known for their capability to scale and support polyglot data and the ability to run deep, exploratory analytics and complex modeling. The data lakehouse, a concept introduced by Databricks nearly five years ago, is intended to deliver the best of both worlds, and to its credit, the term has been adopted by much of the rest of the industry. The operable question is, can data lakehouses deliver the consistent SLAs produced by data warehouses? That’s the context behind Databricks’ promotion of Delta Lake, which adds a table structure to data stored in open-source Parquet files.
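For concreteness, this is roughly what "adding a table structure to Parquet files" looks like from PySpark. It is a minimal sketch that assumes a Spark session already configured with the open-source Delta Lake (delta-spark) package; the path and columns are invented for the example.

```python
# Minimal PySpark sketch of Delta Lake's table structure over Parquet files.
# Assumes a Spark session configured with the delta-spark package; the path
# and columns are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-sketch").getOrCreate()

events = spark.createDataFrame(
    [("2022-06-01", "login", 1), ("2022-06-01", "purchase", 3)],
    ["day", "event_type", "count"],
)

# The data files written here are ordinary Parquet; the _delta_log directory
# written alongside them is what turns the directory into a transactional table.
events.write.format("delta").mode("overwrite").save("/tmp/events_delta")

# Readers address it as a table rather than a pile of files.
spark.read.format("delta").load("/tmp/events_delta") \
    .groupBy("event_type").sum("count").show()
```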
That set the stage for Databricks’ TPC-DS benchmarks last fall , which Andrew Brust put in perspective , and of course, Snowflake responded.
At the conference, Databricks CEO Ali Ghodsi updated the results. Watching him extoll the competitive benchmarks vs. Snowflake rekindled cozy recollections of Larry Ellison unloading on Amazon Redshift with Autonomous Database. We typically take benchmarks with grains of salt, so we won’t dwell on exact numbers here. Suffice it to say that Databricks claims superior price performance over Snowflake by orders of magnitude when accessing Parquet files. Of course, whether this reflects configurations representative for BI workloads is a matter for the experts to debate.
What’s interesting is that Databricks showed that it wasn’t religiously tied to Spark. Actually, here’s a fun fact: We learned that roughly 30% of workloads run on Databricks are not Spark.
For instance, the newly released Photon query engine is a complete rewrite, rather than an enhancement of Spark SQL. Here, Databricks replaced the Java code, JVM constructs and the Spark execution engine with the proven C++ used by all the household names. C++ is far more stripped down than Java and the JVM and is far more efficient with managing memory. The old is new again.
Sharing data, spreading the footprint This is an area where Snowflake sets the agenda. It introduced the modern concept of data sharing in the cloud roughly five years ago with the data sharehouse , which was premised on internal line organizations sharing access and analytics on the same body of data without having to move it.
The idea was a win-win for Snowflake because it provided a way to expand its footprint within its customer base, and since the bulk of Snowflake’s revenue comes from compute, not storage, more sharing of data means more usage and more compute. Subsequently, the hyperscalers hopped on the bandwagon, adding datasets to their marketplaces.
Fast forward to the present and data sharing is behind Snowflake’s pivot from cloud data warehouse to data cloud. Specifically, Snowflake cloud should be your organization’s destination for analytics. A key draw of Snowflake data sharing is that, if the data is within the same region of the same cloud, it doesn’t have to move or be replicated. Instead, data sharing is about the granting of permissions. The flip side is that Snowflake’s internal and external data sharing can extend across cloud regions and different clouds, as it does support the necessary replication.
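In practice, that "granting of permissions" is a handful of SQL statements run in the provider's account. The sketch below, issued through the Snowflake Python connector, is illustrative only: the credentials, database, share and consumer account names are placeholders, not a recipe from Snowflake's documentation.

```python
# Illustrative sketch of Snowflake data sharing as permission grants, executed
# via the Snowflake Python connector. All identifiers are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<provider_account>", user="<user>", password="<password>"
)
statements = [
    "CREATE SHARE sales_share",
    "GRANT USAGE ON DATABASE sales_db TO SHARE sales_share",
    "GRANT USAGE ON SCHEMA sales_db.public TO SHARE sales_share",
    "GRANT SELECT ON TABLE sales_db.public.orders TO SHARE sales_share",
    # Same-region, same-cloud consumers then query the data in place; nothing
    # is copied unless cross-region or cross-cloud replication is required.
    "ALTER SHARE sales_share ADD ACCOUNTS = <consumer_account>",
]
cur = conn.cursor()
for stmt in statements:
    cur.execute(stmt)
conn.close()
```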
The latest update to Snowflake Data Marketplace, which is now renamed Snowflake Marketplace, is that data providers can monetize their data and, in a new addition, their UDFs via a Native Application Framework, which certifies that those routines will run within Snowpark. They can sell access to the data and native apps sitting in Snowflake without having to pay any commission to Snowflake. The key is that this must happen within the Snowflake walled garden, as the marketplace only covers data and apps residing in Snowflake.
Last month, Databricks came out with its answer, announcing the opening of internal and external data marketplaces. The marketplace goes beyond datasets to include models, notebooks and other artifacts. One of the features of Databricks marketplace is data cleanrooms , in which providers maintain full control over which parties can perform what analysis on their data without exposing any sensitive data such as personally identifiable information (PII), a capability that Snowflake already had.
There are several basic differences between the Snowflake and Databricks marketplaces, reflecting policy and stage of development. The policy difference is about monetization, a capability that Snowflake just added while Databricks purposely refrained. Databricks’ view is that data providers will not likely share data via disintermediated credit card transactions, but will instead rely on direct agreements between providers and consumers.
The hands-off policy by Databricks to data and artifacts in its marketplace extends to the admission fee, or more specifically, the lack of one. Databricks says that providers and consumers in its marketplace don’t have to be Databricks subscribers.
Until recently, Databricks and Snowflake didn’t really run into each other as they targeted different audiences: Databricks focusing on data engineers and data scientists developing models and data transformations, working through notebooks, while Snowflake appealed to business and data analysts through ETL and BI tools for query, visualization and reporting. This is another case of the sheer scale of compute and storage in the cloud eroding technology barriers between data lakes and data warehousing, and with it, the barriers between different constituencies.
Tomorrow, we’ll look at the other side of the equation. Databricks and Snowflake are fashioning themselves into data destinations, as is MongoDB. They are each hot-growth database companies, and they will each have to venture outside their comfort zones to get there.
Stay tuned.
This is the first of a two-part series. Tomorrow’s post will outline the next moves that Databricks, MongoDB and Snowflake should take to appeal to the broader enterprise.
"
|
1,671 | 2,022 |
"How Capital One improves visibility into Snowflake costs | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/capital-one-improves-visibility-for-snowflake-costs"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How Capital One improves visibility into Snowflake costs Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
[Ed. note: Corrected 8/3/22 from “It also helped reduce Snowflake costs by 27%, improve the cost per query by 45% and save over 55,000 hours of work.” to say “It also helped reduce Snowflake costs by 27%, improve the cost per query by 45% and save over 50,000 hours of work.”] Banking giant Capital One has packaged its extensive work to migrate to the cloud into a new software business called Capital One Software. Its first product, Slingshot, helps enterprises manage cloud costs and automate governance on top of the Snowflake platform.
This is big news for both companies. For Snowflake, this demonstrates the ability of highly regulated industries to go all in on cloud data migration. For Capital One, it reflects a way to monetize its extensive work in data management to unlock lucrative new opportunities in the technology industry.
The new service helps manage costs through intelligent cost savings recommendations. It also improves insight and visibility into Snowflake costs and automated governance using custom workflows, dynamic warehouse provisioning and self-service capabilities. These capabilities were built by necessity as part of Capital One's early transition to the cloud.
All in on cloud data Capital One was one of the first banks to go all in on the AWS cloud in 2015. Salim Syed, head of engineering at Capital One Software, told VentureBeat they believed the cloud could help scale the number of queries its 6,000 business analysts could run concurrently and make it easier to share data across the business.
In addition, the cloud promised a way to enable workload isolation so queries from one team of analysts did not impact others. This had been a big challenge with their on-premise database, particularly at busy times of the month.
The cloud also suggested a way to scale and change directions quickly when needed. “We were betting on the fact that the cloud would become this place where apps could be developed quickly,” Syed explained.
In 2017 Capital One began migrating all its data to Snowflake, which helped automate many aspects of data management and sharing. However, this introduced new data management challenges. The platform’s simplicity allowed analysts to develop new analytics as fast as they could combine datasets. But they soon discovered that many of these queries were inefficient and often resulted in provisioning excess cloud infrastructure, which increased costs.
“The challenge was how to make sure you are using Snowflake as efficiently as possible,” Syed said. So, his team worked on a new management tier that helped create and enforce best practices across Capital One.
Developing a new model They found that the old model of trying to centralize analytics was not feasible, since central teams don’t have the expertise and domain knowledge to process data across business units. So, they developed self-service tools that empowered business units, with data management, cost controls, and automated governance baked in. This allowed them to onboard almost 450 new use cases since migrating from the on-premise data infrastructure.
It also helped reduce Snowflake costs by 27%, improve the cost per query by 45% and save over 50,000 hours of work. For example, it can alert analysts when a query is inefficient so they can turn it off and create a more efficient one. It also helps ensure that teams don’t over provision data warehouses. For example, they found that development and testing teams often previously provisioned much larger instances than required.
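That kind of alerting does not require anything exotic, because Snowflake already exposes query metadata that a governance layer can watch. The sketch below is a hypothetical check, not Slingshot's implementation, against the SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY view, with an arbitrary latency threshold and placeholder credentials.

```python
# Hypothetical query-efficiency check against Snowflake's ACCOUNT_USAGE views.
# This only illustrates the idea of flagging long-running queries; it is not
# how Capital One's Slingshot works. Credentials and thresholds are placeholders.
import snowflake.connector

SLOW_QUERY_MS = 5 * 60 * 1000    # arbitrary 5-minute threshold

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>"
)
cur = conn.cursor()
cur.execute(
    """
    SELECT query_id, user_name, warehouse_name, total_elapsed_time
    FROM snowflake.account_usage.query_history
    WHERE start_time > DATEADD('day', -1, CURRENT_TIMESTAMP())
      AND total_elapsed_time > %s
    ORDER BY total_elapsed_time DESC
    LIMIT 20
    """,
    (SLOW_QUERY_MS,),
)
for query_id, user_name, warehouse_name, elapsed_ms in cur.fetchall():
    print(f"review {query_id}: {elapsed_ms / 1000:.0f}s on {warehouse_name} by {user_name}")
conn.close()
```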
Governance as a service In the early days of the cloud, many regulated industries were concerned with some governance challenges of moving their processes off their internal servers. Capital One’s success suggests that even some of the largest and most regulated businesses can succeed in going all in on the cloud.
Nick Kramer, leader of applied solutions at SSA & Company, a global consulting firm advising companies on strategic execution, told VentureBeat, “Any Snowflake customer would have to be intrigued by Capital One’s application, particularly for governance.” All banks face stringent regulatory requirements and comprehensive, measurable governance is a cost of doing business. He argues that Capital One’s experience in ensuring governance combined with high data volumes, velocity and privacy, presents a compelling data management value proposition, particularly for medium-sized businesses without the resources to build comparable capabilities.
“The most promising features we see are the focus on and deep partnership with Snowflake, the modular building blocks for customization and the governance workflow,” Kramer said. This combination should enable Slingshot to accelerate feature development in response to customer feedback, underpinned with governance-enabled data quality and usability.
Easing last mile analytics The second key aspect of this partnership demonstrates how Snowflake is building an ecosystem around its core competency in cloud data management. Priya Iragavarapu, a VP in global management consultancy AArete's Center of Data Excellence, pointed to Snowflake's lead in massively parallel processing, intelligent indexing and expedited querying as key differentiators.
This enables fast analytics, but leaves it up to each enterprise to figure out how to implement these. Iragavarapu said the analytics user journey is disparate since it involves connecting to the right data source, submitting queries via the Snowflake user interface (UI) and then extracting the data to visualize in more powerful ways using tools like Tableau or Power BI.
Capital One simplified this process, eliminating the need to switch apps by incorporating content management, project management and communication within a single tool. Last mile analytics refers to the enhancements required to complete the final leg of the analytics journey. This is where Iragavarapu sees the most inefficiencies, because of the need for customizations, limitations on automation and the need to cater to varied, complex team dynamics and communications.
This partnership between Capital One and Snowflake addresses this gap.
“The analytics user journey will never be the same again,” she declared.
A model for the banking transformation Eventually, this suggests a way for banks to not only innovate, but also gain a leg up on new financial technology startups nipping at their heels.
Ronak Doshi, partner at Everest Group, an advisory firm, said, “This FinTech revolution is putting technology as the growth driver for banks and the lead steers in the market like Capital One are now in an advantaged position to take these early bets and monetize them.” Doshi observed that FinTech startups have market valuations of 20-30 times their revenue, which is three to four times higher than large banks. The pivot to a technology organization providing financial services will drive valuation for Capital One as they scale their software business. It could also provide an on-ramp to the payment and banking service business for Capital One.
Banks and payment companies are embedding their financial services and products, using APIs, into the leading software-as-a-service (SaaS) products.
“The next stage is banks offering these SaaS solutions and providing the banking and payments services APIs out of the box, creating stickiness with consumer relationship and owning the end-to-end experience,” Doshi said. “We see this future of banking as open finance or embedded finance to be a massive growth driver for the industry, and lead steers that have invested in these software businesses will have the capability to capture this opportunity.” It also suggests how Snowflake will continue to grow its lead in the data management space. Iragavarapu said, “As long as vendors are trying to address a particular pain point using data, there is no need for them to re-invent the wheel and can effectively leverage the framework Snowflake already has.” The one caveat is that Snowflake is focused on online analytic processing and is not optimized for transactional application needs. She believes enterprises will turn to other databases for transactions depending on the size, structure of data and need for aggregation.
"
|
1,672 | 2,022 |
"A deep dive into Capital One's cloud and data strategy wins | VentureBeat"
|
"https://venturebeat.com/cloud/a-deep-dive-into-capital-ones-cloud-and-data-strategy-wins"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Event A deep dive into Capital One’s cloud and data strategy wins Share on Facebook Share on X Share on LinkedIn As part of Data Week for VB Transform 2022 , Patrick Barch, senior director of product management at Capital One Software, took to the stage to explain why operationalizing data mesh is critical for operating in the cloud. Then, on the second day of The Data Week, he sat down for a chat with Matt Marshall, CEO of VentureBeat, to dive into the governance piece of cloud strategy, and why a holistic approach is key for managing the influx of data in a new environment.
About six years ago, Capital One went all in on the public cloud. The company shut down its owned and operated data centers, and dove into modernizing the data ecosystem for machine learning.
“How do you manage something like that?” Barch asked rhetorically. “And by the way, you have to get this right, because — pick your phrase. Data is the new oil. Data is the new gold. At Capital One, we say data is the air we breathe. Companies recognize that the key to success in today’s tech-driven landscape is making use of their data. So, no pressure.” Moving to the cloud means more data, from more sources, stored in more places — and a whole company of users demanding self-service access to all of that data in the tool, format and consumption pattern of their choice. That’s all happening against a backdrop of patchwork privacy legislation that’s popping up all over the world.
“There are a lot of challenges when you move to the cloud,” Barch told Marshall. “There are challenges with publishing, getting data into the cloud in a well-managed way. There are challenges with consumption. How do you help your teams find all of this data that’s exploding in quantity, that’s in all these different platforms like AWS and Google and Snowflake and others? How do you govern all this data, especially against a patchwork of emerging privacy legislation that’s popping up all over the world? Finally, this is a new paradigm for infrastructure management. You’re not responsible for servers anymore. You pay as you go. How do you put the right controls in place around all of that?” Early on in the journey, the company invested in product management and user-centered design in the data ecosystem to address the specific challenges of all their customers and users — how they use data and where they struggle with it. That includes everyone from the people publishing high-quality data to a shared environment to the analysts and scientists leveraging that high-quality data for critical business decisions. There are the data governance and risk teams, worried about defining policies and enforcing them across the enterprise, and the teams responsible for managing the underlying infrastructure that powers all of those use cases.
Organizations often end up with an array of point solutions to solve for some of these user needs — and a single person might have to hop between six or seven different tools and processes to complete a simple task like sharing a new data set, or finding data. But that simply doesn’t work, Barch says. Scaling this ecosystem gets extremely complicated for both the engineering teams that have to build and maintain these integrations, and the users who have to navigate across this map.
“For me, the heart of this thing is treating data like a product,” Barch said. “Once your company makes that mindset shift — and it truly is a mindset shift — the rest of these principles fall into place. You need to understand how to organize all of those products, and you need to figure out the right capabilities to enable self-service across a variety of factors.” That’s where data mesh comes in: an operating model that can help scale a well-managed cloud data ecosystem. Capital One approached their own ecosystem through two prongs. A centralized policy tooled into a common platform that enabled federated data management responsibility. The aim was to put more control into the hands of the teams that were closest to the data themselves, because data mesh doesn’t work unless it’s operationalized through self-service. And the overarching goal is to get your data practices operating at the speed of business.
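As a rough illustration of that operating model (and not a description of Capital One's actual tooling), the sketch below shows a centrally defined publishing policy that federated, domain-owned teams can run inside their own self-service pipelines; all names, fields and rules are hypothetical.

```python
# Illustrative sketch only: a central policy that federated, domain-owned
# data products must satisfy before they can self-publish.
from dataclasses import dataclass, field

# Central policy: defined once by the governance team, shipped as common tooling.
REQUIRED_METADATA = {"owner", "domain", "classification", "refresh_sla"}
ALLOWED_CLASSIFICATIONS = {"public", "internal", "confidential"}

@dataclass
class DataProduct:
    name: str
    metadata: dict = field(default_factory=dict)

def validate(product: DataProduct) -> list[str]:
    """Return a list of policy violations; an empty list means the product can publish."""
    errors = []
    missing = REQUIRED_METADATA - product.metadata.keys()
    if missing:
        errors.append(f"missing metadata: {sorted(missing)}")
    cls = product.metadata.get("classification")
    if cls is not None and cls not in ALLOWED_CLASSIFICATIONS:
        errors.append(f"unknown classification: {cls}")
    return errors

# Federated ownership: each domain team runs the same check in its own pipeline.
card_txns = DataProduct(
    name="card_transactions_daily",
    metadata={"owner": "card-data-team", "domain": "card",
              "classification": "confidential", "refresh_sla": "24h"},
)
print(validate(card_txns))  # [] -> safe to publish via self-service tooling
```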
"When you combine common tooling and centralized policy with federated ownership, you make it easy for your practitioners to do their job," he said. "You transform data from something that's a bottleneck into something that can turbocharge and power your business."
The tools behind a well-managed cloud data ecosystem
Capital One engineers built these tools and infrastructure internally, but Barch recognizes that not every company has the luxury to build things themselves. Fortunately, a vast array of solutions exists today that didn't exist when the company was starting its journey.
“You just need to make sure that you’re creating a user experience that works for your user base,” he explained. “The days of a single central data team and data being the IT team’s job — those days are over. Think through the UX. How do you enable your teams to get their jobs done?” To help other companies navigate the cloud journey, the company created a new line of business: Capital One Software, which is bringing to market some of the internal products and platforms the company developed to help navigate its own cloud journey. Capital One Slingshot, the first product, is designed for companies trying to adopt Snowflake in a well-managed way.
The product tackles one of the biggest challenges with any cloud provider: the risk of unexpected costs due to the pay-as-you-go, usage-based consumption model. Slingshot offers a way to create rules of the road for infrastructure provisioning and management so teams use cloud resources, particularly Snowflake, as efficiently as possible, which cuts costs, simplifies the experience for critical data users and levels up data optimization.
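The sketch below is a minimal illustration of that kind of cost guardrail in plain Python, assuming hypothetical warehouse names and credit budgets; it is not how Slingshot itself is implemented.

```python
# A minimal cost-guardrail sketch: an illustration of the pay-as-you-go risk
# described above, not Slingshot's implementation. Names and budgets are invented.
DAILY_CREDIT_BUDGET = {"reporting_wh": 40.0, "data_science_wh": 120.0}

def check_usage(usage_by_warehouse: dict[str, float]) -> list[str]:
    """Compare a day's credit consumption to each warehouse's budget and flag overruns."""
    alerts = []
    for wh, used in usage_by_warehouse.items():
        budget = DAILY_CREDIT_BUDGET.get(wh)
        if budget is None:
            alerts.append(f"{wh}: no budget defined, provisioned outside the rules of the road?")
        elif used > budget:
            alerts.append(f"{wh}: {used:.1f} credits used vs {budget:.1f} budget, consider auto-suspend or resizing")
    return alerts

print(check_usage({"reporting_wh": 55.2, "data_science_wh": 80.0, "adhoc_wh": 12.3}))
```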
“The data transformation was all about better serving our customers,” Barch told Marshall. “Being able to create more real-time experiences around fraud detection. Being able to level up the skills of Eno, our intelligent financial assistant, has been a huge win. Reducing the amount of time it takes our analysts and scientists to find new data for new projects has been a massive time saver and a massive win. We’ve been able to onboard thousands and thousands of new real-time data streams to the platform, all via self-service thanks to these tools.” For Barch’s deep dive into Capital One’s epic data transformation journey and to catch up on all Transform sessions, register for a free virtual pass right here.
"
|
1,673 | 2,022 |
"9 cloud jobs with the biggest salaries | VentureBeat"
|
"https://venturebeat.com/cloud/9-cloud-jobs-with-the-biggest-salaries%ef%bf%bc%ef%bf%bc"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Jobs 9 cloud jobs with the biggest salaries Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
There’s no doubt that the pandemic shifted gears up a notch for government departments and businesses when it came to cloud adoption. Those who had already put strategies in place thanked their lucky stars — it was relatively straightforward to send teams home, and work could more or less seamlessly carry on with a few adjustments.
Those that had been on the fence about cloud adoption were forced to dive in, accelerating the rate of cloud adoption globally. In fact, global end-user public cloud spending grew by 23% in 2021, according to Gartner.
Naturally, that has fuelled demand for skilled cloud professionals. As the hiring squeeze tightens for developers and programmers, salaries are rising. The 2022 Cloud Salary Survey from O’Reilly tracks trends in compensation across a number of different job titles and levels within cloud jobs.
From a survey of 778 cloud professionals, the study discovered that on average, salaries in cloud roles increased last year by 4.3%, and those who had participated in 40 or more hours of training in the past year received higher salary increases. Twenty percent of respondents reported changing employers in the past year and the same percentage of workers plans to look for a new job this year because of compensation. In less positive news, the average salary for women is 7% lower than the average salary for men.
So, what are the top salaries in cloud computing this year?
Top average annual salaries in cloud computing, 2022:
Directors and executives: $235,000-$237,000
Architects, leads, and managers: $188,000-$196,000
Architects: $188,000
Marketing: $187,000
Sales: $186,000
Engineers: $175,000
Product: $162,000
Associates: $140,000
Consultants: $129,000
Interested in looking for a new role? We have three cloud jobs to check out below — and there are plenty more on the VentureBeat Job Board too.
Senior Software Engineer, Chaos Engineering — Remote, HubSpot
The Role: Chaos engineering is a new discipline at HubSpot, and the group is now adding a Senior Software Engineer.
The Responsibilities: The chaos team will be instrumental in helping product and infrastructure teams better handle system failures. The primary focus of the team will be to safely degrade the state of product applications and infrastructure services to determine how they fail and to ensure they fail in ways that minimize customer pain.
The Requirements: You will have an interest or experience with chaos engineering, SRE culture, and improving reliability with automation as well as experience designing and operating distributed systems and cloud infrastructure at scale.
Find out more about the Senior Software Engineer role or discover more opportunities at HubSpot.
Senior Cloud Engineer (Remote), CrowdStrike The Role: CrowdStrike is looking for a Senior Software Engineer for its Cloud Workload Protection Platform (CWPP), a key, rapidly evolving product area. The team is extending CrowdStrike’s mission of “stopping breaches” into the public cloud and cloud-native workloads.
The Responsibilities: You will build cloud services and detection/prevention capabilities and research and develop new improvements to the existing detection and prevention features.
The Requirements: Six-plus years of professional experience with a deep understanding of all aspects of cloud services/distributed systems development and maintenance are required, as is deep experience with one or more programming languages such as Golang, C/C++, Python or Java.
More information on the Senior Software Engineer role is available, as are other roles at CrowdStrike.
Senior Information Systems Security Officer (Cloud), BAE Systems
The Role: The Senior Information Systems Security Officer (ISSO) is the principal point of contact for ensuring IT systems implement security controls and processes to develop and maintain a strong security posture.
The Responsibilities: You’ll conduct risk analyses from vulnerability and compliance scans, pen testing results, or other audit activity. You will also support a customer with multiple cloud systems, so cloud security and cloud ATO experience would be ideal.
The Requirements: You will need a strong understanding of cloud security and experience with multiple cloud service providers (CSPs) such as Microsoft Azure (preferred) and Amazon Web Services (AWS) as well as experience or knowledge of the process of migrating IT systems to the cloud, to include overall migration strategy and security guidance.
More detail on the Senior Information Systems Security Officer job is available as are further job openings at BAE Systems.
If you're thinking about making a job move, then check out thousands of open roles on the VentureBeat Job Board.
"
|
1,674 | 2,022 |
"Report: 78% of orgs have workloads in over 3 public clouds | VentureBeat"
|
"https://venturebeat.com/business/report-78-of-orgs-have-workloads-in-over-3-public-clouds"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: 78% of orgs have workloads in over 3 public clouds Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
A new report by Virtana found that 78% of organizations have workloads deployed in more than three public clouds, and 51% of respondents plan to increase the number of public cloud instances by the end of 2022. As multicloud adoption grows, it is critical for organizations to understand the impact of migrating workloads to the cloud, create a migration strategy and efficiently manage workloads on an ongoing basis to avoid unexpected costs and performance degradation.
For most organizations, 2020 was about getting to the cloud.
While most were already on that path, the overnight shift to “remote everything” forced enterprises to accelerate the journey. 2021 unveiled the downsides of the cloud and the repercussions of suboptimal implementations, including skyrocketing costs and performance problems. As organizations continue to look ahead, 2022 is about reimagining hybrid, multicloud strategies and processes.
Tool sprawl was identified as another challenge in managing multicloud environments. The research showed that organizations use a variety of tools to monitor and manage different aspects of their complex hybrid, multicloud environments. For example, 63% of respondents are using more than five tools for migration, cloud cost optimization, IPM, APM and cloud infrastructure monitoring. This isn't necessarily a problem, as each of these tools provides a specific, important function. However, the challenge is that if organizations can't consolidate data from all these different tools, they cannot get a comprehensive and complete view of their infrastructure. This lack of global visibility creates gaps that can expose an organization to performance, cost and other risks.
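A toy example of the consolidation problem: each tool exports only a partial, per-resource view, and a unified picture emerges only once those views are merged. The tool names, resources and metrics below are invented for illustration.

```python
# Hypothetical sketch: merging per-tool snapshots into a single view keyed by
# resource, so cost and performance can finally be seen in one place.
from collections import defaultdict

# Each monitoring tool exports its own partial view (names and fields are invented).
cost_tool = {"vm-web-01": {"monthly_cost_usd": 412.0}, "db-orders": {"monthly_cost_usd": 1980.0}}
apm_tool = {"vm-web-01": {"p95_latency_ms": 310}, "db-orders": {"p95_latency_ms": 45}}
infra_tool = {"vm-web-01": {"cpu_util_pct": 22}, "db-orders": {"cpu_util_pct": 78}}

def consolidate(*tool_views: dict) -> dict:
    """Fold every tool's per-resource metrics into one record per resource."""
    merged: dict[str, dict] = defaultdict(dict)
    for view in tool_views:
        for resource, metrics in view.items():
            merged[resource].update(metrics)
    return dict(merged)

unified = consolidate(cost_tool, apm_tool, infra_tool)
# A resource that is expensive but lightly used now stands out in one place.
print(unified["vm-web-01"])
```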
As the rapid march to hybrid, multicloud infrastructures continues, enterprises are coming to terms with the challenges of migrating and managing these complex and dynamic environments.
Unified visibility and simplified management are critical to implement and enforce the cross-functional accountability and formal governance needed to control cost and risk and realize the maximum benefits of the cloud.
For its report, Virtana commissioned a survey of 360 cloud decision-makers in the U.S. and the U.K. during February and March 2022.
Read the full report by Virtana.
"
|
1,675 | 2,022 |
"Why the alternative cloud could rival the big 3 public cloud vendors | VentureBeat"
|
"https://venturebeat.com/business/big-3-public-cloud"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Why the alternative cloud could rival the big 3 public cloud vendors Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
This article was updated on July 13 at 1:13 p.m. ET.
Few technologies have generated the attention that cloud services have over the past two years. While many organizations planned to invest in the cloud prior to the pandemic, COVID-19 and the need to support remote working drastically accelerated cloud adoption.
In fact, research shows that 27% of global cloud decision-makers made a significant increase in cloud spending during the pandemic.
However, as the cloud computing market matures, more and more organizations are considering the merits of alternative cloud solutions over offerings from the big three public cloud providers.
Over the past four years, adoption of alternative cloud solutions has almost doubled to the point where now 27% of organizations use an alternative cloud provider such as Akamai's Linode, DigitalOcean or OVHcloud.
This appetite toward the alternative cloud was also highlighted in a new study released by Techstrong Research and Linode yesterday, which showed that while 93% of organizations use the top three cloud providers: Amazon Web Services, Microsoft Azure, and Google Cloud Compute, almost two-thirds are considering or are ready to buy from a trusted alternative cloud vendor.
At this stage, it appears that the appetite for affordable and agile alternative cloud solutions is growing among enterprises, and in the future could even rival the big three public cloud services.
What’s driving the appetite for the alternative cloud? While alternative cloud providers like Linode can be traced all the way back to 2003, it is only as more organizations have experimented with the cloud that alternative cloud adoption has picked up steam.
Though there are many reasons for this increase in adoption, at a high level, organizations are turning to the alternative cloud to improve their operational agility, and to enable themselves to build multicloud environments that meet their exact business needs, rather than a “best fit” solution.
"The core benefits of the alternative public cloud are cost, performance, availability, security, agility for the organization. Some organizations struggle with the complexity of the hyperscale providers," said Blair Lyon, head of cloud experience at Akamai.
“So, in opting to go with an alternative cloud provider, benefits come with an ‘addition by subtraction’ approach. Alternative cloud providers offer more simplicity of user interface, catalog, pricing, and a more manageable learning curve,” Lyon said.
The alternative cloud offers an avenue for enterprises to simplify their cloud infrastructure, while enabling developers to deploy and manage multicloud environments with access to open APIs.
At the same time, organizations that do decide to move away from reliance on big three cloud vendors can also increase their overall cost-efficiency.
“Pricing is less complex and is more affordable for an organization’s use case, and can often offer more or bundle services with the core offering that a hyperscale provider would not. Additionally, developers are looking for more flexibility in how they pay for their cloud services — Google Pay, Apple Pay, crypto, etc. — and the hyperscale providers are more rigid in their payment methods,” Lyon said.
The alternative cloud market
As the alternative cloud market matures, a number of key providers have come to dominate, including Akamai's Linode, DigitalOcean and OVHcloud. One of the main competitors in the market is Linode, which Akamai Technologies Inc.
announced it had acquired for $900 million at the start of this year.
Linode has carved out a position in the market as an alternative to AWS that provides organizations with access to Linux cloud resources with a full-featured API and cost-efficient pricing options. It also has over 1 million customers.
One of its main competitors, DigitalOcean, recently reported raising $127.3 million in revenue in the first quarter of 2022, an increase of 36% since last year.
DigitalOcean positions itself as “the developer cloud” and offers a range of solutions including scalable virtual machines, managed Kubernetes clusters and serverless computing solutions, designed to help developers develop applications more effectively.
OVHcloud also plays a key role in the market, offering a mix of bare metal cloud, hosted private cloud and public cloud services that provide enterprises with access to high-performance dedicated servers. OVHcloud recently reported revenue of €202 million ($203 million) for the third quarter of 2022.
The key difference between these offerings and the solutions from AWS, Azure and Google Cloud is that they deliver high performance at a lower price point, making them a more cost-effective option for enterprise users.
"
|
1,676 | 2,022 |
"AWS re:Inforce: BigID looks to reduce risk and automate policies for AWS cloud | VentureBeat"
|
"https://venturebeat.com/business/aws-reinforce-news-bigid-unveils-intelligent-access-for-aws-cloud"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AWS re:Inforce: BigID looks to reduce risk and automate policies for AWS cloud Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Data intelligence platform provider BigID today announced extended auto-detection and automated discovery for AWS accounts and datasets, as well as security and privacy-aware access control for AWS cloud infrastructure.
Because the average AWS customer has multiple accounts, the addition of the new intelligent access control feature is designed to reduce risk and automate role-based policies across AWS including S3, Redshift, Athena, EMR, and more with extended integrations with AWS Lake Formation and AWS Glue, the company said.
Intelligent access for streamlining security with BigID
The intelligent access control feature is important because it simplifies and streamlines data security within AWS — not just finding the data at risk but giving IT the ability to set role-based policies in AWS using intelligence from BigID, said Eran Gewurtz, director of product management, security.
The idea is to help AWS customers automate intelligent access control to enable and restrict access to their sensitive data while creating business policies based on data sensitivity and context, the company said. Customers can improve their cloud data risk posture and reduce complexity by automatically protecting high-risk data while applying tags and labels to their AWS data based on sensitivity and risk.
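As a simplified illustration of tag-based classification (not BigID's implementation), the snippet below uses boto3 to apply a sensitivity tag to an S3 bucket, the kind of signal that role-based access policies and audits can then key on; the bucket name and tag values are placeholders.

```python
# Illustration only: tagging an S3 bucket with a sensitivity label using boto3.
# This is not BigID's product; bucket name and tag values are placeholders.
import boto3

def label_bucket_sensitivity(bucket_name: str, sensitivity: str) -> None:
    """Apply a sensitivity tag to a bucket so downstream policies and audits can act on it."""
    s3 = boto3.client("s3")
    # Note: put_bucket_tagging replaces the bucket's entire existing tag set.
    s3.put_bucket_tagging(
        Bucket=bucket_name,
        Tagging={"TagSet": [
            {"Key": "data-sensitivity", "Value": sensitivity},
            {"Key": "classified-by", "Value": "automated-discovery"},
        ]},
    )

# Example: a discovery scan decides this bucket holds high-risk data.
# label_bucket_sensitivity("example-customer-exports", "high")  # requires AWS credentials
```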
Automated discovery to search across the cloud
By providing automated discovery, BigID is aiming to make it easier for organizations to automatically find their accounts and data and extend data protection across the cloud.
“You can’t protect what you don’t know: Automated discovery gives you a head start on finding the most vulnerable data — and accounts — across an organization,’’ Gewurtz said. This type of granular autodetection layers in automation and machine learning (ML)-powered insight to accelerate actionable data protection in the cloud, he said.
The new capabilities are designed to let customers automatically find sensitive data, assets and accounts inside AWS without manual processes or configurations. The features are also designed to automatically provision roles and permissions across their multi-account environments to save time and costs and automate data scanning across multiple accounts and data sources.
"Data in the cloud can be tricky to protect: You have to know what it is, whose it is, where it is — and on top of that, you need to configure everything to have the right protection and access controls," Gewurtz said. "Data at risk is a top priority for security teams, and one of the most important issues is how to protect that data." BigID is available via the AWS Marketplace, and organizations that use AWS Control Tower can now automate its deployment. These expanded capabilities are geared toward reducing complexity, cost and time-to-insight for organizations managing their data in AWS.
"
|
1,677 | 2,022 |
"Enabling new ISV experiences for mobile laptops | VentureBeat"
|
"https://venturebeat.com/automation/enabling-new-isv-experiences-for-mobile-laptops"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Enabling new ISV experiences for mobile laptops Share on Facebook Share on X Share on LinkedIn Presented by Qualcomm Technologies After “business as usual” was disrupted by the pandemic, and two years of working in a distributed model, workers don’t want to return to the status quo. They’re reluctant to head back to the office full time, and give up the work/life balance and increased productivity they’ve achieved by going remote.
As a result, 80% of professionals are asking their business and IT leaders to expand the hybrid arrangement they prefer, according to a study by Harvard Business School. And a McKinsey study found that 30% of employees say they are likely to switch jobs if they are not allowed to work in a hybrid environment.
“Enterprises and IT must keep this in mind as they formulate strategies and make sound investments on how to expand their hybrid offerings to attract and retain top talent,” says Kedar Kondap, Senior Vice President and General Manager, Compute and Gaming, Qualcomm Technologies.
As IT leaders start to make these investments in digital transformation and connected technologies, they’re turning to their partners — device manufacturers, network operators, and channel resellers — for the innovative solutions that can meet the needs of their workforce and business.
Mobile solutions for a mobile workforce In order to work from anywhere, employees are looking for mobile experiences that match the capabilities of their smartphones and PCs that keep them connected to their teams and the company in order to do their best work. They rely on cloud tools such as Microsoft Teams, Zoom, OneDrive, CITRIX, Office 365, Crowdstrike, VMware and more.
They also need powerful video tools, because collaboration can happen at a moment's notice. Users should be able to jump on a call from anywhere with cellular connectivity, a high-quality camera and audio, and advanced AI accelerators for features like noise suppression, background blur and auto-framing. And they need to access these tools wherever and whenever they need them.
As IT administrators move towards modern technologies and cloud-based solutions, they must adopt the hardware that can maximize these investments.
Choosing hybrid work devices IT admins have a whole grocery list of requirements for the devices that can power these solutions. They’re looking for powerful connectivity, performance and AI to optimize digital transformation investments, and deliver compelling hybrid work experiences for end users. And they want peace of mind when managing and securing endpoints.
That’s because their biggest challenge with the “work from anywhere” workforce is in securing and managing their corporate fleet of devices.
“We have heard from our enterprise customers that x86 devices can fall out of compliance for months because of the lack of visibility,” Kondap says. “They need devices that stay visible to the corporate network, no matter where the employee is, and always-connected PCs to help manage policies at a distance and push software patches to users, even when the devices are idle or in connected standby.” In addition, they want these devices to not only leverage cloud intelligence but on-device AI to help detect, protect and remediate threats faster than on traditional PCs.
Finally, for a device to be truly mobile, battery life is essential.
A look at the increasingly light, high-powered hardware on the market shows that PC companies are taking note of the way the wind is blowing, and stepping up to the plate to deliver laptops that are designed specifically to support the work format that employees prefer, while also maximizing productivity, collaboration and security.
One of the strongest entrants into the field, the new Snapdragon compute-powered Lenovo ThinkPad X13s, won several awards at Mobile World Congress. This new ThinkPad marks greater investments by OEMs to deliver laptops for the shift to modern PC platforms and usage.
Mobile laptops designed for ISVs
Lenovo announced the first-ever ThinkPad powered by the new Snapdragon 8cx Gen 3 Compute Platform, the Lenovo ThinkPad X13s, at MWC 2022, and it is now commercially available to order. The device takes full advantage of Windows on Snapdragon features and capabilities, including 5G connectivity, AI-accelerated experiences, exceptional performance and multi-day battery life to deliver next-generation experiences for business users and IT.
Not only that, over 100 of the top Windows ISVs are optimizing their applications for Windows on Snapdragon and 225+ enterprises are testing or deploying Windows on Snapdragon devices within their environment.
“ISVs are excited to work with us because it creates opportunities to enhance their applications,” says Kondap. “The efficiency, connectivity and on-device 5G connectivity and AI can be leveraged by ISVs to improve existing experiences and create entirely new ones.” For instance, Zoom has utilized dedicated technology blocks, 5G connectivity and hardware accelerators for more immersive video communications while connecting longer with multi-day battery life. Since Office 365 is optimized for cloud collaboration and experiences with Outlook, OneDrive and more, connectivity plays a central role in ensuring the software keeps teams in sync.
“Whether it is for productivity, collaboration or security, Windows on Snapdragon only creates more use cases for an ISV to tap into,” he says. “Plus, our network knowledge and expertise, a differentiator in the PC ecosystem, is helping to facilitate and streamline device certifications for OEMs and operators.” Qualcomm is also working directly with enterprises to help them devise a digital transformation strategy to leverage wireless technologies to make them more modern, efficient and competitive within their industry.
Learn more here about the Snapdragon compute-powered Lenovo ThinkPad X13s, from detailed specs to how it’s changing the way mobile employees work.
"
|
1,678 | 2,022 |
"Transform: The Data Week continues with a dive into data analytics | VentureBeat"
|
"https://venturebeat.com/ai/transform-the-data-week-continues-with-a-dive-into-data-analytics"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Event Transform: The Data Week continues with a dive into data analytics Share on Facebook Share on X Share on LinkedIn This year, Transform , the leading event on applied AI for enterprise business and technology decision makers, returns both in-person and virtually. We’re digging deep with two full weeks of the most critical topics in applied AI, including data, AI and the edge.
The Data Week kicks off the virtual coverage on July 20. Every day, in-depth talks and panels from leaders across industries will shine a light on a key topic in the data world. On Day One, (July 20) we’ll take a deep dive into data architecture, and on Day Two (July 21), it’s all about data governance. Day Three (July 22) brings a look at data analytics, or the AI that spins your data into gold.
Below you'll find a closer look at what's happening in the world of data analytics, from leaders at FedEx, Orangetheory, Walgreens and more. (And don't miss the agendas for Day One and Day Two.)
A deep-dive into Data Analytics: July 22
The panels kick off at 9:00 am PT with a look back at the historic winter storm Uri, which paralyzed much of the southern United States and significantly impacted FedEx service levels for critical healthcare accounts. In this chat with Sriram Krishnasamy, CEO of FedEx Dataworks, you'll learn how FedEx Dataworks leveraged AI to decongest the network and get lifesaving medical products to their destinations.
You’ll also understand how the company’s fully automated dashboard empowers operators to manage exceptions, how data helps the team solve challenges in the face of a crisis and more.
Next up at 9:20 am PT is a look at why data is valuable to companies of every size — and how simple data analytics can put your data to work even before you are ML-ready.
Ameen Kazerouni, Chief Data & Analytics Officer at Orangetheory Fitness joins Allison Ryder, Senior Project Editor at MIT Sloan Management Review to talk about why it doesn’t take large ML and AI investments to start seeing immediate value in your data. Plus you’ll learn how basic data analytics methods and the right organizational structure can help drive even smaller, less technical companies toward digital transformation, and more.
Walgreens is one of the world’s largest retail pharmacy healthcare destinations, with more than 450,000 team members globally, 13,000 stores across 25 countries and approximately 9 million store visits and online interactions with customers every year. At 9:40 am PT, VentureBeat Executive Editor Llanor Alleyne will chat with Mike Maresca, CTO at Walgreens Boots Alliance to learn how Walgreens is using data and insights to transform the customer experience at every touchpoint.
The morning continues with a dive into the next era of personal mobility at 10:00 am PT. Turo, a peer-to-peer car sharing company, is no stranger to using data and AI to make their customer experiences seamless. Now the company is using real-time analytics to ensure a safer ride for their customers.
In this talk by Avinash Gangadharan, Chief Technology Officer at Turo, you’ll learn about the evolution of the company’s data feature sets to its final form: the Turo Risk Score, which is designed to deliver a safe transaction and experience for the entire community.
At 10:20 am PT, dive into native parallel graphs, the most advanced type of graph analytics.
Dan McCreary, Distinguished Engineer — AI at Optum, (the technology division of UnitedHealth Group) will sit down with Mike Booth, VP Americas Sales at TigerGraph, to talk about how UHG leverages TigerGraph technology. You’ll learn how the graph, with AI algorithms on top, can help not only monitor an enterprise but make predictions, to avoid problems before they happen, how UHG improved quality of care while lowering costs, and more.
Finally, welcome to our multi-cloud reality. Today a solid, cloud-agile strategy is a must for enterprises that need to stay at the leading edge. In this fireside chat, Shiv Ramji, Chief Product Officer at Auth0 and Andrew Davidson, SVP of Cloud Products at MongoDB, will talk about re-engineering global platforms for a multi-cloud world.
You’ll hear about what growing competitive challenges to the Big Three hyperscalers mean, the growing number of data privacy laws and how they’re impacting organizations, the migration to the edge and more.
Two full weeks of VB Transform are coming up fast, so don't forget to register now!
July 19: The Data & AI Executive Summit | LIVE at The Palace Hotel, San Francisco, CA
Join over 500 data and AI executives in-person to network with peers and hear success stories from C-level thought leaders on applied data and AI strategies.
July 20-22: The Data Week | Virtual
July 26-28: The AI & Edge Week | Virtual
Cutting across the most critical topics in data and AI, learn from wherever you are across the two weeks.
Get the full agenda here.
"
|
1,679 | 2,022 |
"Intel on why orgs are stalling in their AI efforts -- and how to gun the engine | VentureBeat"
|
"https://venturebeat.com/ai/transform-2022-intel-on-why-orgs-are-stalling-in-their-ai-efforts-and-how-to-gun-the-engine"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Event Intel on why orgs are stalling in their AI efforts — and how to gun the engine Share on Facebook Share on X Share on LinkedIn Right now, 60% of the global GDP is digital , according to the World Economic Forum — and 80 zettabytes worth of data is going to be generated in 2022 alone. That’s going to grow to 180 zettabytes worth of data by 2025 and 2026. Driven by digitization, industries around the world are on the front edge of an era of sustained growth, if they can unlock the secret to turning infinite amounts of data into actionable insights.
The democratization of data is key, Kavitha Prasad, VP and GM of datacenter, AI and cloud execution and strategy at Intel, told VB CEO Matt Marshall at Transform on Tuesday.
“It’s not easy to get the insights with so much data by using traditional methods,” Prasad said. “AI is going to change that. For that to happen, we need to invest in AI today, both in people and in technology.” While the rate of AI innovation is growing exponentially, AI is still in its early stages of deployment. Analyst reports find that 80% of businesses might be investing in AI, but only 20% of them are actually reaping the benefits.
And even where AI is deployed broadly, it’s in places where the consequences of failure are minimal. Traditional ML and probabilistic methods and other intelligence have existed for as long as data has — but with the rate at which the data is growing, these traditional methods need to be augmented with advanced technologies like deep learning to reach the necessary business outcomes.
“A lot of the world today is focused on training these large-scale models, and less so on deploying it in production environments,” Prasad said. “If we’re really talking about democratizing AI, we need to see that 80% of the remaining use cases — the brick and mortar stores, the systems on the roads, the telco infrastructure — everything needs to be reaping the benefits of AI.” There are several challenges to recouping the benefits from investment. One comes from the fact that many companies forget that first, AI is a software problem — you need to cater to the demands of the developers and ecosystem, and not just focus on the pure performance per watt or pure power of your hardware. Add to this that AI in itself is a continuous and iterative process. Naturally, most organizations start with training. But when it comes to deploying these models, the whole process needs to be holistic, from gathering the data to training the model, testing it, deploying it, and then maintaining and monitoring it.
Cloud companies leverage MLOps, making it relatively easy to deploy, maintain and monitor. But edge and hybrid companies have to develop in the cloud and deploy on the edge, then monitor on the edge and maintain back in the cloud. What you have trained your model against, versus what your ground truth is, can be completely different. And without maintenance, models start decaying and performance deteriorates.
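A bare-bones sketch of the kind of drift check that maintenance depends on, assuming a single numeric feature and toy values; real monitoring pipelines compare full distributions, but the idea is the same.

```python
# A minimal drift-check sketch (illustrative, not any vendor's tooling): compare the
# live feature distribution on the edge against statistics captured at training time.
from statistics import mean, stdev

def drifted(training_values: list[float], live_values: list[float], threshold: float = 3.0) -> bool:
    """Flag drift when the live mean moves more than `threshold` training std devs away."""
    mu, sigma = mean(training_values), stdev(training_values)
    if sigma == 0:
        return mean(live_values) != mu
    z = abs(mean(live_values) - mu) / sigma
    return z > threshold

train_latency = [12.0, 11.5, 12.3, 11.9, 12.1]
live_latency = [19.8, 20.4, 21.1, 19.5, 20.9]   # ground truth has shifted on the edge
if drifted(train_latency, live_latency):
    print("Model inputs have drifted; schedule retraining and maintenance in the cloud.")
```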
The third piece of the puzzle is the data: accessibility, quality and sharing. From privacy issues to the garbage-in-garbage-out conundrum — where biased data becomes a liability to security and explainability — managing data can become a tremendous undertaking.
“It’s all these processes that need to come together for us to actually go deploy AI meaningfully in the industry,” Prasad said. “That’s when we can say we’re closer to democratizing AI.” It’s essential to start working through these challenges as soon as possible, in order to stay abreast of the inexorable changes bearing down on the world. But unless you want to run into issues down the road, you need to take a moment to think through the system and architecture you already have in place. You need to consider the cost and complexity that opening your doors to AI and data will bring, from moving workloads to scaling your projects.
More companies than ever offer point solutions for every permutation of AI use case. But from an enterprise perspective, the key is determining how to put all these disparate pieces together, how to manage your data at every step of the process, how to ensure it is high quality and secure, and how to deploy your model — or in other words, how to combine your predictive analytics with your business acumen to get meaningful results.
And while AI is a software problem, hardware is still an essential piece of the puzzle. Intel is addressing the compute needs with AI embedded into its FPGAs, GPUs and CPUs. The company is also focusing on homogenizing the hardware with the software to offer customers a solid foundation for their AI efforts.
From a security perspective, it's also investing in technologies like federated learning and homomorphic encryption: case in point, its partnership with the University of Pennsylvania in the largest federated learning use case deployed to date, running on 25,000 MRI scans across seven continents to detect a rare brain disease that occurs in one out of 100,000 people, all while preserving the privacy of the medical data used in the project.
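For readers unfamiliar with the technique, here is a stripped-down sketch of federated averaging in plain Python; the numbers are toy values, and production systems add secure aggregation, weighting by dataset size and much more.

```python
# A bare-bones federated-averaging sketch (illustrative only): each site trains
# locally and shares only model weights, never the underlying medical records.
def local_update(weights: list[float], local_gradient: list[float], lr: float = 0.1) -> list[float]:
    """One local training step performed at a hospital on its own private data."""
    return [w - lr * g for w, g in zip(weights, local_gradient)]

def federated_average(client_weights: list[list[float]]) -> list[float]:
    """The coordinator averages weight vectors; raw data never leaves the clients."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

global_model = [0.0, 0.0, 0.0]
# Gradients computed locally at three sites on data that stays on-premises.
site_gradients = [[0.2, -0.1, 0.4], [0.3, 0.0, 0.5], [0.1, -0.2, 0.3]]
updates = [local_update(global_model, g) for g in site_gradients]
global_model = federated_average(updates)
print(global_model)  # the only artifact shared across sites
```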
“We work with partners and disruptor programs to make sure we bring the ecosystem along with us, to make sure there are continuous advancements in AI,” she said.
Get more insight into launching an AI strategy — including how companies are adding intelligence everywhere from the data center, through the networks, to the edge and out to consumer devices — by registering for a free virtual Transform pass right here.
"
|
1,680 | 2,022 |
"Shining the spotlight on data governance at Transform 2022 | VentureBeat"
|
"https://venturebeat.com/ai/shining-the-spotlight-on-data-governance-at-transform-2022"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Event Shining the spotlight on data governance at Transform 2022 Share on Facebook Share on X Share on LinkedIn Transform is the leading event on applied AI for enterprise business and technology decision makers. As part of that, we’ve tapped industry thought leaders for insights into one of the most critical issues in the world of applied AI. On July 21, 2022, data governance takes center stage, with a series of virtual sessions on building a solid data governance program, managing the onrush of data from the cloud, a new perspective on healthcare data and more.
You’ll walk away with actionable insights on managing the availability, usability, integrity and security of your data, how to make it work harder for you, and more. Read on for a closer look at these talks on data governance and AI from leaders at American Express, Janssen R&D and Capital One.
Check out your July 21 agenda Operating models lay the foundation for solid data governance programs — but how can business leaders make sure operating models are meeting the requirements of their organizations? At 9:00 a.m. PDT, Pascale Hutz, chief data officer at American Express, makes the case for a federated data governance operating model with a look at how American Express has done it. He’ll share best practices for opening data governance offices across business units, including hiring and funding dedicated data governance roles and ensuring data executive sponsorship.
At 9:20 a.m., get a new perspective on healthcare data, when Hal Stern, VP & CIO at Janssen R&D, sits down with VentureBeat’s head of data and AI content strategy, Hari Sivaraman. In this Fireside Chat, you’ll learn how Janssen’s integrated and standardized data ecosystem helps the company derive deep insights and make timely decisions for its therapeutic portfolio — plus why healthcare data is music, not oil.
And at 9:40 a.m., get a glimpse into the future when you learn why holistic data governance is key to getting the most out of your data.
Patrick Barch, Sr. director of product management at Capital One Software, will dive into the best ways to manage and govern the influx of data that cloud brings, why companies are struggling with data governance and how to implement a holistic data governance strategy that captures all your data and turns it into insight.
Two full weeks of VB Transform are coming up fast, so don’t forget to register now! July 19: The Data & AI Executive Summit | LIVE at The Palace Hotel, San Francisco, CA Join over 500 data and AI executives in-person to network with peers and hear success stories from C-level thought leaders on applied data and AI strategies.
July 20-22: The Data Week | Virtual
July 26-28: The AI & Edge Week | Virtual
Cutting across the most critical topics in data and AI, learn from wherever you are across the two weeks.
Get the full agenda here.
"
|
1,681 | 2,022 |
"Rescale and Nvidia partner to automate industrial metaverse | VentureBeat"
|
"https://venturebeat.com/ai/rescale-nvidia-partner-to-automate-industrial-metaverse"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Rescale and Nvidia partner to automate industrial metaverse Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Rescale has integrated the Nvidia GPU Cloud (NGC) into its library of containers for high-performance computing (HPC) apps. This will make it easier for enterprises to build digital twins and enable digital transformation as a foundational building block for the industrial metaverse.
The partnership with Nvidia adds over 150 additional containerized artificial intelligence (AI) and HPC applications and hundreds of pretrained models optimized for Nvidia GPUs. This joins more than 900 other applications already pre-integrated into the Rescale platform. The companies are also working on adding integration to the Nvidia Base Command AI training platform early next year.
Optimized for digital twins Manufacturers are increasingly using computational engineering for product innovation. The Rescale integration with Nvidia will simplify the process of designing and testing digital twins of products. This automation will help researchers assess multiple iterations of digital twins representing different assumptions about the product to identify and mitigate more kinds of problems.
Rescale chief product officer Edward Hsu told VentureBeat, “Tackling digital twins and the industrial metaverse require tremendous computing power best supported by specialized computing architectures and software codes optimized to take advantage of them.” This is true for workloads ranging from AI/ML (machine learning) to physics-based simulations that predict how products would perform in the real world. Rescale customers can now use a single platform to do AI-assisted computational engineering, taking advantage of the latest Nvidia architectures and performance-optimized software tools on any cloud.
Rescale had previously supported integrations with specific AI/ML offerings such as Modulus, PyTorch, and TensorFlow, as well as NGC-optimized software. The new integration will improve workflows for over one hundred prebuilt containers out of the box. Hsu said this effectively increases the number of AI/ML-related workflows that can be automated on the platform by 10 times.
“What’s unique about NGC is that these applications are tuned, tested and optimized by Nvidia,” Hsu said.
Digital industry ecosystem Software development processes in the scientific and engineering community have traditionally been more cumbersome than those for other kinds of enterprise software, and the community is slowly migrating away from them. One challenge is that these apps typically require considerably more customization and must be deployed alongside their dependencies, making it harder to roll out new versions quickly. Gartner observed that pioneering enterprises are increasingly adopting cloud infrastructure for engineering workloads, which could help drive the HPC market to $55 billion by 2024.
The new partnership between Rescale and Nvidia will allow enterprises to connect workflows between Rescale’s existing catalog of engineering and scientific containers, Nvidia’s extensive NGC offerings, and enterprises’ standard containers of their own models and supporting software. This new containerized approach to engineering software means teams can specify the software libraries and configurations that reflect industry best practices.
The recent Nvidia and Siemens partnership is an ambitious effort to bring together physics-based digital models and real-time AI. Rescale’s announcement with Nvidia enhances this partnership, as accelerated computing combined with high-performance computing is the foundation that powers these use cases.
For example, enterprises can take advantage of Nvidia’s work on Modulus, which uses AI to speed up physics simulations hundreds or thousands of times. Siemens estimates that integrating physics and AI models could help save the power industry $1.7 billion in reduced turbine maintenance. The partnership could also make it easier for companies to integrate other apps that work on these tools.
The rubber meets the digital road Companies like Hankook Tire in South Korea are already taking advantage of the new integration to accelerate R&D for new tire designs. The partnership allows Hankook to combine its proprietary algorithms with off-the-shelf components from Nvidia and other Rescale partners to accelerate the design, optimization and testing of new product iterations.
Hankook Tire is developing digital twins of its tires and virtually testing them under various conditions with its own custom-made models and codes, aiming to develop new products more quickly and improve performance.
There is high demand for the latest GPU architectures in the cloud, and capacity in specific regions may not meet demand. With Rescale, the company can automate its proprietary engineering software across multicloud operations by load-balancing across different hardware architectures and geographic locations. Through automated optimization, Hankook Tire can prioritize running the latest Nvidia GPU solutions available in Korea and switch seamlessly to other new Nvidia architectures and global cloud capacity, without disrupting R&D engineers or requiring manual work by IT teams.
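The prioritization described above can be pictured as a simple ordered-fallback selection. The sketch below is a hypothetical illustration of that idea only, not Rescale's actual scheduler or API; the region names, GPU generations and capacity check are invented for the example.

```python
# Hypothetical sketch of priority-based fallback selection across regions and
# GPU generations. Illustrative only; not Rescale's real scheduler or API.

# Ordered by preference: newest architecture in the closest region first.
CANDIDATES = [
    {"region": "korea-central", "gpu": "H100"},
    {"region": "korea-central", "gpu": "A100"},
    {"region": "us-west", "gpu": "H100"},
    {"region": "us-west", "gpu": "A100"},
]


def has_capacity(candidate):
    """Stand-in for a real availability check against a cloud provider."""
    # In practice this would query the provider's quota and capacity APIs.
    return candidate["gpu"] == "A100"  # pretend only A100s are free right now


def pick_target(candidates):
    """Return the first candidate with available capacity, or None."""
    for candidate in candidates:
        if has_capacity(candidate):
            return candidate
    return None


if __name__ == "__main__":
    print("Submitting job to:", pick_target(CANDIDATES))
```

In a real deployment the availability check would query each cloud provider, and the candidate list would be limited to the architectures the workload's containers are built for; the point is simply that the preference order, not the engineer, decides where the job lands.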
This recent partnership demonstrates the power of improved automation, high-performance computing and digital twins to drive industrial digital transformation efforts. “As organizations look to do AI-assisted engineering, these capabilities allow companies to have a single platform to tackle the software, hardware, and workflow challenges needed to design and deliver engineered products,” Hsu said.
"
|
1,682 | 2,022 |
"Nvidia AI Enterprise 2.1 bolsters support for open source | VentureBeat"
|
"https://venturebeat.com/ai/nvidia-ai-enterprise-2-1-bolsters-support-for-open-source"
|
"Nvidia AI Enterprise 2.1 bolsters support for open source
Nvidia is updating its AI Enterprise software suite today to version 2.1, providing users with new commercially-supported tools to help run artificial intelligence (AI) and machine learning (ML) workloads for enterprise use cases.
Nvidia AI Enterprise first became generally available in August 2021 as a collection of supported AI and ML tools that run well on Nvidia’s hardware. In the new release, a core component of the software suite is an updated set of supported versions of popular open-source tools, including PyTorch and TensorFlow.
The new Nvidia Tao 22.05 low-code and no-code toolkit for computer vision and speech applications is also included, as is the 22.04 update for Nvidia’s Rapids open-source libraries for running data science pipelines on GPUs.
“Over the last couple of years, what we’ve seen is the growth of AI being used to solve a bunch of problems and it is really driving automation to improve operational efficiency,” said Justin Boitano, VP of enterprise and edge computing at Nvidia. “Ultimately, as more organizations get AI into a production state, a lot of companies will need commercial support on the software stack that has traditionally just been open source.” Bringing enterprise support to open-source AI A common approach with open-source software is to have what is known as an “upstream” community, where the leading edge of development occurs in the open. Vendors like Nvidia can and do contribute code upstream, and then provide commercially supported offerings like Nvidia AI Enterprise, in what is referred to as the “downstream.” “When we talk about popular AI projects like TensorFlow, our goal is absolutely to commit as much as possible back into the upstream,” Boitano said.
With Nvidia AI Enterprise, the open-source components also benefit from integration testing across different frameworks and on multiple types of hardware configurations to help ensure that the software works as expected.
“It’s very similar to the early Linux days, where there are those companies that are totally happy running with the open-source frameworks and then there’s another part of the community that really feels more comfortable having that direct engagement,” Boitano said.
Enterprise support and cloud-native deployment options for AI Another key element of enterprise support is making it easier to actually deploy different AI tools in the cloud. Installing and configuring AI tools is often a complicated challenge for the uninitiated.
Among the most popular approaches to cloud deployment today is the use of containers and Kubernetes in a cloud-native model. Boitano explained that Nvidia AI Enterprise is available as a collection of containers. There is also a Helm chart, which is an application manifest for Kubernetes deployment, to help automate the installation and configuration of the AI tools in the cloud.
An even easier approach is provided by Nvidia LaunchPad labs, which is a hosted service on Nvidia infrastructure for trying out the tools and frameworks that are supported by the Enterprise AI software suite.
The TAO of Nvidia Making it easier to build models for computer vision and speech recognition use cases is a key goal of Nvidia’s TAO toolkit, which is part of the Nvidia Enterprise AI 2.1 update.
Boitano explained that TAO provides a low-code model for organizations to take an existing pretrained model and tune it to a user’s own specific environment and data. One particular example of where TAO can help is with computer vision applications in factories.
Lighting conditions vary from factory to factory, creating glare on cameras that can impact recognition. The ability to label a small amount of data from the specific environment, where the lighting may differ from what the pretrained model saw, and retrain on it can help improve accuracy.
“TAO provides a lightweight way to retrain models for new deployments,” Boitano said.
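TAO itself is a packaged low-code toolkit, but the pattern it automates, fine-tuning a pretrained vision model on a small amount of site-specific data, can be sketched in generic PyTorch. The snippet below illustrates that transfer-learning idea under assumed inputs; it is not the TAO toolkit's actual interface, and the dataset directory is a placeholder.

```python
# Generic transfer-learning sketch (not the TAO toolkit's API): fine-tune a
# pretrained classifier on a small set of freshly labeled factory images.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

DATA_DIR = "factory_a_images"  # hypothetical folder of site-specific images

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder(DATA_DIR, transform=transform)
loader = DataLoader(dataset, batch_size=16, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")  # start from pretrained weights
for param in model.parameters():
    param.requires_grad = False                    # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))  # new head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(3):  # a few passes over a small dataset is often enough
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The same idea, packaged with pretrained models, data formats and export steps, is what a low-code toolkit automates for teams that do not want to maintain training code themselves.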
Looking forward to future Nvidia AI Enterprise releases, Boitano said that the plan is to continue making it easier for organizations to use different toolkits for deploying AI and ML workflows in production.
"
|
1,683 | 2,022 |
"Nvidia adds functionality to edge AI management | VentureBeat"
|
"https://venturebeat.com/ai/nvidia-adds-functionality-to-edge-ai-management"
|
"Nvidia adds functionality to edge AI management
Nvidia already has a worldwide reputation, and the No. 1 market share, for making top-flight graphics processing units (GPUs) that render images, video, and 2D or 3D animations for display. Lately, it has used that success to venture into IT management territory, this time without making hardware.
One year after the company launched Nvidia Fleet Command, a cloud-based service for deploying, managing, and scaling AI applications at the edge, it has added new features that help address the distance to far-flung edge servers by improving the management of edge AI deployments around the world.
Edge computing is a distributed computing system with its own set of resources that allows data to be processed closer to its origin instead of having to transfer it to a centralized cloud or data center. Edge computing speeds up analysis by reducing the latency time involved in moving data back and forth. Fleet Command is designed to enable the control of such deployments through its cloud interface.
“In the world of AI, distance is not the friend of many IT managers,” Nvidia product marketing manager Troy Estes wrote in a blog post. “Unlike data centers, where resources and personnel are consolidated, enterprises deploying AI applications at the edge need to consider how to manage the extreme nature of edge environments.” Cutting out the latency in remote deployments Often, the network links connecting data centers or clouds to a remote AI deployment are difficult to make fast enough for use in a production environment. With the large amount of data that AI applications require, it takes a highly performant network and careful data management to make these deployments work well enough to satisfy service-level agreements.
“You can run AI in the cloud,” Nvidia senior manager of AI video Amanda Saunders told VentureBeat. “But typically the latency that it takes to send stuff back and forth – well, a lot of these locations don’t have strong network connections; they may seem to be connected, but they’re not always connected. Fleet Command allows you to deploy those applications to the edge but still maintain that control over them so that you’re able to remotely access not just the system but the actual application itself, so you can see everything that’s going on.” With the scale of some edge AI deployments, organizations can have up to thousands of independent locations that must be managed by IT. Sometimes these must run in extremely remote locations, such as oil rigs, weather gauges, distributed retail stores, or industrial facilities. These connections are not for the networking faint of heart.
Nvidia Fleet Command offers a managed platform for container orchestration using a Kubernetes distribution, which makes it relatively easy to provision and deploy AI applications and systems in thousands of distributed environments, all from a single cloud-based console, Saunders said.
Optimizing connections is also part of the task Deployment is only one step in managing AI applications at the edge. Optimizing these applications is a continuous process that involves applying patches, deploying new applications, and rebooting edge systems, Estes said. The new Fleet Command features are designed to make these workflows work in a managed environment with: Advanced remote management: Remote management on Fleet Command now has access controls and timed sessions, eliminating vulnerabilities that come with traditional VPN connections. Administrators can securely monitor activity and troubleshoot issues at remote edge locations from the comfort of their offices. Edge environments are extremely dynamic — which means administrators responsible for edge AI deployments need to be just as dynamic to keep up with rapid changes and ensure little deployment downtime. This makes remote management a critical feature for every edge AI deployment.
Multi-instance GPU (MIG) provisioning: MIG is now available on Fleet Command, enabling administrators to partition GPUs and assign applications from the Fleet Command user interface. By allowing organizations to run multiple AI applications on the same GPU, MIG enables organizations to right-size their deployments and get the most out of their edge infrastructure.
Several companies have been using Fleet Command’s new features in a beta program for these use cases: Domino Data Lab, which provides an enterprise MLops platform that allows data scientists to experiment, research, test and validate AI models before deploying them into production; video management provider Milestone Systems, which created AI Bridge, an application programming interface gateway that makes it easy to give AI applications access to consolidated video feeds from dozens of camera streams; and IronYun AI platform Vaidio, which applies AI analytics to help retailers, banks, NFL stadiums, factories, and others fuel their existing cameras with the power of AI.
The edge AI software management market is projected by Astute Analytics to reach $8.05 billion by 2027. Nvidia is competing in the market along with Juniper Networks, VMWare, Cloudera, IBM and Dell Technologies, among others.
"
|
1,684 | 2,022 |
"Intel, Wayfair, Red Hat and Aible on getting AI results in 30 days | VentureBeat"
|
"https://venturebeat.com/ai/intel-wayfair-red-hat-and-aible-on-getting-ai-results-in-30-days"
|
"Intel, Wayfair, Red Hat and Aible on getting AI results in 30 days Companies are rushing to invest in AI — but less than 20% of AI investments are resulting in the transformations that AI promises.
VB Transform 2022 brought together business leaders from Intel, Wayfair, Red Hat and Aible to discuss how they’re beating the odds to actually harness the full value of AI.
“The word ‘transformative’ is the catchphrase there,” said Arun K. Subramaniyan, vice president cloud and AI, strategy and execution at Intel. “Twenty percent of the investments are actually reaping the benefits they were supposed to when you sold the project. And then whether they’re getting you the business outcomes at the level you wanted for that investment is really the question.” Companies are beginning to walk rather than crawl; now it’s a question of how quickly they can get to the running phase, and then sustain that level of transformation. But transformation and business outcomes can take months, said Fiona Tan, CTO of Wayfair.
As a tech-enabled company in the digital space focused on the home goods category, Wayfair has found the secret is concentrating on practical applications of AI that tackle urgent business use cases. The company is also selective about where it applies its AI and ML work. But transformation takes time, she noted, because AI and ML capabilities are quite different from traditional software algorithms, which offer instant results.
“With a lot of AI and ML-based models, it will take a while. It’s very iterative,” she explained. “To that point, when you’ll see transformational change, we don’t usually see that in the first X number of days or weeks. That usually does take time for us. With us, customers are coming in. We’re learning from them. We’re adapting.” Experience, iteration and adaptation are key for Arijit Sengupta, founder and CEO of Aible. Sengupta said he went through more than a thousand AI projects with his previous company, BeyondCore, which built technology for smart data discovery — and then wrote a book called AI Is a Waste of Money , after most of those AI projects failed. But he partnered with Intel to start Aible, an enterprise AI solution that guarantees impact in one month.
“When we started, nobody knew how you would get to value in 30 days. It was just rational to say that large companies can’t do this,” he said. “The good thing was I had done it more than a thousand times myself. My team had done about 4,000 AI projects. We knew where the bodies were buried. We could do it right the second time.” It does depend on the individual enterprise more than anything else, said Bill Wright, head of AI/ML and intelligent edge, global industries and accounts, at Red Hat.
“I’ve spoken with some customers that have phenomenal development capabilities,” he said. “They’ve gone through all the DevOps and MLOps steps to make everything very efficient. There’s so much more under the covers.” But some data scientists don’t realize all the work that goes into those production environments, how much can go right and can go wrong. Enterprises are at so many different stages of the journey toward understanding where their challenges lie, and how to tackle them. Success comes not only from iteration, but understanding the customer.
“It’s always about talking to the customer, understanding what their pain is, understanding what they’re going through,” said Wright. “All the technical advances I’ve ever experienced have been through customer conversations. I think that’s been the biggest lesson.” Moving outside the AI/ML comfort zone Reaching true digital transformation requires tackling bigger challenges, where the risks might be larger. For Wayfair, the most urgent problems to solve initially were marketing and customer acquisition. They were able to automate and take some measured risks around bidding, which also deepened a lot of their customer strategy.
“As we got more and more experience, we took that and it morphed into, how do we understand the customer better?” Tan said. “It became the beginning of building up our customer graph. Expanding our AI and ML journey.” They did a similar thing on the product side, mining product information from suppliers to augment and enrich data the company already has. Combining the customer graph that arose from customer acquisition and marketing efforts with their product graph allows the company to offer the best possible experience to customers in every search and shopping experience. And each step in the journey builds on the one before it, enriching current capabilities and opening up opportunities to use AI and ML in other areas.
“We sell big things that are hard to move and expensive to move. How can I use AI and ML for optimizing my supply chain — offer up a capability where ideally I serve you the most relevant green couch based on what you’re looking for, but I also want to make sure I can serve you one that’s at the fulfillment center closest to you, so there’s the least possibility of damage,” Tan explained. “That’s the culmination of pulling together all these disparate components to be able to offer up a solution.” Often the issue slowing down AI transformation is too little sponsorship from leadership, Sengupta said, and too-large expectations.
“We figured out that if you go to [the leadership team] and say, ‘What kind of AI do you want?’, they want a flying car from Back to the Future,” he said. “The data may be able to give them a really fast boat or a medium speed car or a really slow plane. But when you start from the data and you can show them interesting patterns in the data and engage them early, they’re not asking for something crazy. Then you can give it to them.” If you take the risk points, solve them early in the project, and iterate very fast, you can get to a good result, he added.
“Remember the difference,” Sengupta said. “I’m not saying you can do any AI project in 30 days. I’m saying you can have significant success from AI in 30 days. The two are very different. An iPad can’t do what a supercomputer does, but an iPad creates a lot of value.” When winnowing down the pain points and business use cases to get to the right AI projects, where you are in your AI journey matters a lot, Subramaniyan said.
“But where the world is, the world of AI, in terms of the spectrum of development also matters,” he said. “We’ve all heard about how fast the world of AI is moving. We can actually take advantage of that rather than being intimidated by it.” The amount of investment required to actually build a large model can be daunting, but once the models have been built, or you find them open source, it’s about taking advantage of that so you can leapfrog, he said.
“As business leaders, that’s something you can think about rather than thinking about the large investment,” he said. “In some ways it helps you to be a little late, because now you can learn the mistakes made by everyone else, and also leapfrog ahead of them. You don’t necessarily have to think about your business as being small or large, or competing with the large AI powerhouses. We’re taking that and making sure we can democratize across the board. That’s what Intel is working on, both from a hardware standpoint, but more important from a software standpoint. AI is a software problem first. Hardware is an enabler for that.” Watch the full, in-depth discussion and catch up on all Transform sessions by registering for a free virtual pass right here.
"
|
1,685 | 2,022 |
"FeatureByte launched by Datarobot vets to advance AI feature engineering | VentureBeat"
|
"https://venturebeat.com/ai/featurebyte-launched-by-datarobot-vets-to-advance-ai-feature-engineering"
|
"FeatureByte launched by Datarobot vets to advance AI feature engineering
Artificial intelligence (AI) offers a lot of promise to enterprises to help optimize processes and improve operational efficiency. The challenge for many, though, is getting data in the right shape and with the right processes to actually be able to benefit from AI.
That’s the challenge that the two cofounders of FeatureByte, Razi Raziuddin and Xavier Conort, noticed time and again while working at enterprise AI platform vendor Datarobot.
Raziuddin worked for over five years at Datarobot including a stint as the senior VP of AI services, while Conort was the chief data scientist at Datarobot for over six years.
“One of the challenges that we’ve seen is that AI is not just about building models, which is really the focus of not just Datarobot, but pretty much the entire AI and ML [machine learning] tooling space,” Raziuddin told VentureBeat. “The key challenge that still remains and we call it the weakest link in AI development, is just the management, preparation and deployment of data in production.” Borrowing data prep from data analytics to improve AI development Raziuddin explained that feature engineering is a combination of several activities designed to help optimize, organize and monitor data so that it can effectively be used to help build features for an AI model. Feature engineering includes data preparation and making sure that data is in the correct format and structure to be used for machine learning.
In the data analytics world, the process of data preparation isn’t a new discipline; there are ETL (extract, transform and load) tools that can take data from an operational system and then bring them into a data warehouse where analysis is performed. However, that same approach hasn’t been available for AI workloads, according to Raziuddin. He said that data preparation for AI requires a purpose-built approach in order to help automate a machine learning (ML) pipeline.
In order to do really good feature engineering and feature management, Raziuddin said that a combination of several critical skills is needed. The first is data science, with the ability to understand the structure and format of data. The second critical skill is understanding the domain in which the data is collected. Different data domains and industry use cases will have different data preparation concerns; for example, data collected for a healthcare deployment will be very different from data used for a retail business.
With a thorough understanding of the data, it’s possible to build features in AI that will be optimized to make the best use of the data.
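As a concrete illustration of what building features from raw data looks like, the snippet below turns an event-level transaction table into per-customer features with pandas. The column names, aggregations and cutoff date are invented for the example; this is a generic sketch of the practice Raziuddin describes, not FeatureByte's product.

```python
# Minimal feature-engineering sketch with pandas (illustrative only; the
# columns, windows and cutoff date are hypothetical).
import pandas as pd

# Raw, event-level data: one row per customer transaction.
transactions = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2],
    "amount": [20.0, 35.5, 12.0, 80.0, 5.5],
    "timestamp": pd.to_datetime([
        "2022-06-01", "2022-06-20", "2022-06-03", "2022-06-15", "2022-06-28",
    ]),
})

CUTOFF = pd.Timestamp("2022-07-01")  # features are computed as of this date

# Aggregate events into per-customer features a model can consume directly.
features = (
    transactions
    .groupby("customer_id")
    .agg(
        txn_count=("amount", "count"),
        total_spend=("amount", "sum"),
        avg_spend=("amount", "mean"),
        days_since_last_txn=("timestamp", lambda s: (CUTOFF - s.max()).days),
    )
    .reset_index()
)
print(features)
```

The hard parts the article points to, keeping such pipelines consistent between training and production and knowing which aggregations make sense for a given domain, are exactly what a dedicated feature-engineering platform aims to manage.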
Automating feature engineering for AI Getting data in the right shape for AI has often involved the need for a data engineering team in addition to one or more data scientists.
What FeatureByte is aiming to do is to help solve that pain point and provide a streamlined process for having data pipelines available for data scientists to use for building features for their AI models. Raziuddin said that his company is really all about removing friction from the process and making sure that data scientists can do as much as possible within a single tool, without having to rely on a data engineering team.
The company’s technology is still in development, though it has some clear goals for what the platform should be able to do. Today, it announced that it has raised $5.7 million in a seed round of funding. Raziuddin said the company will use the funding to build a platform that embeds domain knowledge and data engineering expertise to accelerate the process of feature engineering.
FeatureByte’s platform will be cloud-based and will be able to leverage existing data resources, including cloud data warehouses and data lake technologies such as Snowflake and Databricks.
“With the number of AI models increasing, the number of data sources that are available to build these models is going up at a faster pace than most teams are able to handle,” Raziuddin said. “So unless there is tooling and unless that process is automated and streamlined, it’s not something that companies are going to be able to keep up with.” The seed funding was led by Glasswing Ventures and Tola Capital.
"
|
1,686 | 2,022 |
"Dive into a full day of data infrastructure insight at VB Transform 2022 | VentureBeat"
|
"https://venturebeat.com/ai/dive-into-a-full-day-of-data-infrastructure-insight-at-vb-transform-2022"
|
"Dive into a full day of data infrastructure insight at VB Transform 2022 Transform, VentureBeat’s flagship event, returns this year both in-person and virtually for two full weeks. On July 20, our virtual Data Week kicks off with a deep dive into the importance of data infrastructure. Because if AI and machine learning are the driving force of enterprise transformation, data infrastructure is the engine.
Your data architecture is the crucible in which business needs are transformed into data and system requirements, and where its flow through the enterprise is managed. A modern data architecture can finally break down those departmental data silos that slow down progress and innovation, so that every stakeholder has user-friendly access to data across the company.
Speakers from Cigna, Capital One, Intuit, McDonald’s and more will put the spotlight on both best practices and top strategies in data infrastructure, and the technology enabling it. And you’ll walk away with true applied lessons that can be carried over into organizations across all industries.
Learn how these companies are using data to bridge the gap between business goals and technology, ramping up speed, flexibility, innovation, and more.
A look at the July 20 agenda First, at 9:00 AM PDT, you’ll learn how Intuit and Canva employees save thousands of work hours while increasing productivity with the help of automation and the breakdown of data silos. With custom bots and workflows from Slack, both companies were able to create digital headquarters that bring both system and human events together to empower more meaningful work and transform business outcomes.
Intuit utilized Slack’s reimagined platform to create a custom bot that made its service team 36% faster and saved agents over 9,000 hours of work, while Canva’s finance team built a workflow within Slack that saves them over 64 hours of work each week.
Steve Wood, SVP Product Management at Slack will share best practices for approaching automation in ways that are designed to scale human interaction, how organizations can empower people to turn noise into productivity, and how data can be properly connected to the people who do the work.
At 9:20 AM PDT, Sachin Joshi SVP, Chief Data & Analytics Engineering Officer of Evernorth, a subsidiary of Cigna Corporation, will talk about how data can fix fragmented care, lower health costs, stop gaps-in-care and change how patients engage with the healthcare system.
Joshi will dive into their industry-first, outcomes-based model of care that guarantees clinical targets and connects patients’ touch points across the care continuum.
They’ll share how their automated platform can flag situations requiring clinical attention, implement interventions tailored to patients’ clinical conditions and engagement preferences, and provide 360-degree visibility to patient activity, resulting in the ultimate care coordination program.
At 9:40 AM, you’ll learn how AI leaders from McDonald’s, Databricks, and the AI Framework drive smarter, more personalized customer experiences at scale with data.
They’ll talk about the challenges of scaling AI and ML initiatives, lessons learned in the quest for customer satisfaction, and how companies can cut through complexity and launch their own AI strategies.
Data is the backbone of any organization’s digital transformation, but a lack of specialized knowledge can significantly hamper a company’s progress. At 10:00 AM PDT, you’ll hear how Skillsoft’s CIO Orla Daly launched a burgeoning internal training program to improve data literacy and IT’s ability to leverage data across the organization.
Plus, you’ll learn how multimodality training speeds up data literacy, the positive impacts of growing your team’s skillset, and what it takes to create a culture of data and learning.
It’s a roundtable discussion at 10:30 AM PDT, when George Trujillo, Principal Data Strategist at DataStax, talks about why real-time analytics and ML are at the tip of the spear in competing for the loyalty of customers, and how to build a real-time technology stack and data architecture for a winning analytics and machine learning strategy.
Plus, you’ll hear about the capabilities and characteristics of an enterprise real-time data ecosystem, architecting data flows across a data supply chain, reducing business friction between centralized and decentralized teams and key areas in a data strategy that drive business innovation, and more.
Finally, at 11:30 AM PDT, learn why operationalizing data mesh is critical for operating in the cloud.
Patrick Barch, Sr. Director of Product Management at Capital One Software, will take attendees through the basics of a data mesh framework, plus the set of principles that companies can adopt to help scale a well-managed cloud data ecosystem. Plus, you’ll get best practices for scaling your data ecosystem using data mesh concepts, which boosts efficiency by combining centralized tooling and policy with federated data management responsibility.
Two full weeks of VB Transform are coming up fast, so don’t forget to register now!
July 19: The Data & AI Executive Summit | LIVE at The Palace Hotel, San Francisco, CA. Join over 500 data and AI executives in person to network with peers and hear success stories from C-level thought leaders on applied data and AI strategies.
July 20-22: The Data Week | Virtual
July 26-28: The AI & Edge Week | Virtual
Cutting across the most critical topics in data and AI, learn from wherever you are across the two weeks.
Get the full agenda here.
"
|
1,687 | 2,023 |
"Building a safer internet: The growth of Aura led by Hari Ravichandran | VentureBeat"
|
"https://venturebeat.com/business/building-a-safer-internet-the-growth-of-aura-led-by-hari-ravichandran"
|
"Contributor Content Building a safer internet: The growth of Aura led by Hari Ravichandran The digital age has undeniably brought with it an explosion of opportunities and breathtaking innovations. Smart cities, AI-driven solutions, virtual realities — the horizon of possibilities seems limitless. However, there’s a flip side to every advancement. With accelerated digitalization, we’ve seen a growing dark underbelly where savvy cyber criminals rule.
Hari Ravichandran of Aura is one of the visionaries spearheading the fight for internet security.
Hari’s commitment to cybersecurity stems from a deeply personal encounter. “In 2014, my identity was stolen,” he recounts. “I found out when my bank denied me a mortgage and I realized my credit rating had plummeted. It took some digging, but I eventually figured out that my identity had been hijacked. Sorting out the ramifications was extremely complicated — it took me weeks.” That experience opened Hari’s eyes to the vulnerabilities that many face in the digital world. While there were tools available for internet security, none offered comprehensive protection. Hari realized that subscribing to multiple services still left gaping vulnerabilities. This glaring gap in the market was the impetus behind Aura, an AI-powered online safety company launched in 2018.
As the CEO of Aura, Hari seeks to offer a holistic approach to internet security. Aura combines an array of technical solutions, from antivirus and VPN to password management, and doesn’t stop there: it scales its offerings to include identity theft monitoring, spam call filtering and financial fraud protection within a monthly subscription.
Aura and other digital security solution providers are particularly needed in the age of AI. Artificial Intelligence (AI), a groundbreaking force, perfectly exemplifies the challenges internet users face. On one hand, AI offers us unprecedented efficiency, data analysis and even companionship in the form of chatbots. However, on the flip side, malevolent AI programs, crafted in the obscure corners of the digital realm, pose serious threats. From AI-driven phishing attacks to deepfakes that blur the line between reality and falsehood, today’s unprecedented problems require cutting-edge solutions.
But it’s not just AI. With the increasing interconnectivity of devices, there are more vulnerabilities than ever before. Personal data breaches, identity theft and financial fraud are becoming unnervingly common. It’s a paradox of our times: as our digital prowess grows, so do the challenges to safeguard it.
“I am fascinated by the evolution of technology and how this helps make human life better,” Hari shares. This fascination led to his first business, BizLand, which later evolved into Endurance International Group (EIG), a global multi-billion-dollar web hosting enterprise.
It’s not every day that a billion-dollar CEO chooses to step back into the realm of startups, but that’s exactly what Hari did when he launched Aura. As he explains, his goal wasn’t just about the accolades or the corner office. He is, at his core, a builder, and that’s what drew him to the cybersecurity space.
“I believe in the power of technology to do good — that’s why I founded Aura,” Hari explains. Under Hari’s guidance, Aura has not just grown; it has flourished, attaining the coveted unicorn status with a valuation of $2.5 billion. This growth isn’t just a testament to his business acumen but his commitment to creating a safer digital space for future generations.
“Our mission at Aura is to create a safer internet, not for corporations but for individuals,” he explains. “I’m a dad. I have three children, and they’re growing up as digital natives. I want to make sure that they are as safe as possible while exploring online spaces.” With the challenges of the digital age evolving daily, it’s promising to know there are people like Hari Ravichandran working to create a safer internet for everyone.
VentureBeat newsroom and editorial staff were not involved in the creation of this content.
"
|
1,688 | 2,021 |
"Moving autonomous vehicles from R&D to mass production is closer than you think | VentureBeat"
|
"https://venturebeat.com/2021/07/06/moving-autonomous-vehicles-from-rd-to-mass-production-is-closer-than-you-think"
|
"Sponsored Moving autonomous vehicles from R&D to mass production is closer than you think Presented by Pony.ai The vision of safe, reliable autonomous vehicle transportation at scale is closer than ever to being realized, says James Peng, CEO at Pony.ai.
Since its founding in Fremont, California in late 2016, the company has been making strides in autonomous mobility deployment in both the U.S. and China. It was the first to launch and offer a public-facing Robotaxi service in both countries.
“The technology is moving from experience-level to application,” Peng says. “The first half of the game is to build stable and mature products and accumulate experience. The second half is to move from R&D to mass production and scale, in addition to achieving commercially viable products.” In 2019, Peng predicted that the world would see wide adoption of fully autonomous vehicles on open roads within five years, and that forecast is gaining momentum. Pony.ai just received a driverless permit from the California DMV, a milestone that allows their engineers to perform driverless testing without a human driver behind the wheel on public roads within the state. These tests are accelerating commercial growth across their operation sites worldwide, and they plan to launch driverless Robotaxi service to the public in California starting in 2022.
“I’m pretty confident that in many of the larger cities in the U.S., people will soon be able to ride with a driverless Robotaxi vehicle,” says Tiancheng Lou, CTO at Pony.ai. “A significant portion of taxi services will be supported by driverless vehicles.” Pony.ai currently offers ride-hailing in its autonomous vehicles in five markets: Irvine and Fremont in California; as well as Beijing, Shanghai, and Guangzhou in China. The company plans to install its technology in hundreds of vehicles next year, rising to tens of thousands in 2024-2025.
Waymo, Google’s self-driving car arm, has been piloting a fleet of fully driverless robotaxis in Phoenix for the past year, while Amazon-owned Zoox unveiled its autonomous robotaxi service last December.
The goals of autonomous driving According to most companies, the biggest benefit of autonomous driving is safety. The U.S. Department of Transportation has reported that 94% of car crashes are caused by human error.
While overall driving was down in 2020 because of the pandemic, traffic fatality rates surged 24% — the highest spike in nearly a century. Autonomous driving is the solution to this issue, Peng says, and companies like Pony.ai, Google, and Uber, along with most of the major automakers, are sinking considerable money into R&D, optimistic about the future of autonomous driving.
A transition to autonomous vehicles also has potential to reduce greenhouse gas emissions and support U.S. economic growth, points out the Self-Driving Coalition for Safer Streets.
The coalition, whose members include Ford, Volvo, Lyft, Uber, and Waymo, promotes the benefits of fully self-driving vehicles and supports the fastest deployment possible, advancing the scale of the technology to enable mass production and start moving autonomous vehicles to the center of the smart city vision.
Challenges of driving autonomy at scale Pony is hoping to achieve large enough scale to reduce costs and meet commercial needs by 2022 with Alpha X, the working name for their latest generation of autonomous vehicles, which are designed to be manufactured on a production line, Lou says.
“To have a large number of vehicles, we need to manufacture them in a standardized way,” Lou says. “It’s a critical step to achieve front-loading mass production of self-driving cars.” The company established a production line and a set of standardized processes specifically for L4-level autonomous driving systems last November. With the support of the production line, the production efficiency of the autonomous driving system doubled six times compared with the previous generation, and production pace accelerated.
With PonyAlpha X, they established an L4-level autonomous driving system production line at scale, Lou says. To manufacture the vehicle, they established a supply chain management process, hardware module design and verification, test production, and car conversion before final assembly. The end phase consists of overall quality inspection, off-line calibration, and road testing. The project helped them set up a standardized production process, which shortened production time, effectively reduced costs, and improved the stability of the overall system.
Compared with the previous generation, PonyAlpha X is more compact, integrated, and lightweight in terms of system hardware integration, reducing the cost of post-maintenance. They recently partnered with Toyota to equip PonyAlpha X on Lexus RX models.
In February of this year, the company rolled out the new self-driving vehicles equipped with the latest generation system from the standardized production line. The cars will go through all-day autonomous-driving open-road tests in Guangzhou, Beijing, and Shanghai before they join the company’s Robotaxi fleet for large-scale operations.
Their recent partnership with Luminar , an autonomous vehicle sensor and software company, has also allowed them to significantly bring down the cost of manufacturing, which is always a major hurdle for mass production.
“For the next generation, one of the biggest challenges is we’re trying to use auto-grade sensors,” Lou says. “One of the critical pieces is Luminar LiDAR. Using that LiDAR, we’re moving one step toward being ready for mass production, and then we can prepare for large scale production. Luminar LiDAR, compared to our current solution, is more reliable and much cheaper.” LiDAR is a laser sensor that sends millions of laser points out per second and measures how long they take to bounce back, a key component of safe autonomous driving. Luminar says that its Iris LiDAR has a maximum range of 500 meters (1,640 feet), including 250-meter range at less than 10 percent reflectivity. The self-driving sensor solution offers better perception accuracy and field-of-view breadth to enhance the ability to cope with long-tail scenarios. The new Pony.ai vehicles with Luminar’s LiDAR sensor will be up and running in 2022, and will be ready for the company’s robotaxi customers in 2023.
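The range figures quoted here follow from simple time-of-flight arithmetic: a pulse travels to the target and back, so range equals the speed of light times the round-trip time, divided by two. A quick check of the 500-meter specification (a generic calculation, not Luminar's published timing data):

```python
# Time-of-flight arithmetic behind a lidar range spec.
C = 299_792_458  # speed of light in meters per second


def range_from_round_trip(t_seconds):
    """Distance to the target given the pulse's round-trip time."""
    return C * t_seconds / 2


# A 500 m return corresponds to roughly a 3.34 microsecond round trip.
round_trip = 2 * 500 / C
print(f"round trip: {round_trip * 1e6:.2f} microseconds")
print(f"range: {range_from_round_trip(round_trip):.1f} m")
```

At millions of pulses per second, those microsecond-scale round trips are what let the sensor build a dense 3D picture of the road in real time.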
Autonomous driving at scale is, literally, in sight The past year hasn’t been easy, Lou says, but it pushed their contactless delivery initiative further along in Irvine, which helped the community cope during the pandemic while adding test miles and data points. But full autonomy at scale still requires testing.
To help accelerate their commercial growth and global deployment, they’ve tapped Lawrence Steyn, vice chairman of investment banking at JPMorgan Chase & Co, as chief financial officer.
And looking ahead, in 10 years or 20 years, it could be possible that owning a vehicle would entirely be a luxury, Lou says.
“We need to carefully think about whether that’s necessary, having personal vehicles,” he says. “If it’s mathematically possible to get a vehicle within as little as 30 seconds, most people may not have to own a vehicle.” He also points out that if most vehicles are running autonomously, the vehicle form factor can be reshaped for accessibility, opening up safe transportation for a larger population of older and disabled riders who could turn to cheaper, more efficient autonomous transportation for their travel needs.
“I still keep the engineering part in me, and with that part, I dream of pushing the world forward with state-of-art technology,” Peng says. “We are working hard to largely scale and deploy our technology across countries and regions to benefit more and more people.” Dig deeper.
Learn how Pony.ai is building the future of autonomous driving at scale.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected].
"
|
1,689 | 2,021 |
"The trends driving change in the post-COVID era of eprocurement | VentureBeat"
|
"https://venturebeat.com/2021/06/14/the-trends-driving-change-in-the-post-covid-era-of-eprocurement"
|
Sponsored The trends driving change in the post-COVID era of eprocurement Presented by Amazon Business Digital transformation has been driving significant change in the procurement landscape for years. New technological solutions have pushed the boundaries of what’s possible, and what’s affordable, for organizations of every size to evolve their operations and unlock areas of growth. But the pandemic has accelerated this shift online, explains Rob Green, General Manager, Amazon Business Public Sector at Amazon Business.
The new Amazon Business “B2B E-Commerce in Evolution” report dives deep into the trends reshaping B2B ecommerce for midsize to enterprise businesses, as necessitated by the shift to remote work and the demand for cost containment. The goal: to capture the insights buyers and sellers need to know to evolve their operations and achieve success in the rapidly shifting landscape of procurement.
“This shift towards eprocurement is important because it will accelerate lasting changes across the entire organization that will positively impact growth, efficiency, and more,” Green says. “When B2B buyers incorporate more digital solutions into the purchasing process, operational efficiencies are felt across the entire organization.” The eprocurement trends to watch “Our data shows that the B2B purchasing experience is showing customer demand for more selection and shipping options that are eco-friendly and sustainable; organizations are increasingly setting goals to support buying from local businesses; and, there is a greater emphasis on sourcing from diverse sellers,” Green says. “All of these trends play a larger role in B2B procurement.” For sellers, one of the largest trends was the role eprocurement has played in opening new channels and opportunities to have a global reach. The survey found that 59% of sellers reported expanding their customer base is a top priority in 2021, and selling products globally is a top priority for 40%. A global audience offered by online channels means that sellers have unprecedented ability to expand their business significantly.
In ranking the most valuable features of the purchasing process, buyers overwhelmingly indicated that online features are more valuable than traditional ones, like phone calls or viewing products in person. This means sellers can lean on online procurement features to make their selection stand out, such as improving their product detail pages and images. These efforts will give their items a virtual shelf that has no geographical boundaries and can reach a wider customer base outside their historical reach.
“This ability to scale is spurred by more global visibility, but also by increasing desire from B2B buyers for a more consumer-like experience where self-service is the expectation,” Green points out. “Buyers value convenience, and sellers that can offer this benefit along with robust product detail will be well-equipped to grow.” For buyers, social and environmental considerations are playing a larger role in the procurement process, particularly for mid-sized and enterprise businesses. Research revealed a few key trends reflecting industry shifts towards value-based buying and selling decisions: 83% of buyers surveyed said their companies plan to increase spending reserved for Black and minority-owned businesses in 2021 and of those, almost half (48%) plan to increase their budgets for spending with diverse sellers by 20% or more. While increasing efficiency was the top buyer procurement priority for 2021 at 40%, improving sustainability was of almost equal importance at 39%. Amazon Business provides the tools for buyers to easily identify and connect with brands and products that align with those values.
“As the line separating consumer and B2B purchasing blurs, procurement teams and professionals can support meaningful causes through their business purchasing decisions,” Green says. “Sustainability is top-of-mind as buyers look to reduce their carbon footprint.” Facilitating diversity in the procurement world “One of the most positive shifts we’re seeing in the business world right now is towards more equitable work practices, and as an extension, a greater emphasis on supporting small, local, minority, women, veteran, and LGBTQ-owned businesses,” Green says. “By supporting these diverse businesses, buyers can help spur economic development in their communities.” According to the survey, 39% of buyers consider increasing diversity among suppliers a top priority this year. However, matchmaking or finding a small or diverse business to purchase from is not always simple, Green adds. Amazon Business connects a wide audience of buyers with small, diverse, and local sellers through advanced search and filtration features, as well as tools for diverse sellers to upload their national or state-recognized credentials and increase visibility with those businesses looking to purchase from them.
Transition and growth in a digital procurement world One of the biggest benefits of a digitalized procurement world is that sellers and buyers are realizing brand-new opportunities that weren’t possible via traditional commerce channels. For instance, digitization opens the door for small sellers to connect with large buyers who they may have trouble reaching.
“For example, certified Black- and veteran-owned small business Aldevra increased its sales by more than 300% since 2016 on Amazon Business,” Green says. “The company now works with customers across the nation, signaling the long-term positive effect of ecommerce on the success of smaller sellers.” To achieve the same growth, small businesses should take note of buyer preferences and align their online presence accordingly. The survey found that more than 80% of buyers highly value detailed product descriptions. Sellers can leverage online tools, such as pricing comparisons, listing optimizations, and customer reviews to meet the expectations of larger buyers.
On the other hand, larger sellers can focus on reaching buyers of any size online and improving operational efficiency to drive down costs and focus business improvements elsewhere.
The future of procurement The biggest trend might simply be that procurement is moving online. The survey found 85% of business buyers’ organizations were propelled to move more of their procurement online and 96% said they anticipate their organizations will continue doing more purchasing online, even after pre-pandemic business functions resume. And more than a third (36%) of buyers said they anticipate their organizations will make 50% or more of their purchases online this year.
The momentum towards online purchasing is likely to have major implications on the future of business buying, Green adds. The vast majority (91%) of buyers prefer eprocurement over traditional methods, citing product range, competitive prices, and order speed as the top benefits. Additionally, the adoption of more consumer-like purchasing capabilities is spurring the adoption of additional B2C trends in the B2B world.
“Expectations between consumer and business purchasing experiences have blurred as buyers expect the same fast, convenient, and personalized digital buying capabilities they’ve grown accustomed to at home,” Green explains.
With procurement shifting online, sellers can prepare by leaning into digital features like enhanced product content, business pricing, and quantity discounts as well as advanced fulfillment that will provide customers with the experience they seek as expectations continue to shift.
“For seller organizations, adapting to meet buyer demands will allow them to remain relevant with their B2B customers, to make the most of the huge opportunity to engage more deeply with customers via digital channels,” he says.
For a closer look at the most important digital procurement trends impacting buyers and sellers, download the free “B2B E-Commerce in Evolution” report from Amazon Business.
Amazon Business B2B E-commerce in Evolution Report methodology Amazon Business surveyed 250 B2B buyers and 250 B2B sellers across the U.S. in 2021. Buyer respondents included full- and part-time employees across a range of job levels who worked at organizations of various sizes in the following sectors: government, education, healthcare, and commercial industries. All buyers’ organizations made an annual revenue of more than $25 million. All buyer respondents played an influential role in their organization’s procurement process. Seller respondents included full- and part-time employees across a range of job levels who worked at organizations of various sizes that sold products across a variety of categories.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected].
"
|
1,690 | 2,012 |
"Business intelligence startup Domo pulls in another $20M | VentureBeat"
|
"https://venturebeat.com/2012/01/31/business-intelligence-startup-domo-pulls-in-another-20m"
|
Business intelligence startup Domo pulls in another $20M Domo , which is building a dashboard for real-time business intelligence, has landed another $20 million without even trying.
The funds come from Silicon Valley-based Institutional Venture Partners , which also invested in Domo founder Josh James’ previous startup, Omniture. It brings the company’s total funding to $63 million.
“There’s probably three people a week that I turn down, who want to put money into our company,” said James, in an interview with VentureBeat. He agreed to accept IVP’s investment because the venture firm has a good track record and James has worked with it before.
Above: Josh James, founder of business intelligence company Domo “We thought it made sense to take additional money,” James said, but added: “We’re always trying to build the company in a way that we won’t need more money.” The company is calling this second round of investment its “Series A-1” tranche, to indicate that it has the same terms as its $33 million Series A round (but at a higher valuation). Previous investors include Benchmark Capital, Andreessen Horowitz, Ron Conway and David Lee of SV Angel, and Hummer Winblad.
Domo hopes to give executives and managers a single, customizable, sharable view of the business metrics that matter to them, delivering its product as an online service, similar to the way Salesforce.com, Omniture, Microsoft’s Office 365, and other software-as-a-service (SaaS) solutions work. It will aim at collecting a slice of the estimated $10 billion annual market for business intelligence services.
James’ previous gig was Omniture. After cofounding the web metrics company, he led it to its initial public offering in 2006 and subsequent $1.8 billion sale to Adobe in 2009.
He left Omniture in July, 2010.
Domo is based in Salt Lake City, Utah, and currently employs about 100 people, mostly engineers, James said. However, once the product is ready to go, he plans to kick the sales organization into high gear.
“We will definitely be a sales-driven organization,” James said. “Right now we’re in the process of making sure the product is absolutely right, and then we’re going to sell it to every man, woman, child and retired person I can find.” Photo courtesy Domo.
"
|
1,691 | 2,011 |
"Demo: We Are Cloud brings Business Intelligence to the masses | VentureBeat"
|
"https://venturebeat.com/2011/09/13/demo-we-are-cloud-brings-business-intelligence-to-the-masses"
|
Demo: We Are Cloud brings Business Intelligence to the masses
Smaller businesses often can’t afford a traditional enterprise Business Intelligence (BI) tool, nor do they have the in-house IT staff to administer and query it.
We are Cloud is launching Bime 3.0, a cloud-based business intelligence tool specifically for these small to medium sized businesses, at DEMO today.
Bime is designed to be accessible to anyone, from a web analyst to a product manager, rather than being restricted to IT staff.
BI tools can answer all kinds of questions for a small business from “Which sales are related to ad campaigns, social media activity and web traffic on my site?” to “When are my call centers most busy and how does that relate to my staffing schedule and budget plans?” Often the relevant data is stored in different formats and multiple locations. Daily sales numbers may be stored in a relational database, expenses in an accounting system or spreadsheet while traffic site data is retrieved from Google Analytics.
Bime 3.0 allows a user to create a query across all of these data sources in a web browser, regardless of the query language, file or metadata format. The user creates a business query by dragging and dropping the relevant data into a frame. Once the user is happy with the basic information, he or she can turn the results into graphs and charts.
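For a rough sense of what "one query across several data sources" means under the hood, the sketch below joins a hypothetical CSV export with a hypothetical SQLite table using pandas. Bime's own connectors, query engine and drag-and-drop layer are not shown; the file, database and column names are invented for illustration.

```python
# A generic cross-source join over hypothetical local data;
# this is not Bime's engine, just the general idea it abstracts away.
import sqlite3
import pandas as pd

sales = pd.read_csv("daily_sales.csv")            # hypothetical export: date, revenue
with sqlite3.connect("campaigns.db") as conn:     # hypothetical ad-spend database
    spend = pd.read_sql("SELECT date, ad_spend FROM campaigns", conn)

# Blend the two sources on their shared date column, then summarize.
combined = sales.merge(spend, on="date", how="inner")
print(combined.groupby("date")[["revenue", "ad_spend"]].sum())
```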
Bime competes with products from GoodData, Cloud9, Birst, Qliktech and Tableau software. Bime 3.0 is, however, completely cloud-based and the company claims that it can connect to an unmatched set of data sources either locally or on a web server anywhere in the world. The business model is software as a service (SaaS).
We are Cloud is based in Montpellier, France, was founded in 2009, has 12 employees and has raised $1 million in angel investment.
We are Cloud is one of 70 companies chosen by VentureBeat to launch at the DEMO Fall 2011 event taking place this week in Silicon Valley. After our selection, the companies pay a fee to present. Our coverage of them remains objective.
"
|
1,692 | 2,011 |
"Business intelligence provider GoodData raises $15M | VentureBeat"
|
"https://venturebeat.com/2011/08/18/gooddata-series-b-15m"
|
Business intelligence provider GoodData raises $15M
GoodData , a provider of online business intelligence software, announced today that it has raised $15 million in its second round of funding.
The company runs an online business intelligence service that tracks data from a number of sources — such as customer acquisition or customer relationship management software providers like Salesforce.com — and wraps it up in a neat, easy-to-digest package. It then gives business owners a number of analytics tools for that data, like projections for revenue and growth.
The funding was led by Andreessen-Horowitz.
The company’s chief executive and founder, Roman Stanek, has a pretty stellar track record — which was a large factor that led to Andreessen-Horowitz’ investment. Prior to GoodData, Stanek founded Java development company NetBeans, which he sold to Sun Microsystems. After that, he founded Systinet, which he sold to Mercury Interactive (Hewlett-Packard would go on to acquire that company in 2006).
“It starts with the entrepreneur and the team when we decide to invest in a company,” Andreessen-Horowitz general partner John O’Farrell told VentureBeat. “We think Roman is an exceptional entrepreneur who has assembled an exceptional team.” Like other enterprise 2.0 companies, GoodData is taking a shotgun approach to gathering data through partnerships instead of building up internal tools. Box.net, another enterprise 2.0 company, fleshes out its cloud storage product by integrating applications from the likes of Salesforce.com and Google to make its software more useful for enterprise companies.
Likewise, GoodData plans to partner with as many enterprises that generate valuable business data as it can. O’Farrell said that has become a critical strategy for smaller companies and startups lately if they want to compete with enterprise supergiants like SAP and Oracle. And most enterprises are more open to partnerships now than they were a decade ago, he said.
“My vision is to support every single software-as-a-service application out there, I want to have an extensive library or app store of GoodData applications,” Stanek told VentureBeat. “Business Intelligence was very successful, we can connect to a data source within a matter of minutes and start analyzing it.” Public trading markets have taken a significant beating lately and experienced a lot of volatility — but that isn’t deterring venture capital investing, O’Farrell said. Cloud computing companies in particular are gathering an enormous amount of interest thanks to the massive markets they attack with very little upfront capital costs, he said. Andreessen-Horowitz in particular will continue to invest in those companies, he said.
“Great companies will always be funded whether the market is up or down,” O’Farrell said. “If we see a great company at a stage where it makes sense for us to invest, market conditions would not deter us.” Stanek said he expects to use the funding to build up its sales team. GoodData launched in 2009 and has raised $29 million to date.
"
|
1,693 | 2,011 |
"Jaspersoft may be looking to acquire with its $11M funding | VentureBeat"
|
"https://venturebeat.com/2011/07/15/jaspersoft-funding-quest"
|
Jaspersoft may be looking to acquire with its $11M funding
Business intelligence software maker Jaspersoft announced yesterday that it raised $11 million in funding. The round was led by existing investors Red Hat and SAP Ventures, joined by newcomer Quest Software.
Jaspersoft caters to the enterprise with business intelligence products. It aims to centralize the way data is secured, delivered and analyzed.
“Data around us is exploding, the need to make sense of it is bigger than ever. You need to have an analytic product,” said Brian Gentile, chief executive of Jaspersoft.
You would think that means the funding will go toward creating a new product or enhancing its current one, right? Wrong. When asked what the plan was, Gentile replied, “We’d like to say we are doing nothing with it.” In essence, Jaspersoft is tabling the funds to be used as an “agile agent on the balance sheet”.
“The funding allows us to look beyond our roadmap and make tougher build versus buy decisions,” said Gentile. This means Jaspersoft is looking to acquire. Gentile elaborated that there are four key areas where Jaspersoft may buy rather than build products in-house. In priority order, they are: 1) Advanced Reporting: The ability to provide reports on a company’s operations and production data.
2) Advanced Visualization: Supplying visual representations of data, such as a sophisticated graph form.
3) Data Movement: Streamlining the transfer of data from its original source and formatting it to be consumed by Jaspersoft’s analytics engine.
4) Advanced Analytics: More intelligent analysis of data using a variety of techniques such as memory and predictive analysis.
Jaspersoft has received $58 million in funding to date from investors including Quest Software, Red Hat, SAP Ventures, Doll Capital Management, Morgenthaler Ventures, Partech International, Scale Venture Partners, and Adams Street Partners. The company is based in San Francisco and has 140 employees.
"
|
1,694 | 2,022 |
"Qualcomm led, latest 3GPP milestone will expand 5G | VentureBeat"
|
"https://venturebeat.com/technology/qualcomm-led-latest-3gpp-milestone-will-expand-5g"
|
Qualcomm led, latest 3GPP milestone will expand 5G
Since the first call via cellphone was placed in 1973, the mobile phone industry has been instrumental in improving connectivity worldwide. We’ve come a long way since then with the advent of 3G and 4G, which were both major catalysts in the shift away from traditional telecom services. Now, the latest iteration of cellular technology, 5G, is engineered to significantly increase the speed and responsiveness of wireless networks.
With 5G, data transferred over wireless broadband connections can travel at multigigabit speeds, with the potential to reach speeds as high as 20 gigabits per second. By the end of 2027, 5G subscriptions are estimated to reach more than 4.4 billion — demonstrating the popularity and demand for the technology.
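For a rough sense of what that peak rate means in practice, the back-of-the-envelope arithmetic below uses a hypothetical 10 GB download; note that 20 Gbps is a theoretical peak, and real-world 5G throughput is typically far lower.

```python
# Back-of-the-envelope download time at the quoted 20 Gbps theoretical peak.
peak_gbps = 20
file_size_gigabytes = 10                        # hypothetical large download
seconds = file_size_gigabytes * 8 / peak_gbps   # gigabytes -> gigabits, then divide by rate
print(seconds)                                  # 4.0 seconds at the theoretical peak
```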
Wireless technology giant Qualcomm has been making massive strides in the technological evolution of mobile.
The company’s senior vice president of engineering and global head of wireless research, John Smee, shared, “We’re driving where 5G is going and are excited to unveil another key 5G milestone — 3GPP Release-17.
This development completes the first phase of the tech evolution in the 5G decade and solidifies Release-17 as the foundation for expanding 5G into new devices, applications and deployments beyond phones across IoT, wearables, XR and more.” Qualcomm Technologies led the charge on efforts across several critical projects connected to Release-17, according to a press release from the company.
3GPP, which stands for ‘3rd Generation Partnership Project,’ unites several telecommunications standard development organizations originally joined together to create global protocols for 3G technology and has since set standards for enhanced iterations of tech. With upwards of 600 members, 3GPP typically meets four times a year at the end of each quarter to plan and develop new releases, with the goal of improving upon past releases and providing new standardized functionalities.
What is 5G NR Release-17? Qualcomm claims that Release-17 brings further enhancements to the foundational aspects of the 5G system, narrowing the digital divide and broadening 5G’s reach to new network topologies and use cases.
Strategic areas of improvement include spectrum expansion, device power savings, enhanced private networks and enhanced simple repeaters.
Qualcomm also says Release-17 delivers system optimizations for a wide range of devices and applications, such as reduced-capability devices, non-terrestrial networks, side link expansion, broadcast/multicast expansion and boundless extended reality, as summarized below.
Enhanced support of industrial internet of things The purpose of this study is to streamline support of Time Sensitive Communication (TSC), including enhancements for support of deterministic applications and enhancements to IEEE Time-Sensitive Networking (TSN). Other core issues addressed include Uplink Time Synchronization, UE-UE TSC communication, Exposure of QoS, Exposure of Time Synchronization services for activation/deactivation and support for PTP time sync and use of Survival Time for Deterministic Applications in 5GS.
Enhancement of support for edge computing in 5GC This work item involves defining solutions to improve the forwarding of some UE application traffic to the applications deployed in the Edge Computing Environment, which includes the dynamic insertion of traffic offloading capabilities, change of application server serving the UE and the capacity to provide local applications with information on items like expected QoS of the data path, which supports PSA change when the application doesn’t support notifications of UE IP address change.
Enablers for network automation for 5G; phase 2 From data collection from UE to slice SLA, this study addresses many of the shortcomings from Release-16. New functionality that will be supported in Release-17 includes Multiple NWDAF Instances in one PLMN including hierarchies, enabling real-time or near real-time NWDAF communication and NWDAF-Assisted UP Optimization.
Paving the way to 6G “While there are a number of wireless innovations in the pipeline, our vision for 6G is already starting to take shape,” Smee said. He added that the Release-17 will not only enable more effective enterprise deployment, but transform the way enterprises manage their supply chains and logistics, leading to a more productive and connected workforce.
"
|
1,695 | 2,022 |
"VFunction deploys AI to improve code and fix technical debt | VentureBeat"
|
"https://venturebeat.com/programming-development/vfunction-deploys-ai-to-improve-code-and-fix-technical-debt"
|
VFunction deploys AI to improve code and fix technical debt
Can technology fix technical debt? That is, can more code identify the problems within the code base? VFunction has a plan to do just that. Today, they’re announcing the VFunction Assessment Hub, a product that uses artificial intelligence (AI) and graph algorithms to measure the quality of code and guides developers on how to improve it.
The new product joins the VFunction Application Platform, a tool for rewriting the code of an application to bring it up to more modern standards. Companies with older legacy code may use the tool to slowly evolve the software through redesign and refactoring.
“What we found out is that many times organizations don’t really know how to prioritize the applications that they want to modernize, or how to quantify the technical debt, or even how to decide which application would get the most bang for the buck by fixing,” said Moti Rafalin, the CEO and a cofounder of the company.
The VFunction Assessment Hub evaluates a Java code base, generates numerical measures of code complexity and interdependence, and then converts them into a single number that captures the amount of “technical debt”.
Developers often use the phrase to describe improvements or fixes that they need to make to a largely functioning piece of software. Sometimes it accrues because developers haven’t added new features and other times it appears because new standards and protocols evolve and the code base hasn’t been modernized. It is, essentially, a to-do list of attention that must be paid to improving the software.
Microservice model for analyzing software packages A number of tools for analyzing software exist and they use several different criteria for evaluating a software package.
TeamCity from JetBrains, SonarCloud, SonarQube and Synopsys are just a few of the options. All apply a set of rules describing poor design patterns to be avoided and flag sections that match them.
Another set of tools offers dynamic code analysis by tracking behavior as the software is running. Tools like OverOps, Invicti and Acunetix watch software for bugs or security flaws that become apparent during execution.
The report generated by VFunction analyzes the software package with a mixture of graph algorithms that model the interconnections between software classes. It then converts this into a score that approximates the technical debt. The report will also identify the individual Java classes that may contribute the most to this score, so developers can focus first upon improving them.
The company’s philosophy is heavily focused on a microservice model, a modern approach that splits the code into a number of smaller, more independent modules that can be changed or recoded on their own. This model can help development teams work separately on each module without spending as much time coordinating their efforts.
“If you have a monolithic application, it simply doesn’t scale or it’s very costly to scale it,” explained Rafalin. “So you want to modernize it and break it into those microservices in order to benefit from what the cloud has to offer from elasticity, cost savings and so forth.” The Assessment Hub metric evaluates the depth and interconnections between the dependency chains linking together the different modules. It also examines the libraries used by the project, their history and their potential impact on code quality and security.
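To illustrate the kind of dependency-graph reasoning described here, the toy sketch below models a handful of hypothetical classes with networkx and folds cycles and coupling density into one rough score. The class names and weights are invented; VFunction's actual scoring is proprietary and far more sophisticated.

```python
# Toy dependency-graph "debt" score; invented classes and weights,
# not VFunction's actual algorithm.
import networkx as nx

deps = nx.DiGraph()
deps.add_edges_from([
    ("OrderService", "BillingService"),
    ("BillingService", "OrderService"),    # circular dependency
    ("OrderService", "InventoryService"),
    ("InventoryService", "Database"),
    ("BillingService", "Database"),
])

cycles = list(nx.simple_cycles(deps))      # tangles that resist extraction into microservices
coupling = nx.density(deps)                # how interconnected the classes are
debt_score = 10 * len(cycles) + 100 * coupling
print(len(cycles), round(debt_score, 1))   # 1 cycle, score ~51.7
```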
VFunction created the Assessment Hub after developing their Platform for modernization.
“You might think of it as upside down, but there’s actually logic here,” explained Rafalin. “We were able to do it only now because we had to have the experience of modernizing millions, maybe tens of millions of lines of code already with different applications with different customers.” Their platform can automate many of the chores for refactoring Java code by detangling the dependencies and cleaning up the interconnections. The process, though, requires guidance from the developer to ensure that it’s making decisions in line with the architect’s strategy.
Transforming technical debt into quality code Rafalin imagines that the tool will find usage in the management suite, where CIOs may need to look for a dispassionate assessment of the codebase so they can decide where to focus development teams. The metric may be an imperfect measure of code quality, but it is relatively independent.
“Given the complexity and interdependence of today’s multi-layered application estates, CIOs and technology leaders need to prioritize their modernization efforts before they start pulling on threads that can unravel further complexity,” explained Jason English, principal analyst, Intellyx , which honored VFunction in 2021 with its inaugural Digital Innovator Award.
“The VFunction Assessment Hub provides an entry point for evaluating the ROI of breaking down legacy monoliths and decoupling dependencies, so refactoring efforts can be better aligned with business goals.” The current tool works with Java code, but VFunction promises that support for .Net code bases will be following soon. They anticipate that this will target the two biggest collections of legacy code that make up many enterprise stacks. The company is also rolling out partnerships with Microsoft and Amazon to offer support for refactoring the code running in their clouds.
Pricing will begin with ten-application packs. Options will also include AWS Marketplace and partnerships with other integrators.
"
|
1,696 | 2,022 |
"Seclore accelerates focus to lead in enterprise data security | VentureBeat"
|
"https://venturebeat.com/enterprise/seclore-accelerates-focus-to-lead-in-enterprise-data-security"
|
Seclore accelerates focus to lead in enterprise data security
Seclore, a data-centric security platform (DCSP) for enterprises, today announced $27 million in series C funding. The company reported that the capital will be used to accelerate its quest to be the platform of choice for enterprise data protection initiatives. Its DCSP offering will enable enterprises to shift their security posture from infrastructure to data. This approach allows customers to focus on securing their crucial data assets, rather than chasing data to secure it as it moves from enterprise systems throughout the cloud and to third parties.
The announcement comes at a time when data breaches are increasingly common, as workplaces have rapidly shifted to remote work during the pandemic.
Cybersecurity experts attribute 20% of data breaches to remote work, with each incident costing organizations an average of $4 million.
Throughout the last few decades, security investment has been heavily focused on cyber infrastructure, which includes devices, networks and apps. True data-centric security has only recently come into its own, and this is where Seclore comes into play. The platform can be used to integrate data-centric security solutions with current enterprise systems to overcome the inherent limitations of disparate data protection point solutions.
Data security in the cloud Apart from data protection where it protects IP, customer data and employee data, Seclore reveals that its DCSP also solves data privacy issues. This is for enterprises trying to comply with global privacy standards such as GDPR and CCPA , as noncompliance can result in fines of ranging from $2,500 per infraction to $7,500 for purposeful violation or higher. Seclore’s DCSP also includes cloud data security , which enables enterprise customers to protect data as it travels through various public cloud systems without solely relying on the cloud service provider’s security posture.
A Forrester report reveals that 60% of security incidents are caused directly by third-party vulnerabilities. In addition, another poll conducted by Deloitte found that 87% of respondents had had a third-party incident interrupt their operations and 11% have had their vendor relationship completely fail.
Seclore says its third-party risk management solution in the DCSP enables enterprises to exchange confidential information with third parties like vendors, partners, customers, or even board members while making sure the security or privacy of the information is not compromised.
Standing out as a data security solution While Microsoft (MIP), Vera (now HelpSystems) and Fasoo are other competitive data security solutions, Seclore claims its ability to interact with current systems and automate tedious operations is what sets it apart. For decades, data-centric security solutions such as discovery, data loss prevention (DLP), classification and encryption have existed. Due to the necessity for human engagement, these technologies have proven to be inefficient and error-prone.
Seclore’s DCSP automates human tasks.
“We rapidly recognized Seclore as the global market leader in data-centric security after reviewing the team, technological stack and total addressable market,” said Ron Heinz, founder and managing partner at Oquirrh Ventures, the firm that led the funding round.
In addition to Oquirrh Ventures, Origami Capital also helped lead the round. Seclore says the new funding will be used to increase the company’s global staff as well as its customer base in North America.
"
|
1,697 | 2,022 |
"Lynk tackles knowledge management during an era of data explosion | VentureBeat"
|
"https://venturebeat.com/enterprise/lynk-tackles-knowledge-management-during-an-era-of-data-explosion"
|
Lynk tackles knowledge management during an era of data explosion
There’s undoubtedly more information available now than at any other point in human history. In 2015, it was reported that more data had been created in the prior two years than in the preceding two decades. Now it’s 2022 and data continues to explode across industries.
And while that should logically mean that anyone can access whatever information they’re looking for at any point with just a few scrolls of the mouse, the opposite tends to be true.
Knowledge and insights are often buried in a deluge of increasingly amassed data and information.
This has led to the massive growth of knowledge management. The process involves organizing, creating, using and sharing collective information within an organization to parse data, cull insights and guide decision-making.
“The more we need insights, the more we need something that we trust that guides us in the next step,” said Peggy Choi, founder and CEO of AI-driven knowledge management platform Lynk.
“We need that next layer – credible insights.” Knowledge management’s rampant growth Knowledge management is the fastest-growing area of AI spend globally, according to Gartner.
The market is predicted to grow by more than 30% in 2022, overtaking virtual assistants as the top AI use case, according to the firm. Furthermore, Reportlinker indicates it will continue to snowball and gain significant traction: the market will reach an estimated $1.1 trillion by 2026, up from $381.5 billion in 2020.
The “upheaval” of traditional workplaces as the result of the pandemic demands more sophisticated knowledge management. Hybrid or fully remote models are likely to continue, geographically fragmenting people. This adds to the hurdle that many organizations face when it comes to gathering people’s knowledge and insights — both internally among employees and externally among partners and customers.
“Knowledge is evolving,” said Choi. “How we solve it has to evolve with time.” Lynk is offering its answer to the “age-old unresolved problem” with Lynk Circle.
The company today announced the availability of the software-as-a-service (SaaS) tool that helps organizations identify the right people with the proper knowledge in their networks.
As Choi noted, it’s not just measuring knowledge in “that title, that company.” “Being able to know who knows what helps you discover the right opportunity,” she said.
The platform’s underlying AI collects and maps all known data about a given worker, partner or customer. A data feedback loop helps to create graphs to help search and perform other functions.
“It’s mapping out what the person knows,” Choi said. Ultimately, “this helps people do better work.” Organizing strong knowledge networks not only captures important institutional information, she noted; it promotes knowledge sharing and unlocks knowledge to increase productivity.
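A minimal sketch of the "who knows what" idea follows: index people by topic and answer simple expertise lookups. The names and topics are invented, and Lynk's production platform layers AI-driven mapping, knowledge graphs and ranking on top of something far richer than this.

```python
# Minimal "who knows what" index; invented names, not Lynk's implementation.
from collections import defaultdict

expertise_index = defaultdict(set)

def record(person, topics):
    """Map each topic a person knows back to that person."""
    for topic in topics:
        expertise_index[topic.lower()].add(person)

def who_knows(topic):
    """Return everyone indexed under a topic, alphabetically."""
    return sorted(expertise_index.get(topic.lower(), set()))

record("Aisha", ["regtech", "compliance"])
record("Ben", ["payments", "compliance"])
print(who_knows("compliance"))   # ['Aisha', 'Ben']
print(who_knows("payments"))     # ['Ben']
```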
“We’re creating a platform of knowledge networks for organizations to work more efficiently, easily find out ‘who knows what,’ get smarter and focus on what is important,” she said.
The Lynk Circle white-label SaaS solution is highly configurable and secure, hosting a private ISO 27001-certified network. It features an intuitive interface and flexible interaction with advanced search and formats including Q&As, 1:1 calls, project work, articles and documents. It also has add-on capabilities around compliance checks and payment processing.
Choi explained that Lynk Circle has been used by private equity and venture capital firms, membership organizations and industry associations to establish advisor networks, promote engagement, power knowledge hubs and rapidly onboard new employees and ramp up their work. Choi pointed to one use case with the Australia Chamber of Commerce, which leveraged the platform to bring a mentoring program online when it could no longer offer it in person amidst the pandemic.
The RegTech Association, meanwhile, used the platform for its RegTech Circle, which connects financial institutions, experts and vendors, “allowing all parties to engage in the sharing of intelligence on regulatory technologies and their applications to facilitate commercial discussions and opportunities,” explained board director Alex Oxford. “With Lynk Circle, we can exchange knowledge, network and connect, driving collaboration across the industry.” AI-powered knowledge management at scale Since its founding in 2015, Lynk has used its platform to scale a network of more than 840,000 experts across 80 countries. The company has facilitated interactions with more than 300 enterprise customers, including PwC, UBS Group, The Mass Transit Railway in Hong Kong, cosmetics company Shiseido, beverage company Pernod Ricard, as well as several industry organizations, Choi said.
She explained that the company is driven by what is known as the DIKW pyramid.
This hierarchy represents the relationships between data, information, knowledge and wisdom, with each creating a building block that establishes a step toward the next higher level.
“I’ve always thought that there’s something here about this human insights part,” Choi said. “The platform’s applicability is proving that thesis out. Everyone needs it.”
"
|
1,698 | 2,022 |
"What you need to know about managing the modern supply chain | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/what-you-need-to-know-about-managing-the-modern-supply-chain"
|
Community What you need to know about managing the modern supply chain
You can’t plan for what you can’t predict. A huge container ship blocks the Suez Canal, disrupting global trade for several days. A global pandemic wreaks havoc on global supply chains, causing shortages and pushing prices sky-high.
Now, Russia’s invasion of Ukraine has injected chaos into an already fragile supply chain ecosystem. The availability of everything from oil and natural gas to wheat is now in doubt as car manufacturers stop production in Russian factories. Automakers, in fact, are facing their third supply chain crisis in as many years.
Old-fashioned supply chain planning involved charting the journey of a material or product from the raw material stage to the consumer. It also encompassed supply planning, demand planning, production planning, operations, inventory optimization, routing, transportation, logistics, warehouses and more.
But what happens when there’s a weak link — or a complete break — in the chain? Something as minor as a truck breaking down or as major as a global pandemic introduces uncertainty into our planning algorithms. These types of supply chain disruptions are inevitable. While we can’t control for all the variables or predict the unpredictable, we can be better prepared to respond.
Supply chain snags and the bullwhip effect Let’s take a closer look at how the delicate balance of a supply chain’s complex system can be disrupted at one point, causing chaos further down the line. Today’s mass production environment favors just-in-time manufacturing — an environment that encourages receiving goods only as needed for production. This, ideally, reduces inventory costs and waste.
Meanwhile, factories are tuned to work at full capacity. Considering how expensive it is to build and operate a factory (employees, robots, electricity), the last thing you want to do is stop the production line. In a stable world, with no surprise variables, you can calculate the inventory you need to keep the factory running smoothly. Factor in a few extra “safety stock” inventory units (just in case) and 99% of the time, all is well.
What happens when there’s a supplier snag upstream? Or a consumer change of heart downstream? Exercise bike manufacturer Peloton faced this exact scenario recently. After an uptick in consumer bike purchases at the beginning of the pandemic, consumer demand began to cool over time. The result: thousands of cycles and treadmills sitting jam-packed in warehouses or on cargo ships. Peloton temporarily halted production of its connected fitness products earlier this year, while the company laid off staff and overhauled its management team.
Peloton fell victim to the “bullwhip effect,” a supply chain situation that results from small fluctuations in demand at the retail level causing progressively larger fluctuations in demand at the wholesale, distributor and supplier levels. The phenomenon is named after the physics involved in cracking a whip: a slight flick of the wrist results in increasingly larger motions toward the end of the whip.
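To make the amplification concrete, here is a minimal, purely illustrative simulation (not from the original article) of a four-tier chain in which each tier re-orders the demand it just observed plus a 20% safety buffer; the tier names, buffer and demand figures are invented assumptions.

```python
import random

def simulate_bullwhip(weeks=20, tiers=("retailer", "wholesaler", "distributor", "supplier")):
    """Toy model: each tier treats the orders it receives as its demand signal
    and re-orders that amount plus a 20% safety buffer, so small retail wobbles
    grow as they travel upstream."""
    random.seed(7)
    history = {tier: [] for tier in tiers}
    for _ in range(weeks):
        demand = 100 + random.randint(-5, 5)  # consumer demand wobbles around 100 units
        for tier in tiers:
            order = round(demand * 1.2)       # observed demand plus safety buffer
            history[tier].append(order)
            demand = order                    # this tier's order is the next tier's demand
    return history

if __name__ == "__main__":
    for tier, orders in simulate_bullwhip().items():
        print(f"{tier:12s} min={min(orders):4d} max={max(orders):4d} swing={max(orders) - min(orders)}")
```

Even with consumer demand wobbling by only a few units, the swing in orders grows at every tier further from the shopper, which is the whip-crack described above.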
A small change upstream, a major change downstream The bullwhip effect describes what happens when consumer demand patterns change, but what happens when there are disruptions upstream? People tend to overreact to even a small fluctuation at one end of the supply chain, especially when it is triggered by a major event (such as Russia’s invasion of Ukraine). You may have 1,000 different parts feeding into your factory. What happens when a part you need is out of stock from the supplier? All the steps you thought you could carefully manage with a small inventory suddenly seize up.
Consider the global chip shortage. In the early stages of the pandemic, early signs of changing demand patterns led to stockpiling and advance ordering of chips by some, which left other companies struggling to obtain needed components. Auto manufacturers cut their orders for semiconductor chips as they predicted demand for new cars would take a nosedive. Those chips were then snatched up by other industries for phones, computers and video games. Meanwhile, auto production around the world stalled when a single missing part held up assembly of an entire vehicle.
It’s not about planning, it’s about responding: Graphing “what if” scenarios Many claim that poor forecasting and ineffective planning are to blame for these supply chain disruptions. But the problem isn’t a failure to plan, it’s a failure to respond effectively. How can you forecast numbers for six months from now when you have no idea what will be happening six months from now? You don’t build forecasting engines for the next pandemic. Instead, you should set a baseline number of units for inventory and focus on looking for demand signals in the market — and responding to those signals in a sensible way. Adapting to change is key. Every CEO should ask, “What is our ability to adapt to unforeseeable big changes?” The key to adapting is having data systems in place that show your options and quantify the implications of any given option. And you need to do this as quickly as possible: once a signal moves downstream, it becomes harder to recalibrate throughout the supply chain (and the result may be overstocked items or an inventory shortage).
Traditional supply chain software is linear, passive and limited by relational databases. Relational databases, which store customer, order and product data in separate tables, were designed for stable record-keeping rather than for dynamic, relationship-heavy, data-intensive use cases.
A graph database, however, can model disparate relationships and dependencies in a way that closely mirrors the real world. Graph, which tracks every individual part from supplier through the manufacturer to the finished product, can load massive amounts of data and uncover real-time relationship patterns. The graph provides a “what-if” engine, allowing companies to create a digital representation of a complex system (such as an automotive supply chain). The graph represents a “digital twin” of your real-life supply chain, allowing you to evaluate alternative plans in response to global changes in supply and demand.
Graph algorithms, such as shortest path and geographical proximity, can help you manage and mitigate complex dependencies — in real time. Because many internal and external factors (involving parts, people and things) cannot be forecast, businesses must be ready to respond.
What is the end-to-end impact of a change in supply? If a part is unavailable, what product can you build now with what you do have? Graph empowers you to take an active role in managing your response to demand changes, meaning you move away from a passive view of risk. If demand for a particular car model is suddenly dropping in the U.S. market, what parts will we now have in surplus? How can we best use these parts? What other options do I have? Graph analytics helps you answer the difficult “what-if” questions — and it even helps you ask and answer questions you had never even imagined.
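As a hedged illustration of this kind of query (a generic sketch in Python with the networkx library, not TigerGraph syntax, and with an invented bill-of-materials), a digital-twin graph can answer "what breaks if this part disappears" and "what can we still build" in a few lines:

```python
import networkx as nx

# Invented bill-of-materials: suppliers -> parts -> finished products.
g = nx.DiGraph()
g.add_edges_from([
    ("supplier_A", "chip_x"), ("supplier_B", "chip_x"),
    ("supplier_B", "wiring_harness"), ("supplier_C", "battery_pack"),
    ("chip_x", "model_sedan"), ("wiring_harness", "model_sedan"),
    ("chip_x", "model_ev"), ("battery_pack", "model_ev"),
    ("wiring_harness", "model_truck"), ("battery_pack", "model_truck"),
])

def products(graph):
    """Finished goods are the sink nodes of the bill-of-materials graph."""
    return [n for n in graph.nodes if graph.out_degree(n) == 0]

def impacted_products(graph, part):
    """Which finished products depend, directly or indirectly, on this part?"""
    downstream = nx.descendants(graph, part)
    return sorted(p for p in products(graph) if p in downstream)

def still_buildable(graph, missing_parts):
    """Which products need none of the missing parts anywhere upstream?"""
    blocked = set()
    for part in missing_parts:
        blocked |= nx.descendants(graph, part)
    return sorted(p for p in products(graph) if p not in blocked)

print(impacted_products(g, "chip_x"))   # -> ['model_ev', 'model_sedan']
print(still_buildable(g, ["chip_x"]))   # -> ['model_truck']
```

A production system would run the same style of traversal inside the graph database itself, but the shape of the question is identical.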
Since we can’t predict the unpredictable, our next-best option is to be ready to act at any given moment. If you have a real-time, what-if mastery of the data, relationships and dependencies within your supply chain, you’ll be ready for any snag, shortage, or surplus — minus the sting of the bullwhip.
Harry Powell is the head of industry solutions at TigerGraph.
"
|
1,699 | 2,022 |
"What it will take to implement Web3 | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/what-it-will-take-to-implement-web3"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community What it will take to implement Web3 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Lately, there’s been no shortage of talk about the transition to Web3, a new digital frontier powered by blockchain and accessible via decentralized applications (dapps). But while many of the products created thus far are groundbreaking — offering verifiable digital ownership and access to new financial instruments — they haven’t yet managed to galvanize mainstream adoption.
To reach critical mass, the blockchain industry needs to ensure that its platforms and services are as easy to use as their current-gen counterparts.
We aren’t there yet The current landscape of the internet is still very much grounded in Web2 architecture. While users can access a range of services, each requires its own unique username and password, and third-party platforms are typically still needed to process payments. While this model has ostensibly worked well enough for the past two decades, it has been marred by the centralized control of big tech companies, which thrive on selling user data.
In Web3 , services will all be interconnected and interoperable. Users will be able to transfer assets and value across virtually any platform. Often, they’ll even own these platforms and, importantly, their data.
Some of the groundwork for this future has already been laid down with the rise of smart contract platforms and, by extension, dapps. But issues remain.
Take Ethereum, for example; despite its popularity, the network is struggling with some severe limitations. For one, current levels of throughput, around 15 transactions per second, simply aren’t enough to support the type of traffic that worldwide adoption would bring about.
Then there’s the fact that fees have become so prohibitively high that many users can’t afford to transact on the chain, giving rise to elaborate scaling solutions that quite literally add another layer of complexity to the user experience. This puts up a real barrier to entry for a vast majority of the population, particularly those not well-versed in blockchain tech, undermining the notion of “banking the unbanked.” Scaling the network isn’t a simple matter either, as evidenced by the fact that Ethereum has been attempting to scale for several years and issues still persist.
There are still more hurdles that slow overall adoption as well. Too many entry points are confusing to newcomers and the means of adequately securing assets is similarly complex for most. Combined with the growing concern that cryptocurrency has a negative environmental impact , it’s not hard to understand why so many average users shy away.
What it will take To overcome these myriad barriers to entry, these services need to implement improvements that make the space far more attractive to outsiders. For starters, blockchains need to be able to handle far more transactions than they can right now. It isn’t enough to match legacy payment processors — they need to surpass them. The same goes for delivering web content. The average person doesn’t want to wait longer for their data, even if it is more secure. Future blockchain services need to match or surpass current web speeds if they want to attract casual users who don’t care about blockchain per se.
On that same note, onboarding and user experiences should be seamless and require no understanding of the underlying technology or even knowledge of its existence. We can’t expect the average person to learn all the nuances of blockchain interfaces to unlock the potential here.
Web3 has to be as straightforward as Web2 and require as few clicks as possible.
This extends to infrastructure, too — most notably, consumer-friendly wallets. Users need an uncomplicated and secure means of storing their assets and accessing dapps, one that doesn’t require advanced skills to operate, long-winded recovery seed phrases to memorize, or private keys to secure.
Lastly, the public needs to be reassured that this technology isn’t killing the planet. Nobody will want to reap the benefits of blockchain if they feel they’re stripping seconds from the Doomsday Clock in doing so. Future services will need to be based upon proof-of-stake (PoS) and go beyond that to ensure they are entirely carbon-negative.
Possibilities Everything we’ve outlined is essential if Web3 is going to attract average users. Once in place, the opportunities for Web3 are almost limitless.
For example, any physical item could be tokenized as a nonfungible token (NFT) and traded in the same fashion. This would revolutionize how peer-to-peer commerce works, offering a new paradigm in secondary sales and giving rise to a new, virtual commerce market. A system this powerful and accessible could even be leveraged by big businesses for professional record keeping and documentation. Moreover, dapps should be accessible in one click from a secure and universal access point that appeals to both hardcore and casual users.
By making the entire process relatively simple and intuitive, the average user will almost certainly get on board in the same way that people became accustomed to eBay and Amazon. Furthermore, because these new systems will be built on highly efficient blockchains, the entire community can move past the stigma that digital assets are a pox on the environment. With practical and ethical barriers removed, there is a clear path forward for such systems to eventually breed a fully global level of adoption.
Web3 — taking shape Web3 is starting to take shape, but it isn’t all there yet. What’s missing is a degree of accessibility that currently only exists with Web2 services. This is hardly insurmountable, but work will need to be done to make accessibility and user experience for these platforms as simple and intuitive as possible. Once accomplished, it should open the floodgates to a whole new level of adoption as more and more newcomers begin to appreciate the benefits that this space has to offer. Anything less, and the coming Web3 will likely remain stuck in a conceptual development phase forever.
David Kim is the head of publishing at WAX Studios.
"
|
1,700 | 2,022 |
"The hard truths about Web3: What no one else is talking about | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/the-hard-truths-about-web3-what-no-one-else-is-talking-about"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community The hard truths about Web3: What no one else is talking about Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The entirety of Web3 rests on the fundamental belief that decentralization will provide freedom and equality for all. Yet so far, Web3 has in fact spurred one of the greatest consolidations of power in recent times.
What is Web3? It’s an all-encompassing term to describe a blockchain-powered internet, where platforms and apps can be built and owned by users. The underlying principle — it’s a crusade to transfer power back to the internet community.
In contrast, Web2 is dominated by a few large players, such as Google, Apple and Facebook. These centralized companies often retain billions in profits for simply being the middleman.
I think you’ll agree, it’s easy to buy into the “down with big tech” narrative. These companies have created trillions in wealth, largely for doing things that can now be automated by code in the form of smart contracts. I mean, who wouldn’t want to cut out the middleman in our everyday lives? Doing so could effectively increase speed, increase security and decrease costs.
But the question remains, is it too good to be true? The reality of Web3 Although blockchain was supposed to mark the advent of the decentralization of power and wealth, we see it has done the exact opposite. The top 9% of accounts hold 80% of the $41B market value of NFTs on the Ethereum blockchain. To put this into perspective, the richest 10% in the U.S. own nearly 70% of the country’s total wealth, according to Statista.
And that wealth gap is seen as a major topic of political contention.
Now imagine Bitcoin is a country: it would maintain the greatest wealth inequality of any nation on the planet.
And I fear this centralization is only increasing by the day. You have corporations such as MicroStrategy and Tesla amassing billions of dollars’ worth of crypto at a time. In addition, as long as Bitcoin remains a proof-of-work crypto, the current miners will only continue to reinvest profits, purchase more mining rigs (which are horrible for the environment) and increase their output.
Another problematic trend is that new crypto projects are substantially increasing their insider ownership at inception. Notable cryptos such as BNB, AVAX and SOL all maintained over 40% internal ownership at launch, making them inherently less decentralized.
Lastly, we can’t talk about Web3 and not mention the obscene amounts of money VCs are throwing at startups. Notable funds such as Pantera and a16z are quickly establishing a monopoly within the space. They have a foothold in a majority of high-profile Web3 companies, receiving dual-class shares that grant 20x the voting power of regular ones. This means they reserve the right to vastly influence any major decisions these companies make down the line.
The feasibility In its current state, Web3 is unfortunately predicated on the existence of a middleman. A majority of decentralized applications, or “dapps,” rely on centralized infrastructure and services. Companies such as Infura and Quicknode provide essential node-as-a-service infrastructure. In addition, platforms such as Alchemy and Moralis help build and scale dapps 10x faster.
Developers often won’t, or can’t, run their own servers. Doing so is extremely time-consuming and capital intensive. They also don’t want to rewrite every single line of code, essentially having to reinvent the wheel every time they build a new dapp. This inherent need for centralized products and services will not be going away any time in the foreseeable future.
All this being said, there is undoubtedly game-changing technology in Web3. DAOs have the potential to revolutionize community and corporate governance. NFT technology offers an extremely diverse set of possibilities including revenue sharing, GameFi, collateralization, etc. DeFi empowers everyday investors to access new asset classes, reduce their fees, improve their APY and overall take more control over building their financial future.
Closing thoughts We are at a stage where everyone is overestimating what can be done in one year and underestimating what can be done in 10. Don’t simply buy into the hype generated by macroeconomic cycles. Instead, educate yourself on the long-term sustainable use cases of blockchain technology.
Personally, as a Web3 startup investor, I am looking to back the most innovative Web3 companies obsessed with utility and impact, while still following the principles of decentralization.
Arnav Pagidyala is a Web3 startup investor and blockchain enthusiast.
"
|
1,701 | 2,022 |
"The future of on-prem and the cloud | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/the-future-of-on-prem-and-the-cloud"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community The future of on-prem and the cloud Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Any organization that shifts from an on-premises IT infrastructure to public cloud infrastructure-as-a-service (IaaS) will spend some time operating within a hybrid model. There is no magic switch you can flip to instantly migrate everything from your data centers to the cloud. But how long can (or should) they coexist? If you’re planning on using a hybrid infrastructure for an extended period of time, my advice is this: don’t. While it’s impossible to avoid a hybrid setup during a transition period, most organizations are best served by committing to cloud IaaS completely (or as much as possible) and following a plan that can get you there incrementally over the course of two to three years.
Why businesses are moving to the cloud One of the main drivers for undertaking a cloud IaaS migration is the existing talent pool, for a couple of reasons. First, as legacy on-premises hardware and networks continue to age, the pool of people with the expertise to properly maintain those devices and systems shrinks. It’s not uncommon for these professionals to retire or change careers, and there certainly isn’t much younger talent with experience working with older IBM or Sun Microsystems hardware, for example. Longstanding knowledge within an organization is very valuable, making it both expensive to replace and costly to lose.
Similarly, with “younger” talent coming out of schools with more of a cloud focus, cloud IaaS is where an organization wants to be if it wants to attract and retain newer employees. The goal is to develop, grow, and (hopefully) retain talent, and that is becoming harder and harder to do if a company only offers on-premises infrastructure and related tools.
There are varying skill sets in play when looking at on-premises versus cloud infrastructure. For example, the toolsets used to manage and support on-premises hardware and network devices are typically different from those used in a cloud environment. This includes differences in monitoring, performance management, and implementation support. And I’m not talking just about differences in terminology around how these tools work; cloud IaaS management and security tools are typically quite different in function and use compared to on-premises tools.
Top 3 drawbacks of a long-term hybrid model Operating under a hybrid setup forever is possible in theory, but unless you have an unlimited budget, doing so over an extended period doesn’t make business sense. Here’s why: It requires additional administrative support.
Under a hybrid model, you need professional systems administrators supporting on-premises and in the cloud. These teams handle things like patches, monitoring, failover, backup, and restores. This is more than just extra work; it involves extra knowledge – and probably extra sets of tools.
Hard costs reach a tipping point.
At some point, the physical footprint supporting your on-premises architecture — which was probably built years ago when it made economic sense — stops delivering the needed ROI. Imagine a couple still living in a big house after the kids move away. The house may be nice, but it’s not very efficient. You’re essentially paying for space you don’t need or use. Your needs have changed, but you still pay for the entire house. Eventually, overhead costs are spread out over a smaller base, so unit costs go up.
Different policies.
If an organization has both a cloud team and a non-cloud team, it has essentially told its people they’re either on the varsity or junior varsity team. I have spoken with too many clients who have unknowingly created that issue, which can lead to resentment on the team. If the company says, “we are cloud IaaS first,” expect everyone to want to be on the cloud team. If the organization is saying the future is in the cloud, but it wants IT staff to keep managing on-premises systems, what does that staff do in three years when their peers are working in the public cloud? How do they get retooled and retrained? Organizations need to be aware of the problems this type of situation could cause over time.
Like most things, it comes down to people Businesses should be thinking about what it takes to support their on-premises and cloud IaaS based on the talent that’s available. If you reach a certain level based on size and scale and need 24/7 coverage — essentially all businesses in this era require that level of support — how many engineers do you need to cover all your various systems for 365 days a year? There are certainly organizations that have recruited a team of smart people who stuck around over time, but this team eventually realizes it’s hard to also have a life outside of work. It’s not just the sheer hours but also the constant stress of waiting for that phone call. Planning activities is always an issue because you know you’re accountable; if something does happen, it could cost you a weekend. This realization ultimately affects innovation — you can’t expect people to be on call each week and then also implement that next software/hardware solution that drives the company forward.
In the end, a hybrid model is inevitable during a transition to cloud IaaS, so the idea is to make that transition as efficient and cost-effective as possible. With that in mind, here are three steps to get the ball rolling: Bring all stakeholders into the discussion.
Technical leaders should team up with CFOs and other business leaders to map out and explain why each step of a cloud migration makes business sense.
Perform a full TCO analysis.
Analyzing the costs involved in a cloud migration requires much more diligence than only using the online calculators provided by the major cloud providers; a simplified starting point is sketched after this list.
Build a three-year roadmap.
Create a plan to migrate to the cloud incrementally based on business priorities, and make sure that plan continues to move forward.
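To show what "more diligence than an online calculator" can mean in practice, here is a deliberately simplified, hypothetical sketch of a multi-year comparison; every figure is a placeholder, and a real TCO analysis would also cover migration labor, licensing, data egress, retraining and decommissioning costs.

```python
def three_year_tco(on_prem_annual, cloud_monthly, one_time_migration, years=3):
    """Compare cumulative spend for staying on-prem vs. migrating to cloud IaaS.
    Inputs are illustrative placeholders, not benchmarks."""
    on_prem_total = on_prem_annual * years
    cloud_total = one_time_migration + cloud_monthly * 12 * years
    return {
        "on_prem": on_prem_total,
        "cloud": cloud_total,
        "savings_from_migrating": on_prem_total - cloud_total,
    }

# Hypothetical numbers: $900k/year to run the data center, a $55k/month cloud bill
# and a $350k one-time migration effort.
print(three_year_tco(on_prem_annual=900_000, cloud_monthly=55_000, one_time_migration=350_000))
```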
Most would agree that migrating customer-facing and internal-facing systems is top of mind; that’s where you get the “big bang” and where ROI is generally made. Where it’s not made is in file systems or network devices. That said, if you run out of money or time before you complete your migration of these aging systems, you’ll be stuck with a sub-optimal solution and the business could suffer.
Ultimately, there is no cookie-cutter solution that works for every business, and these are just some of the issues you need to consider. How you get to the public cloud may look different from a peer or competitor, but the fact remains that limiting the amount of time you spend operating under a hybrid model will almost always give you the best chance at success.
Michael Bathon is Vice President & Executive Advisor, IT at Rimini Street.
"
|
1,702 | 2,022 |
"Sharper data collection is key to better insurance CX | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/sharper-data-collection-is-key-to-better-insurance-cx"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Sharper data collection is key to better insurance CX Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Digital transformation has been one of insurance companies’ key priorities recently. The promises big data and analytics hold for quote generation, risk analysis and underwriting efficiency are immense. Yet, many insurers struggle to leverage the wealth of customer data they can access.
This issue isn’t as much about infrastructure needs as it is about data collection processes. Customers these days are digital natives, as is evident in a study conducted by EY.
Two out of three customers prefer digital interactions and about 80% of respondents said they bought insurance online.
Social media, wearables, telematics, agent interactions and smart homes are just some data sources insurers can mine. Yet, manual processes and outdated workflows prevent insurance companies from designing memorable customer experiences (CXs).
By reworking their data collection methods in the following ways, insurance companies may be able to transform their businesses.
Faster customer onboarding for insurance The typical insurance customer onboarding process is tedious. The customer fills out multiple forms, requests a quote, provides a ton of medical paperwork, responds to insurance agent requests for more paperwork and finally signs forms manually before mailing them to the company.
This process can take anywhere between a few weeks and one month to complete. Comparing prices is impossible with such processes, since insurers struggle to offer comparisons without the requisite paperwork in place. Consumer demand for aggregators indicates the hunger for easy online comparison and insurers are failing to fill this void.
A digital onboarding experience eliminates these hassles and seamlessly connects agent needs to consumer data. For instance, an online form can capture relevant consumer information, centralize data storage and automatically screen applicants for further information requests.
Companies can identify risk thresholds based on customer inputs and request documents or issue quotes within a few days’ time. Underwriters can make quick decisions since all customer data is centralized. Thus, not only is the customer experience seamless, backend processes benefit too.
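As a purely illustrative sketch of the automated screening described above (the fields, weights and thresholds are invented, not an actual underwriting model), the routing logic can be as simple as:

```python
def screen_applicant(applicant):
    """Route an application using simple, invented risk thresholds."""
    risk = 0
    risk += 2 if applicant.get("age", 0) > 65 else 0
    risk += 3 if applicant.get("smoker") else 0
    risk += 2 if applicant.get("preexisting_conditions", 0) >= 2 else 0

    if risk <= 2:
        return {"decision": "auto_quote", "risk_score": risk}
    if risk <= 5:
        return {"decision": "request_documents", "risk_score": risk,
                "documents": ["medical_history", "physician_statement"]}
    return {"decision": "refer_to_underwriter", "risk_score": risk}

print(screen_applicant({"age": 42, "smoker": False, "preexisting_conditions": 1}))
# -> {'decision': 'auto_quote', 'risk_score': 0}
```

Because the inputs arrive digitally and land in one place, the same record can feed both the quote engine and the underwriter’s queue without re-keying.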
Quote timelines are even longer in business-to-business (B2B) insurance cases, taking three to six months. Digitally upgrading data collection processes can help reduce onboarding time from months to weeks, boosting profits and creating a scalable digital process.
Offering agile plans Customers are also short on patience these days. With the wealth of options available to them, consumers are quick to notice less-than-ideal service from their insurers. From the insurer’s perspective, the easiest way to guarantee steady underwriting revenue is to boost loyalty.
Most insurers resort to price discounts as a means of building loyalty. However, all this does is force customers to view insurance as a commodity. Insurers often struggle to communicate the value underlying their policies due to a lack of data surrounding customer value drivers.
Data offers companies the potential to create highly customized and agile products. For instance, a healthcare insurer can leverage wearables data to segment their customers and assign risk parameters. These datasets can transform granular processes such as entering information in forms. Insurers can pre-fill data and collect only what is necessary.
Moving away from manual data collection is the key. Nurnberger insurance faced a challenge in this regard. Their customers demanded flexible products, but the company lacked the insights to offer value-driven and profitable plans.
By centralizing customer data and collecting relevant information, companies can boost brand loyalty through highly agile policies. Customers can pause, adjust, or cancel insurance plans to suit their needs. The company has used data to shift power to their customers, allowing them to input their needs, reducing agent workloads and automating tedious underwriting risk analysis.
Thus, the benefits are two-fold. Not only are customers more loyal, but Nurnberger’s operational costs have decreased, boosting their margins and potentially helping them achieve free float.
Reduce consumer healthcare costs The insurance marketplace has evolved with changing customer attitudes. Consumers these days demand cost-effective healthcare and expect insurers and healthcare providers to leverage technology to achieve this goal.
Healthcare provider Atrius Health needed a way to monitor their diabetes patients. Their objectives were to encourage greater self-care, manage health between visits and improve patient satisfaction scores. Given their vast customer base, manually conducting in-person screenings to adopt a proactive monitoring stance was impossible.
Glooko’s remote health monitoring services, powered by wearable technology, help Atrius monitor vast quantities of patient data such as blood glucose levels, exercise activity and carbohydrate levels on a central platform. Glooko also removes data silos and presents care providers with a full picture of patient health.
Crucially, these datasets are shared automatically with patient consent. Following the program’s implementation, 80% of patients indicated they found sharing data extremely easy. In addition, patients were proactive in monitoring their health, doubling their blood test frequency.
Better data collection for better insurance customer experience Customer satisfaction is intricately linked to data collection. The more seamless data collection is, the better the customer experience is. Customers feel less intimidated sharing data and do less work. In turn, care providers can adopt proactive health management solutions, lowering healthcare costs for patients and operational costs for their businesses.
Tal Daskal is the CEO and cofounder of EasySend.
"
|
1,703 | 2,022 |
"Securing the data ecosystem | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/securing-the-data-ecosystem"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Securing the data ecosystem Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Organizations are often pressured to choose what to prioritize in their data management.
Just as often, they make rash decisions in order to comply with industry and regulatory standards. Some prioritize cybersecurity threats and data leaks over cost efficiency. Many do not know that good data management can protect against cyberattacks while also being cost-effective. The growth of big data in the digital world is extraordinary. Developing an efficient data management strategy is key to securing your organization’s data ecosystem.
Data management may seem daunting because of the sheer amount of data being churned out daily. It’s no easy task, but intelligent management technologies and data management best practices can help your organization avoid the consequences of poor data management, leading to high costs and vulnerability to cyberattacks.
Understanding ROT data ROT data comes in three forms: redundant, outdated and trivial. The more ROT data a company has, the higher the risk of falling victim to a cyberattack.
Companies can mitigate risk and cut down costs by understanding their ROT data.
Redundancies often occur because we are conditioned to save multiple copies of the same dataset, having been told it’s best to hold onto everything. Outdated data is precisely that: data that is out of date and no longer relevant. Trivial data no longer serves a purpose yet takes up space on servers and slows down processing. Organizations can tackle this problem by knowing what data should be kept, deleted or sorted.
Another challenge that comes with ROT data is storage – conditioned to hold onto data, companies end up keeping it all in the cloud. While the cloud might seem like a cost-effective solution, since the cost per gigabyte is very small, ROT data quickly drives up monthly storage fees. In response, companies buy disk after disk to support this growth, but then in five years they run out of space and have accumulated an abundance of disks.
The path to securing your data First, organizations need to identify what data needs to be retained, whether it is essential to conduct business or to meet the company’s compliance regimes – for instance, financial data for a SOX audit needs to be held onto for seven years. In contrast, GDPR statutes in Europe dictate that user data should be eliminated as soon as it is no longer needed.
You can save your organization copious amounts of money through data archiving methods that manage unstructured data intelligently and cost-effectively. The door also opens for data to be used as a critical corporate asset that can easily be mined for benefit. For some businesses, master records or archives of certain sensitive data should be retained as long as they cannot be copied or viewed through unauthorized actions.
Intense security event monitoring and authorization is also good practice. This helps identify access controls that are incredibly useful for securing enterprise data and gives data stakeholders a heads up on incoming threats, latent or introduced data vulnerabilities, and potential privacy or compliance issues.
Organizations can rapidly identify and resolve issues by setting automated data search policies that keep up with current compliance protocols. Practical examples of such issues include medical data being shared without signed forms or account information not being deleted when it is no longer needed.
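A minimal sketch of such an automated policy check follows, assuming invented record metadata and retention windows loosely modeled on the SOX and GDPR examples above (not any particular product’s API):

```python
from datetime import date, timedelta

# Invented retention windows, in days, per data class.
RETENTION_DAYS = {
    "sox_financial": 7 * 365,   # keep roughly seven years for audit
    "gdpr_user_data": 2 * 365,  # placeholder window for "no longer needed"
    "trivial": 90,              # ROT candidates
}

def flag_records(records, today=None):
    """Return IDs of records that have outlived their retention window and
    should be reviewed for archiving or deletion."""
    today = today or date.today()
    flagged = []
    for rec in records:
        limit = RETENTION_DAYS.get(rec["data_class"])
        if limit and today - rec["created"] > timedelta(days=limit):
            flagged.append(rec["id"])
    return flagged

records = [
    {"id": "inv-2014-001", "data_class": "sox_financial", "created": date(2014, 3, 1)},
    {"id": "usr-555", "data_class": "gdpr_user_data", "created": date(2021, 1, 15)},
    {"id": "tmp-9", "data_class": "trivial", "created": date(2022, 1, 2)},
]
print(flag_records(records, today=date(2022, 6, 1)))  # -> ['inv-2014-001', 'tmp-9']
```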
Organizations shouldn’t have to turn a blind eye to increased cybersecurity threats and data leaks simply because they are unaware of cost-effective approaches to data management. Companies should learn the data management strategies that help identify data and organize what should be kept, deleted or sorted. In doing so, we change the digital world by securing the data ecosystem. Not only do these strategies help organizations create new value – they are also critical to cutting down costs and preventing cyberattacks.
Adrian Knapp is CEO & founder of Aparavi.
"
|
1,704 | 2,022 |
"Quantum computing promises to solve data center energy drain | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/quantum-computing-promises-to-solve-data-center-energy-drain"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Quantum computing promises to solve data center energy drain Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Data centers represent a massive drain on our world’s energy resources and are a major source of greenhouse gas emissions. These computing hubs produce 200 million tons of CO2 annually and consume 2% of electricity worldwide, according to Accenture, which projects that figure will reach 8% by 2030.
Aspen Global Change Institute adds that some of the world’s largest data centers use more than 100MW of power — enough to power about 80,000 U.S. households.
The data center as an energy drain became a hot topic in tech and political circles more than a decade ago. At the request of Congress in 2007, the Environmental Protection Agency (EPA) developed a report on server and data center energy use, costs, and efficiency opportunities.
This set off a green data center movement that gave rise to companies such as Verne Global , which created a hydroelectric- and geothermal-powered data center in Iceland. And after a Microsoft researcher wrote a paper proposing underwater data centers a few years later, Microsoft made a splash with an underwater data center that employed seawater for cooling. Meanwhile, Highlander recently signed an agreement to build a commercial underwater data center at Sanya, a coastal city in China.
Work to build more energy-efficient data centers continues at Microsoft , Amazon , Facebook , Google , Intel and an array of other companies. Many green data center efforts focus primarily on using renewable energy sources to power and/or cool standard computing equipment. But in a world that continues to battle a global pandemic and has seen U.S. workers quitting their jobs at record rates , sustainability doesn’t get as much attention as it has in the past.
However, as the World Health Organization (WHO) recently reminded us, climate change is the “single biggest health threat facing humanity,” leading to extreme weather events, disruption of food systems and the spread of diseases. And quantum computing can help address that.
Quantum computing can help power carbon fixation , the process of reducing carbon dioxide in the atmosphere by converting it into other useful compounds. Plants do this naturally, but quantum computers can help us discover synthetic catalytic processes. Instead of painstaking trial-and-error experiments, quantum computers can efficiently simulate alternatives and find efficient methods to extract carbon dioxide and convert it into useful chemicals.
It’s also worth considering how the choice of computing equipment — today and in the future — will impact energy usage. And you may be surprised to learn that quantum computers can perform some calculations much faster using just a fraction of the energy used by classical computers.
Here’s why: A conventional data center computer may use billions of transistors. But with a quantum computer, you have hundreds — or, eventually, millions — of qubits (quantum bits). That means you only need enough energy to excite, or move around, millions of atoms instead of switching billions of transistors. And quantum computers can analyze massive data sets in parallel, whereas classical computers need to analyze them serially.
I’m not alone in the belief that quantum computers will be vastly more energy-efficient than supercomputers on certain computational problems. Published research by a team of experts from NASA’s Ames Research Center, Google, and Oak Ridge National Lab has demonstrated this benefit. In their analysis, the quantum computer used 0.002% of the energy used by a classical computer to perform the same task.
Quantum computing will help companies and researchers solve some of the world’s previously unsolvable problems in such areas as drug discovery, electric vehicle battery innovation, and power grid optimization at a time when the world’s need for solutions is bigger than ever before.
The race is on for companies and countries to deploy quantum solutions to their advantage. But it’s important to remember that, when it comes to climate change, we’re all in this together. And we all stand to benefit from breakthroughs that quantum computing can enable. The fact that quantum computers require much less energy than conventional computers makes them even more valuable.
Nir Minerbi is co-founder and CEO of Classiq.
"
|
1,705 | 2,022 |
"Modernize your site tag management with a 1P tag manager | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/modernize-your-site-tag-management-with-a-1p-tag-manager"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Modernize your site tag management with a 1P tag manager Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Data collection, storage and processing have changed significantly, and the definitions of what is first party and third party are constantly evolving. One thing is for sure: the era of the limited cross-domain cookie is already here, very much like the end of the 3P cookie (aka cookieless).
The death of the IDFA (and tracking on iOS devices) Every mobile Apple device has a unique advertising ID called the IDFA. It makes it possible to identify a device (and, by extension, its user) to display tailored advertisements. Advertisers use it extensively to track individuals. They do this by collecting demographic and behavioral profiles for a given IDFA and then sharing them with third parties, who use the data to enhance how they segment and target users.
This has been happening without users knowing or consenting, but Apple is changing that with its new AppTrackingTransparency (ATT) framework. Starting with iOS 14.5, an app cannot track a user without first asking for consent. Apple defines tracking as “the act of linking user or device data collected from your app with user or device data collected from other companies’ apps, websites, or offline properties for targeted advertising or advertising measurement purposes.” This is a broad definition. It encompasses everything from advertisers to analytics to marketing attribution. And for tracking to continue, an individual must choose to be tracked. Even the most generous opt-in figures show drastic reductions.
Apple’s intended effect is to significantly curtail this activity. However, in essence, it seems like Apple is implementing GDPR with its original intent.
Advertisers and publishers that have over-relied on Google Tag Manager (GTM) are now struggling to keep their marketing signals, measurements and attribution stacks intact.
GTM is considered a tracker by all browsers outside of Chrome; cross-domain cookies are limiting identity; 3P cookies are the new “evil” and cross-domain cookies are treated as trackers; and enabling Google tracking before you run a script allows Google to see every user (for free). And we are at the beginning of the change cycle: links to Google Analytics are already being treated as invalid in the EU (France, Austria, Ireland).
Google Tag Manager is dead! Long live GTM! I know this sounds ominous; well, almost! But it is true: when your tag management system is blocked about 25-40% of the time, it’s hard to imagine that website owners can just continue as if nothing happened. Ad blockers block about 35-50% of Google Tag Manager requests, causing data loss for these site publishers. Don’t believe it — test it for free at truetraffic.io – True Traffic drops no cookies or IDs against your consumers but tells you the impact of data loss to your business.
By using tag management systems like Google Tag Manager, enterprises and agencies lose visibility every time a customer or prospect visits their site from an ad-blocking, iOS, Firefox, or privacy-focused browser (roughly 40% of them).
As every enterprise is expected to think about how to own its data, build trust (consent) and ensure compliance (regional and state laws), customer data must first enter the enterprise's own infrastructure and control (as the data controller) before it is distributed to the parties that need to co-process that data (as the law requires).
By delegating the rules of the game to a third party like Google Tag Manager, you expose your customers to a third party even before asking for permission. That's a classic catch-22, especially considering Google is the No. 1 advertising company in the world. To help alleviate this, Google has introduced a server-side version of Google Tag Manager that runs on Google Cloud Platform (GCP). However, the setup is cumbersome and expensive, and it does not enable site controllers to own their data.
The first-party relationship — directly between you and your users — is privileged. There are no platform constraints on the data that you collect yourself, although privacy regulations like the GDPR and CPRA still require consent for specific uses and types of data.
To do this, you need to collect, manage, analyze and share customer data yourself in your warehouse. Warehouse data can capture consent across platforms and even allow advertisers to decide how to share data. More critically, you can analyze the data that you have captured, and only share what users consent to, allowing you the best of both worlds — understanding your users and engaging third parties as permitted.
So, what's a 1P tag manager? A first-party tag manager runs on your domain (and under your control), in your own infrastructure (a serverless edge in front of your site or app), and decides which co-processor gets to see your customers' data, while keeping you (the site or app) responsible for those decisions (the controller). Essentially, it acts as a rule engine and a transformation engine, while replicating data to partners via an ID graph.
Today, even consent management platforms (CMPs) are third-party and lose customer identity every seven days. This creates gaps and personalization issues for sites that have lost track of an existing customer's identity via their CMP; a 1P tag manager solves this.
A 1P tag manager keeps a copy of a universal lifetime ID to map SaaS tools, consent and more; keeps a copy of the consent log against that lifetime ID; creates IDs for data sharing and opts out as necessary per partner; can transform data to make it innocuous; can detect which region a customer belongs to; and can plug in a new data co-processor by enabling consent for it, much like the old Google Tag Manager. A minimal sketch of this decision logic follows below.
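In the sketch, the class, partner names, consent purposes and field rules are illustrative assumptions, not any vendor's actual API:

```python
# Hypothetical sketch of 1P tag manager routing at the edge.
# All names (partners, purposes, fields) are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ConsentLog:
    # Consent decisions keyed by a lifetime first-party ID.
    decisions: dict = field(default_factory=dict)  # {lifetime_id: {purpose: bool}}

    def allows(self, lifetime_id: str, purpose: str) -> bool:
        return self.decisions.get(lifetime_id, {}).get(purpose, False)

# Each partner (co-processor) only ever receives the fields it is allowed to see.
PARTNER_RULES = {
    "analytics_vendor": {"purpose": "analytics", "fields": ["page", "region"]},
    "ads_vendor": {"purpose": "advertising", "fields": ["page", "campaign_id"]},
}

def route_event(event: dict, lifetime_id: str, consent: ConsentLog) -> dict:
    """Return the per-partner payloads the controller chooses to share."""
    outbound = {}
    for partner, rule in PARTNER_RULES.items():
        if not consent.allows(lifetime_id, rule["purpose"]):
            continue  # opted out: this partner sees nothing
        # Transform: forward only the allow-listed, innocuous fields.
        outbound[partner] = {k: event[k] for k in rule["fields"] if k in event}
    return outbound

# Example: a user who consented to analytics but not advertising.
consent = ConsentLog({"uid-123": {"analytics": True, "advertising": False}})
event = {"page": "/pricing", "region": "EU", "campaign_id": "summer", "email": "x@y.com"}
print(route_event(event, "uid-123", consent))
# {'analytics_vendor': {'page': '/pricing', 'region': 'EU'}}
```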
Critically, with a serverless edge tag manager, data never leaves your "controller" infrastructure until you, the controller, decide to share it, using software tools that themselves have no access to this data. The big wins are 100% visibility over your data, 100% control over whom the enterprise shares it with, and a trusted relationship with the customer.
What are the benefits? The No. 1 benefit of a 1P tag manager is a lifetime identity that only belongs to the site domain. This identity helps catalog all the necessary ID mapping, data sharing, opt-outs, consent log changes, etc., that should have been part of the tagging system.
So, instead of giving your valuable customer data to others, you are better off handling the data yourself. To do that, you will need a first-party data warehouse that manages and analyzes user and campaign data in your own infrastructure. And, if it is designed correctly, it will allow you to adjust to technology and policy changes — like Privacy Shield breaking in Europe — with relatively little difficulty.
As privacy and trust continue to evolve, it is essential to build trust while conducting business globally, to support regional zoning features, to enable compliance, and to get data analytics that accurately reflect the reality of your business.
Mandar Shinde is CEO of Blotout.io.
"
|
1,706 | 2,022 |
"Data is the strongest currency in marketing and there may be too much of it VentureBeat | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/data-is-the-strongest-currency-in-marketing-but-is-there-already-too-much-of-it"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Data is the strongest currency in marketing and there may be too much of it Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Data and information obtained through its analysis have been used in marketing decision-making for years. It wasn’t until the first years of this millennium that talk began about “big data.” Especially over the last 10 years, the amount and importance of data in marketing have grown exponentially. Ironically, the adjective “big” is completely undersized in this context.
According to many estimates, more than 90% of all data globally has been generated in just the last few years. It's estimated that by 2025 people will produce 463 billion gigabytes of data every day. According to statistics, as early as 2021, 4.66 billion people used the internet — about 60% of the population — and the number is growing by hundreds of millions every year. With the massive increase in usage and digitalization, Cybercrime Magazine has estimated that by 2025, cloud services will hold more than 200 zettabytes of data (one zettabyte is a trillion gigabytes).
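To keep those units straight, the conversions work out as follows (simple arithmetic, no external data):

```python
# Unit arithmetic behind the figures above.
GB = 10**9   # bytes in a gigabyte
EB = 10**18  # bytes in an exabyte
ZB = 10**21  # bytes in a zettabyte

print(ZB / GB)                    # 1e12: one zettabyte is a trillion gigabytes
print(200 * ZB / GB)              # 2e14 GB projected in cloud storage by 2025
print(463_000_000_000 * GB / EB)  # 463: daily data production in exabytes by 2025
```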
Where is most of it coming from? As early as the beginning of this millennium, marketing data was used primarily to track sales transactions and analyze the impact of email campaigns. Today, human-derived data for marketing is generated from a vastly larger number of sources: online shopping, clicks, search behavior, social media activity, geographic movement and so on. Brands want to meet consumers more effectively in the digital world, which is happening. For example, according to statistics from Business of Apps , nearly 70% of Instagram users viewed images and videos posted by brands in 2021.
At the same time, as the amount of data increases, its collection has become increasingly challenging due to various consumer protection regulations (e.g., GDPR and ePrivacy ) and changes in services. For example, changes in how Apple and Facebook allow their app users to decide on their own data are very welcome to consumers, but they reduce the possibilities for apps to gather data and make it harder for service providers to offer customized services.
Apple's decision to deprecate the use of its unique IDFA (identifier for advertisers) falls into the same category. These changes have fundamentally affected marketing strategies and caused new challenges for marketers.
The sheer, near-infinite quantity of data and its inevitable growth are major problems for today's marketers. No team has the physical ability to process such an amount of data, let alone to produce genuinely useful analyses from it. Fortunately, the data-driven world seems to recognize and solve its own challenges, as many new intelligent products and services for analyzing data have emerged to support marketers worldwide in truly leveraging the ever-growing amount of information. This development is still in its infancy, which can be witnessed, for example, in my own company Supermetrics' new clients: 80% of them have never used this kind of service before.
Marketing and the need for data rules Legislators and decision-makers worldwide have also been active in regulating data, although in many places it's almost impossible to keep pace with the change. The genuine exploitation of data requires rules and regulations, as growth always increases the potential for misuse. The task of technology companies is to build data pipelines that ensure the trust and security of AI and analytics.
Data is the new currency for businesses, and its overwhelming growth rate can be intimidating. The key challenge is to harness data in a way that benefits both marketers and the consumers who produce it, and, in doing so, to manage "big data" in an ethically correct and consumer-friendly way. Luckily, there are many great services for analyzing data, effective regulation to protect consumers' rights and a never-ending supply of information at hand to make better products and services. The key for businesses is to embrace these technologies so that they can avoid sinking in their own data.
Mikael Thuneberg is CEO & founder of Supermetrics.
"
|
1,707 | 2,022 |
"Blockchain interoperability is essential to avoid the flaws of Web2 | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/blockchain-interoperability-is-essential-to-avoid-the-flaws-of-web2"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Blockchain interoperability is essential to avoid the flaws of Web2 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Blockchains are not merely storage and communication protocols. Each of them has a history, community and culture worth protecting.
Some communities are more focused on creating "sound money" alternatives to current fiat systems. Others are working hard to maximize raw computing power or storage capacity. Some blockchains allow users to collect basketball shots and other sports moments. Others are emerging as metaverses where a particular cultural or gaming community can develop.
We need to nurture spaces for these communities to grow and innovate. Like borders, languages and currencies, blockchain designs allow cultural particularities to thrive instead of being absorbed by the more powerful neighbor.
We need to promote diversity. And, just like in the real world, we must also encourage dialogue between communities. We must invest in bridges that allow blockchain ecosystems to communicate, as long as these bridges emerge organically to serve the needs of their users, rather than top-down as a result of government-sponsored standards.
Interoperability holds the key to a multichain world Blockchain interoperability is not a set rule book. It refers to a broad range of techniques that allow different blockchains to listen to each other, transfer digital assets and data between one another and enable better collaboration. There are decentralized cross-chain bridges that facilitate the transfer of data and assets between Ethereum, Bitcoin, EOS, Binance Smart Chain, Litecoin and other blockchains.
Currently, the main use cases of interoperability are: first, the transmission of a given cryptocurrency’s liquidity from one blockchain to another. Second, allowing users to trade an asset on one chain for another asset on another chain. Third, enabling users to borrow assets on one chain by posting tokens or NFTs as collateral on another chain.
Each bridging technique makes its own design compromises in terms of convenience, speed, security and trust assumptions. Each blockchain operates on different sets of rules and bridges serve as a neutral zone where users can switch between one and the other. It greatly enhances the experience for users.
For end-users, these trade-offs may not be easy to understand. Furthermore, the risks associated with each bridge technique may compound each other whenever an asset crosses several bridges to reach the hands of the end-user.
Call to action As members of the Web3 ecosystem, we share a responsibility not just to promote a multichain world, but also to make it safer as more users begin to enter Web3.
Everyone has a role to play. Cross-chain bridges must be transparent about risks and resist the temptation of growth at all costs; they must also publish bug bounties. Security researchers and analytics platforms should publish public risk ratings and report incidents. Blockchain protocols and wallet operators should agree on lists of cryptocurrencies and smart contracts officially supported on each chain. Dapp developers should aim to deliver simple user experiences without throwing away the core tenets of decentralization and user ownership. And media outlets and key opinion leaders must help end-users to “do their own research.” We must move away from “winner takes all dynamics” and offer a better future to user and developer communities. The Web3 movement gained traction because we wanted — and still want — to move away from the shackles of centralization. The seamless flow of information and tokens between different blockchains will be a major push towards a truly decentralized, multichain economy.
Ken Timsit is the managing director of Cronos.
"
|
1,708 | 2,022 |
"Big tech vs. data privacy: It wasn't meant to be this way | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/big-tech-vs-data-privacy-it-wasnt-meant-to-be-this-way"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Big tech vs. data privacy: It wasn’t meant to be this way Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Privacy.
You would be hard-pressed to find a word used as frequently or with as much weight in recent years. As the world hurtles towards an increasingly digital future, concerns over data privacy have reached a fever pitch.
From high-profile cases of data breaches to tales of government surveillance, it seems that scarcely a day goes by without another story about how our personal information is being mishandled.
Threats to privacy in the age of big tech The digital age has brought with it many amazing advances, but it has also created new threats to our privacy. One of the biggest dangers comes from the way that big tech companies collect and use our data.
Most people are now familiar with the term “data mining” — the process by which companies collect large amounts of data about our online activity and use it to target ads and sell products. But data mining is just the tip of the iceberg. Many tech companies are now using sophisticated methods to track our every move, both online and offline.
This tracking is made possible by the proliferation of devices that are connected to the internet. These devices collect a wealth of data about our whereabouts, our behaviors and even our physiology. This data is then used to create detailed profiles of each individual user.
These profiles are extremely valuable to companies, who use them to target ads, sell products and influence our behavior. In other words, they use our personal information to make money.
This business model has come under fire in recent years, as more and more people have become aware of the ways that their personal data is being used without their consent.
The way forward So what can be done to protect our privacy in the age of big tech? There are no easy answers, but there are some steps that we can take to help ensure that our privacy is not violated.
The answer is not merely a matter of public policy — but an overall paradigm shift in the way we think about privacy. This would of course involve: Advocating for stronger privacy laws.
Using privacy-preserving technologies.
Educating yourself and others about privacy issues.
Being vigilant about how we share our personal information.
Understanding the need for a decentralized and democratized web.
This last point goes back to the original intent of the internet. As a matter of fact, the internet was designed to be a decentralized network, where each user could connect to any other user without going through a central server. This design was based on the belief that decentralization would make the internet more resistant to censorship.
The future of privacy Unfortunately, this vision has not been realized. Instead, we have seen the rise of a small number of giant tech companies that now control most of the internet. These companies use their power to censor and manipulate the information that we see, and they collect vast amounts of data about our online activity.
This centralization of power is dangerous for democracy and privacy. It gives these companies too much control over our lives, and it makes it easy for them to violate our rights.
We need to build a new internet that is decentralized and democratized. This new internet should be designed to protect our privacy and promote free speech. It should give power back to the people, and it should be resistant to censorship and control.
The first step in building this new internet is to create decentralized alternatives to the centralized services that we use today. These alternatives will be built on the principles of privacy, security and freedom.
Privacy is a complex issue, and there is no one-size-fits-all solution. But by taking some simple steps to protect our privacy, we can make a difference. We can make sure that our data is not used to violate our rights, and we can help build a new internet that is free from censorship and control.
Daniel Saito is CEO and cofounder of StrongNode.
"
|
1,709 | 2,022 |
"Why observability data is crucial for digital transformation | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/why-observability-data-is-crucial-for-digital-transformation"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Why observability data is crucial for digital transformation Share on Facebook Share on X Share on LinkedIn Presented by Era Software In 2022, observability data volumes could increase between two to five times, according to Era Software’s 2022 State of Observability and Log Management report. That means companies could be looking at exabytes of data to manage in five years. Current tools aren’t up to the task, 79% of IT practitioners say, and costs will skyrocket in 2022 if existing tools don’t evolve.
But storage isn’t the only issue; 96% say that even more critical is finding efficient ways to apply that data to solving business problems – and 100% say their organizations would benefit from innovation in observability.
"It's becoming harder and harder for engineering and technology organizations to figure out which pieces of this growing pile of log data are most important," says Todd Persen, CEO and co-founder, Era Software. "As those data volumes have gotten bigger than humans can even review or grasp, the tools to store that data have started to break down." The problem with traditional monitoring As companies move into the digital age, they're also moving from traditional two-tier application architecture to multi-tier application architecture across multiple cloud environments and managed services. The IT team doesn't have direct control over these services and instead is dependent on what the cloud provider reports. Even though the IT team remains responsible for the performance of the company's application, it might not understand or have visibility into the underlying technology and how it's performing.
Without observability, the acceleration of digital transformation could be a risky journey, resulting in poorly performing services that will ultimately impact both the customer experience and the bottom line. But while observability is a straightforward goal, and many organizations realize that existing monitoring tools cannot keep up with the massive data volumes created by modern cloud environments, they’re looking for new ways to efficiently extract critical insights from observability data.
“It’s not even just the number of systems — it’s that the operational modes of these systems have become so complex that even if you’re the developer who built the system, you can find yourself at an impasse,” Persen says. “How do you gain insight into what it’s actually doing as a dynamic system, and what metrics matter? What do you need to look at to tell why a system is failing?” Security also remains a considerable challenge since security organizations need to analyze massive amounts of log data to identify potential security incidents and for security audits and compliance reporting. However, many organizations are forced to limit the number of logs they ingest or store because it’s too expensive to keep them all. As a result of this forced picking and choosing, many security leaders say they don’t have the logs they need to troubleshoot security incidents, which negatively impacts response efforts and increases vulnerability.
Why observability is essential Observability bridges the gap between legacy technology and modern approaches to data management. It’s an evolution of traditional monitoring towards understanding deep insights from analyzing high volumes of logs, metrics, and traces collected from many modern cloud environments. It ensures the delivery of reliable digital services in the face of the increasing complexity of cloud services. And it’s more and more necessary for any company that’s embarking on digital transformation.
“People realize that as they’re going on this digital transformation journey, they’re adopting more tools and more products and adopting more scope and more things they need to monitor and observe,” Persen says. “Observability is an enabler because it lets people have the confidence that these new systems are doing what they want. But at the same time, it’s become table stakes for digital transformation — having a good observability story is an essential part of success.” The State of Observability and Log Management report also revealed that IT departments are erasing data to manage the cost of collecting and storing log data with more traditional tools. But ditching the data means losing critical information needed later for forensics and security analysis.
“Imagine you have an attack and don’t have the data to figure out where it’s coming from – you’re exposing your organization to risk,” Persen says. “Not only are you exposing yourself, but if you’re not properly logging everything and potentially masking personally identifiable information, you can accidentally expose that PII.” Key observability tools While the cloud offers unprecedented efficiency, unlocks innovation, and slashes costs, it’s also made it a lot more complicated to figure out how to execute cloud digital transformation the right way. How do you build a business on top of these complex systems? “At the end of the day, most companies are not in the business of managing or dealing with infrastructure. They’re in the business of providing a core service,” Persen says. “How do they stay effective while going down this relatively new and uncharted course? It’s hard, and we see our role as trying to find a way to provide a consistent set of tools in the observability space that can fit anywhere that the customer needs to go.” Platforms like Era Software Observability Data Management , which process data between different sources and destinations at scale, plus cost-effectively store and optimize it for analysis, are the way of the future.
IT and security teams should look for a platform that can gain insights from raw data, reducing TCO for existing observability and log management solutions while preserving information in low-cost object storage. This data can be used for forensics, auditing, baselining, and seasonal trends analyses.
Persen also notes the importance of a platform that’s not dependent on any particular architecture but has the flexibility to function in systems from traditional on-prem to hybrid cloud to cloud.
And one of the biggest benefits overall of observability data platforms like these is significant cost savings due to the efficiency the technology brings to observability workloads. When you consider the budget dedicated to log management, reducing costs means having the option to store more data and improve visibility. This leads to more reliable services, freeing up resources to invest in innovation.
There’s a broader benefit too: By centralizing everything and removing artificial limits on who can access what amount of data or how much should be logged, data is democratized for the entire organization, allowing everyone to see the full view and derive insights from it.
“Data democratization, providing access to the entire organization, allows everyone to get big business benefits,” Persen says. “It’s not just data made available to ITOps for troubleshooting. You see everything. You see customer interactions. You see information about application performance. You see the trends in your customers. It’s a gold mine of data for the entire organization.” Dig deeper: Read the 2022 State of Observability and Log Management report here.
"
|
1,710 | 2,022 |
"Why companies need to bridge the digital-physical divide -- and how | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/why-companies-need-to-bridge-the-digital-physical-divide-and-how"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Why companies need to bridge the digital-physical divide — and how Share on Facebook Share on X Share on LinkedIn Presented by Foursquare For two years now, we have been waiting for things to return to “normal.” When can we grab coffee with friends? When can employees return to the office? When will customers return to stores? The reality is that the pandemic has upended everything, from how people buy to what they are purchasing. What’s more, the old consumer model and trends that businesses have relied on are obsolete. At least for now. Despite all the changes in consumer behavior, one thing is clear: we are seeing the unification of digital and physical emerging as a key trend for 2022.
Location is shifting from just an add-on feature to a necessity, and businesses are learning that it’s essential to build meaningful bridges between digital spaces and physical places, in a way that protects people’s privacy. In fact, 95% of executives say that geospatial data is important to achieve desired business results today, and 91% say that it will be even more essential in the next three to five years, according to a recent study from BCG.
There are a few reasons for this shift, says Ankit Patel, SVP of Engineering at Foursquare. First is simply that people are heading outside.
“Coming out of the pandemic, we’re seeing an explosion of people wanting to engage with the real world,” he says. “The foot traffic patterns and places where people frequented changed drastically at the beginning of the pandemic, and they are changing just as drastically now. Businesses need to understand their customers’ changing needs.” For example, return-to-office policies, supply chain issues, and rising gas prices are all dramatically influencing the places people go.
New consumer behaviors mean companies need to rethink the very products they create and services they provide. This includes how they’re staffing, outsourcing, and even the customers they are targeting.
Businesses need to be able to meet customers where they are, making location data an extremely valuable resource. The line between the digital and physical world is continuing to blur, with consumers interacting in both spaces seamlessly and necessitating companies to do the same as well.
The power of location data for decision-making From a business standpoint, location intelligence presents opportunities in a number of different areas.
Quantitative investment models.
Savvy investors leverage alternative data sets to help guide their strategies. Foot traffic can serve as a powerful signal as to which companies are either declining or surging in popularity. By adding that visitation data into their models, investors can forecast trends sooner and outperform the market.
Demand forecasting and supply chain optimizations.
Businesses typically plan their inventory and forecast demand based on historical sales, sometimes incorporating external data sets like weather data or survey data. By incorporating foot traffic data into their models, businesses can better identify, capitalize on, and predict trends, such as the types of stores people are visiting and the peak times when they are shopping — thereby improving the efficiency of their models and reducing waste. Armed with real-world data, businesses can ensure they have the right products that people want in the right locations.
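As a toy illustration only (synthetic numbers, not Foursquare data or any vendor API), folding a foot-traffic signal into a simple demand model might look like this:

```python
# Toy illustration: synthetic data, not Foursquare data or any vendor API.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
weeks = 52
foot_traffic = rng.normal(1000, 150, weeks)      # weekly store visits
promo = rng.integers(0, 2, weeks)                # 1 if a promotion ran that week
# Pretend demand responds to both signals, plus noise.
units_sold = 0.4 * foot_traffic + 120 * promo + rng.normal(0, 40, weeks)

X = np.column_stack([foot_traffic, promo])
model = LinearRegression().fit(X, units_sold)

# Forecast next week, given an expected visitation level from location data.
print(model.coef_, model.predict(np.array([[1250, 1]])))
```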
In-store shopping experiences.
Retailers are increasingly using location technology to create better shopping experiences, from helping people find a store nearby, to optimizing the store's layout so that people can find the products they want when they arrive. Location technology serves as a powerful tool for retailers to stay ahead of ecommerce competitors, and to complement their own ecommerce offerings.
Site selection.
Location data has become indispensable both for commercial real estate, and for brands with brick-and-mortar presence looking to expand their footprints. According to a recent study , 60% of commercial real estate professionals report they struggle to make timely investment decisions to keep up with shifting market trends. Up to 58% admit they still rely on outdated or static data and tools to make business decisions.
For this reason, businesses are increasingly turning to location intelligence to make the right decisions about where to invest in real estate, taking into account movement patterns, changes in neighborhood landscapes and more. In fact, the same study showed that 92% of commercial real estate executives expect their spending on location technology to increase in the next two years. Using granular and timely insights to inform real estate strategies has become even more critical since the onset of the pandemic, as today’s business leaders are also facing decisions about whether — and where — to invest in office space.
Location data and advertising For businesses trying to maximize the ROI of their advertising spend (which is all of them), cost per click is a standard metric — but not every business is designed for online conversions. Instead, many advertisers are looking to drive people into physical stores. Or they may be looking to influence more complex behaviors such as buy-online-pickup-in-store, order ahead, and click-to-collect.
Accurate, timely insights based on foot traffic data can help marketers understand the entire customer journey — how customers are interacting with their business, and more generally with the world around them. When businesses can accurately quantify the impact of their advertising on both online and offline behaviors, they can better optimize strategies in real-time to maximize their return on investment.
“We’ve seen that advertisers who don’t use location technology understand only half of their customers’ journey,” Patel says. “Unable to bridge their online and physical presence, these companies are leaving essential money on the table.” In action: Improving user experiences with location “In addition to driving smarter decisions, businesses want to incorporate location technology in order to power better experiences for their end users,” says Patel. “Today’s consumers expect personalization. With the help of our technology, product managers and developers can solve for that need.” One example of this is Nextdoor, which helps users explore their neighborhood and engage with their community. The company uses Foursquare’s point-of-interest (POI) data to improve global business data coverage and quality, enabling them to surface local recommendations for users along with relevant attributes (such as business hours) in its onboarding and discovery experiences. POI data also helps Nextdoor with lead generation, boosting the number of local businesses that officially claim their own listings within the app.
Foursquare also has a partnership with Doordash, providing POI data for the ‘Request a Restaurant’ feature, which lets Doordash users request a recently opened neighborhood restaurant to be added to Doordash’s pick-up and delivery service. This feature has generated a new data stream for their team, helping them understand where their users are requesting certain cuisines, the trending types of restaurants across markets, and which cuisines are surging or declining in popularity in a granular way to inform recommendation algorithms. Fresh POI data with rich context cues means businesses can better understand their users, and tailor more meaningful experiences accordingly.
Big data and beyond Big data is still a problem, despite the fact that we’ve been talking about it for years. Companies are grappling with how to store, process, and visualize petabyte-scale data, or billions of points of data. However, companies aren’t actioning on some of their most valuable data — including location data. Even if they are, sometimes it’s hard to know, from all the data, how to make the right business decision. How do you get a good signal, distinguish correlation and causation, and make sense of how the pieces on the board are moving? “The volume of data being produced and collected is skyrocketing, and it’s becoming increasingly difficult to understand what information is relevant and how to use all of this data in a meaningful way,” Patel says. “Investing in data science will be key. Using sophisticated algorithms and models, data scientists can extract knowledge and uncover insights to drive real, impactful business decisions.” Looking at 2022 and beyond, businesses will need to expand in-house data science teams and integrate more advanced third-party solutions, such as Foursquare’s Unfolded platform. It’s a geospatial analytics and visualization tool that makes it easy for users to drag and drop large data sets, surface insights, and produce powerful visualizations. A game-changing tool within Unfolded is Hex Tiles, a new tiling system that enables users to process, unify, and analyze bigger volumes of geospatial data than previously possible. What’s more, this can now be done in a matter of minutes, all within a browser.
The goal with these types of innovations is to empower Foursquare's customers and partners to discover the rich insights from location data and build better, more personalized experiences for users. "As with all things, we're in a state now where I think it will be almost irresponsible for companies not to unlock the massive potential of location data to drive better decision-making," he says. "2022 is turning out to be a pivotal year, I believe, in location data. Something that was once a competitive advantage will soon be a competitive necessity. Businesses that don't adapt will be left behind."
"
|
1,711 | 2,022 |
"ThoughtSpot adds new BI capabilities, editions for smaller organizations | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/thoughtspot-adds-new-bi-capabilities-editions-for-smaller-organizations"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages ThoughtSpot adds new BI capabilities, editions for smaller organizations Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
At its Beyond 2022 conference yesterday, independent business intelligence (BI) player ThoughtSpot announced the salient points of its revamped Modern Analytics Cloud platform, including new capabilities and new editions available for small teams, medium-sized entities and large enterprise organizations.
ThoughtSpot’s Series F fundraising round, back in November, garnered $100M and a $4.2 billion valuation for the company. The valuation reset accompanied a change in business model, too. While the platform premiered on the market with a monolithic, on-premises natural language-focused business intelligence platform and a six-figure price tag, ThoughtSpot completely switched gears, moving to a fully-SaaS model.
The technology has changed along with the deployment model. While the original ThoughtSpot platform required all data to be ingested into, and modeled within, its own storage platform, it now leverages major data warehouse and lakehouse platforms — including Amazon Redshift , Snowflake , Databricks , Google BigQuery , Starburst , Dremio and Microsoft’s Azure Synapse Analytics — for the actual storage of data.
Essentially, ThoughtSpot is now implemented as an analytics engine, with its own modeling language , and no longer seeks to be the physical repository for the data. This avoids lengthy data movement and inefficient, risky data duplication, taking a customer-driven approach rather than a vendor-centric one.
New editions to its analytics platform Yesterday's announcement rang in further changes to the pricing model, making ThoughtSpot's analytics platform available in three editions, which differ in data volume/capacity limits, but which impose no restrictions on number of users. A Team Edition is available for $95/month, with a data volume limit of 5 million rows. While there is no limit in the number of users, there is a limit of one group of users, making the Team edition a departmental solution, with appropriate nomenclature. Team Edition offers unlimited queries, and support is community-based.
Pro Edition starts at $2,500/month, with a limit of 100 million rows — increasing the data capacity by 20x and the price by roughly 26x — so the rows-per-dollar ratio actually decreases a bit. For Pro, the number of user groups increases from one to five, making it a solution more appropriate for small and medium-sized organizations or as a divisional solution for larger ones. 24/7 direct support by ThoughtSpot is part of the package, with certain service-level agreements (SLAs). The actual monthly billable amount for Pro will vary by query activity; however, startups, nonprofits and educational institutions with fewer than 100 people and under $10 million in annual revenue are eligible for a special variant of Pro that eliminates per-query charges.
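To make the rows-per-dollar comparison explicit, using only the list prices and caps stated above:

```python
# Rows-per-dollar at the published caps and list prices.
team_rows, team_price = 5_000_000, 95      # Team: 5M rows, $95/month
pro_rows, pro_price = 100_000_000, 2_500   # Pro: 100M rows, $2,500/month

print(team_rows / team_price)   # ~52,600 rows per dollar
print(pro_rows / pro_price)     # 40,000 rows per dollar
```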
The top-of-the-line solution is Enterprise Edition, which eliminates caps on data volume and number of user groups. Here too, pricing is based on actual queries, and capabilities include higher-grade SLAs, enhanced data encryption , support for AWS PrivateLink / Azure Private Link , single sign on (SSO) and VPN support.
ThoughtSpot offers features galore In addition to the new editions and pricing, ThoughtSpot announced several new core capabilities. These include a new "CodeSpot" searchable repository of open-source ThoughtSpot blocks and code samples; ELT Live Analytics templates (custom ELT jobs built to work with Matillion ); new third-party data blocks; integration with dbt Labs' SQL-based data pipeline platform; and new SpotApps, with templates for ServiceNow, Snowflake, HubSpot, Okta, Google Analytics, Google Ads, Jira, Redshift and Databricks.
Also announced were ThoughtSpot Sync, which can trigger actions in other applications and services through APIs; Bring Your Own Charts , which lets customers bring visualizations from javascript or d3 libraries directly into ThoughtSpot’s Live Analytics interface; and Monitor, an automated KPI observation and alerting facility.
Compare and contrast For my own purposes, I like to analyze new offerings relative to others in the market, both to determine value, but also to observe industry trends. The availability of three pricing tiers for ThoughtSpot’s platform, as well as its cloud orientation, begs some comparison to Microsoft’s Power BI.
The latter offers three major tiers as well: Free, Pro and Premium, with the last of these starting at $4995/month and aimed at enterprises, much like ThoughtSpot’s Enterprise Edition.
There are key differences, though. While Power BI Premium doesn’t limit the number of consumption-only users, it does have additional per-seat pricing for users who need authoring capabilities. On the other hand, it offers dedicated infrastructure and doesn’t have any usage-based fees. Of course, the higher the usage, the more compute capacity a customer may want, which would mean adding dedicated infrastructure nodes, with a commensurate increase in monthly pricing. One way or another, you get what you pay for, or vice versa.
Meanwhile, Power BI lets users import data into their BI models or leave it in the source system. It also provides for so-called composite models, where data storage for a single BI model can be split between local and remote sources.
Trends, not fads The point here, though, isn’t to measure parity between ThoughtSpot and other BI platforms, but rather to discern some trends of consensus in the market. What we can see overall, is that business intelligence, which has been around since the 1990s, maintains its core tenets of slice-and-dice analytics but has modernized with the sea changes in database technology and computing overall. Today, it’s all about the cloud, integrating with other platforms in the ecosystem, and leveraging data from a variety of sources, without requiring the data to be moved.
The barriers to entry for BI have been lowered, with simplified getting-started experiences and very accessible pricing for smaller organizations. Large organizations will still pay handsomely, but will ostensibly see comparably handsome ROI, in terms of operational efficiencies and competitive differentiation. The doctrine of data-driven operation and digital transformation is enabled by BI, which needs to be low-friction and accessible at the low end, while facilitating robust rewards, usually accompanied by equally robust pricing, at the high end.
ThoughtSpot and its platform have changed immensely since the early days, as has the BI space, with so many players having been acquired in the last few years. ThoughtSpot is now well-aligned with industry trends and seems driven by them. If the remaining independents like ThoughtSpot are to succeed, they’ll need to conform to these trends and even get a bit ahead of them. Some will do well there; others less so. ThoughtSpot is clearly all-in on retooling and revamping for today’s analytics workloads, despite the business intelligence market’s evolution into a very crowded, competitive space.
"
|
1,712 | 2,022 |
"Stripe launches Data Pipeline to help users sync payments data with Redshift and Snowflake | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/stripe-launches-data-pipeline-to-help-users-sync-payments-data-with-redshift-and-snowflake"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Stripe launches Data Pipeline to help users sync payments data with Redshift and Snowflake Share on Facebook Share on X Share on LinkedIn Stripe logo on a phone screen Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Payments processing giant Stripe has announced a new product designed to help users synchronize their financial data with Amazon Redshift and Snowflake.
In a software-centric world, companies of all sizes possess an arsenal of data which is usually spread across myriad silos, from customer support tools and CRM applications, to marketing and — in the case of Stripe — payments. Accessing all this data in their silos is easy, but gleaning deep insights by combining and querying different data sets in a centralized data warehouse is a different ball game.
Stripe Data Pipeline , as Stripe’s new product is called, is positioned to supplant existing mechanisms that companies may use to transport Stripe data into data warehouses, such as having to build an API integration in-house. This is a time-consuming endeavor that requires many engineering dollars, and even then, the inherent latency can hinder timely data access.
“Building an API integration from scratch requires multiple months and hundreds of thousands of dollars,” Stripe’s product lead Vladi Shunturov explained to VentureBeat. “Engineers also need to consistently monitor and update their homegrown solutions to support transaction updates, new datasets, schema changes and more. Stripe Data Pipeline can be set up within a few clicks and takes on all ongoing operational work.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Native pipeline It’s worth noting that companies can also use data integration platforms such as Airbyte which offer pre-built connectors to transform and transfer data from Stripe into Snowflake and Redshift.
However, such ETL (extract, transform, load) integration tools can lead to “incomplete results,” given that they don’t support the full gamut of Stripe data. Indeed, ETL pipelines can only pull data from Stripe’s core REST API , whereas the Stripe Data Pipeline offers unfettered access to Stripe data such as that available via its reporting API, which delivers a host of “business-ready metrics,” according to Shunturov. This includes data relating to Interchange Cost Plus ( IC+ ) fees, which is concerned with transactions and Stripe balance changes. It also means that companies can access revenue and financial reports out-of-the-box — directly from their data warehouse.
“This significantly reduces the amount of work our users need to do in order to transform their account and transaction data into meaningful reports, metrics and insights,” Shunturov said.
And so with its new Data Pipeline, Stripe is helping companies consolidate all their payments and financial data in their existing warehouses, so they can extract key business insights with fewer roadblocks. It’s worth noting that this functionality could be used by just about any team — a security and fraud unit at a food delivery platform, for example, could combine Stripe data with other business data to identify which restaurants may be most susceptible to fraud. Or analytics and product teams could find potential new growth opportunities by following the flow of payments through a customer’s entire lifecycle to figure out profitability, margin and ways to cut costs.
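As a purely hypothetical sketch of that kind of analysis (the table and column names, schema layout and connection details are assumptions, not Stripe's documented Data Pipeline schema), a fraud team might join synced charge and dispute data with its own restaurant table in Snowflake:

```python
# Hypothetical sketch: table/column names, schema layout and connection
# details are assumptions, not Stripe's documented Data Pipeline schema.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="analyst", password="***",
    warehouse="ANALYTICS_WH", database="ANALYTICS", schema="PUBLIC",
)

DISPUTE_RATE_BY_RESTAURANT = """
    SELECT r.restaurant_id,
           COUNT_IF(d.id IS NOT NULL) / COUNT(c.id) AS dispute_rate
    FROM stripe.charges c
    LEFT JOIN stripe.disputes d ON d.charge = c.id
    JOIN internal.restaurants r ON r.stripe_customer_id = c.customer
    GROUP BY r.restaurant_id
    ORDER BY dispute_rate DESC
    LIMIT 20
"""

cur = conn.cursor()
try:
    # execute() returns the cursor, which is iterable row by row.
    for restaurant_id, dispute_rate in cur.execute(DISPUTE_RATE_BY_RESTAURANT):
        print(restaurant_id, dispute_rate)
finally:
    cur.close()
    conn.close()
```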
According to Stripe, businesses such as Zoom, Lime and HubSpot were already using Data Pipeline ahead of today’s formal launch. And in the future, there could be scope to extend support to other data warehouses, such as Databricks or BigQuery, though Stripe wouldn’t confirm plans.
“We’re actively looking at ways to enhance our product offering, but we don’t have any specific plans to share at this time,” Shunturov said.
Data Pipeline is available now to all Stripe customers in the U.S. who are also customers of Amazon Redshift or Snowflake’s Data Cloud.
"
|