Columns:
id — int64 (0 to 17.2k)
year — int64 (2k to 2.02k)
title — string (length 7 to 208)
url — string (length 20 to 263)
text — string (length 852 to 324k)
15167
2021
"Decentralization may be key to protecting our digital identities | VentureBeat"
"https://venturebeat.com/2021/11/06/decentralization-may-be-key-to-protecting-our-digital-identities"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Decentralization may be key to protecting our digital identities Share on Facebook Share on X Share on LinkedIn Data security Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Debra J. Farber, Privacy Strategist at Hedera Hashgraph The internet as we know it is broken. When it was originally created with the primary objective of facilitating information sharing, this meant that security and user privacy were little more than an afterthought. The original data architectures were based on the concept of stand-alone computers, which companies used to store data centrally on a server that could be sent or retrieved by a second counterparty. To reap the benefits of the internet both on an individual level and at a societal level, each user needs a digital identity. There are many interpretations of the term “ digital identity ” which range from email addresses and social media accounts to actual forms of digital identification such as passports or driving licenses used for authentication in real-life scenarios. As the UN strives to ensure that everyone on the planet has a legal identity by 2030, the topic of digital identification has become more pertinent, prompting companies like Microsoft and Accenture to look at ways to provide digital identities to the 1.1 billion people around the world with no official documentation. As we enter a new era where our driving licenses are stored on our phones , it’s important to remember that the world was a different place when the foundations for the internet were laid down. Consequently, the internet that we rely on today still sits on shaky footing when it comes to user privacy and security. This is cause for a host of problems – and it’s a large part of the reason why Microsoft and Accenture have turned to blockchain to help the UN realize its goal of providing each member of the global population with a legal identity over the next decade. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Losing control of our digital identities According to Statista , the number of social network users worldwide reached 3.6 billion in 2020, with that figure forecast to grow to 4.41 billion by 2025. On the vast majority of existing social media platforms, email clients, and the array of other tools we use to communicate online, individual users do not hold ownership of their digital identities. 
Instead, these identities are managed and owned by some of the largest and most powerful companies in the world. This comes at a cost to our personal, professional, and financial data. Under the control of large businesses, this data is used and analyzed for advertising , marketing and to predict our collective future behaviors. Banks can see how and where you spend your money; retailers can identify patterns in your shopping habits; while social media platforms know who you know and what you are interested in. We have seen time and time again how this information can be exposed or exploited through the rising number of cyberattacks like the recent “catastrophic” data breach involving Ireland’s Health Service Executive (HSE) and other notable incidents, such as the Cambridge Analytica scandal. What makes this more worrying is the fragmented nature of our digital identities – many longtime Internet users would be unable to list off the names of every single website or app to which they have ever registered. For many people, there are little pieces of our identities spread all over the internet for which we can’t even account. It raises a serious question of trust. With more and more devices getting connected to the internet, virtually all of our data is still centrally stored: on our computers or other devices, or in the cloud. Can we trust the businesses, organizations, and institutions that store and manage our data against any form of corruption – either internally or externally, on purpose or by accident? In the instance of Ireland’s HSE cyberattack which occurred in May 2021, hackers gained access to highly sensitive data which was centrally stored by the Irish health service. According to the BBC , by September 2021, 95% of servers and devices had been restored – meaning that the HSE has yet to restore all devices and services impacted by the incident. The attack on Ireland’s health service illustrates just how high the stakes can be when it comes to storing data centrally. In comparison, distributed ledger technologies (DLT) store data in cryptographically linked blocks which are nearly impossible to tamper with, ensuring there is no single point of failure, as there is when centrally storing data. Taking back ownership For data protection to meet the privacy standards required by internet users in the 21st century, it’s imperative that we aim to give individuals the ability to control and manage their own identities and the personal data tied to their identities. The key to providing this is through decentralization. Decentralization offers additional security when compared to the centralized architecture on which today’s internet relies. With the existing internet, problems such as server misconfigurations on the cloud can result in data leaks or disrupted service. If there is a single point of failure, numerous parties could see their data compromised if or when a central controller is compromised. With a decentralized identity , identifiers such as usernames can be replaced with IDs that are self-owned, as opposed to the existing usernames that we use, which are owned and controlled by social media companies or other entities online. These identities work in a trust framework that uses blockchain and DLT to ensure the privacy of users while enabling secure transactions. Researchers around the world are working to create alternatives to our existing digital identities through the decentralized web, or Web 3.0. 
These alternatives use new protocols that remove the need for intermediaries during transactions, while further democratizing the web and bringing value back to creators and participants. The goal is to enable internet users to verify their credentials without depending on intermediaries while managing their own identities. This will create a fair, secure, fast, and scalable new internet with a stronger focus on security and privacy. Who is held accountable? By nature, data on a distributed ledger is owned by each node – meaning that each computer on the network has access to the same data, enabling more secure, effective data management and storage for everyday users and producers of such data. The challenge, however, remains that existing privacy and data protection regulations require that one owner be accountable and responsible for all data privacy requirements. One approach for DLT networks to ensure compliance with data privacy regulations is to use index numbers tied to personal data in a separate database, rather than storing personal data on the blockchain. By utilizing this approach, one organization can secure and own that database, while still sharing the data pseudonymously on the blockchain, where it reveals nothing to anyone else who sees it on-chain. While this may add a little more centralization to your dApp, it will keep your company compliant until global laws are updated to acknowledge and enable the full capabilities of DLT. While it is not yet a perfect solution, it is currently the best alternative to the existing system of centralization and the risks associated with having a single point of failure. By introducing decentralization, there is an opportunity for dApp developers to uphold strong, secure data privacy protections for users across the board. By offering strong privacy defaults and more user-centric options, decentralized data solutions will enable individuals to make informed decisions about their data. As we get closer to realizing the full potential of Web 3.0, existing regulations will be improved to better suit the needs of the blockchain industry and better cater to the privacy needs of Internet users. Debra J. Farber is Privacy Strategist at Hedera Hashgraph."
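The index-number approach described in the article can be sketched in a few lines of Python. This is a minimal illustration under assumed names, not Hedera-specific or production code: `OffChainStore` stands in for a conventional operator-owned database, the `ledger` list stands in for an append-only DLT write, and the HMAC-derived pseudonym is one common way to keep raw personal data off-chain.

```python
import hashlib
import hmac
import json
import uuid

class OffChainStore:
    """Hypothetical operator-owned database holding the actual personal data."""
    def __init__(self):
        self._rows = {}

    def insert(self, personal_data: dict) -> str:
        index_number = str(uuid.uuid4())      # index number lives only off-chain
        self._rows[index_number] = personal_data
        return index_number

    def delete(self, index_number: str) -> None:
        self._rows.pop(index_number, None)

def pseudonym_for(index_number: str, secret_salt: bytes) -> str:
    """Derive an opaque on-chain identifier; without the salt it reveals nothing."""
    return hmac.new(secret_salt, index_number.encode(), hashlib.sha256).hexdigest()

# Usage sketch
store = OffChainStore()
ledger = []                                   # stand-in for a DLT append call
salt = b"operator-held secret"

idx = store.insert({"name": "Alice Example", "email": "alice@example.com"})
ledger.append(json.dumps({
    "subject": pseudonym_for(idx, salt),      # pseudonymous reference only
    "claim": "age_over_18",                   # the fact being attested
}))

# Erasure request: delete the off-chain row; the on-chain record now points
# at nothing recoverable, which is the compliance point the article makes.
store.delete(idx)
```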
15168
2021
"Deep tech, no-code tools will help future artists make better visual content | VentureBeat"
"https://venturebeat.com/2021/11/15/deep-tech-no-code-tools-will-help-future-artists-make-better-visual-content"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Deep tech, no-code tools will help future artists make better visual content Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Abigail Hunter-Syed, Partner at LDV Capital. Despite the hype, the “ creator economy ” is not new. It has existed for generations, primarily dealing with physical goods (pottery, jewelry, paintings, books, photos, videos, etc). Over the past two decades, it has become predominantly digital. The digitization of creation has sparked a massive shift in content creation where everyone and their mother are now creating, sharing, and participating online. The vast majority of the content that is created and consumed on the internet is visual content. In our recent Insights report at LDV Capital , we found that by 2027, there will be at least 100 times more visual content in the world. The future creator economy will be powered by visual tech tools that will automate various aspects of content creation and remove the technical skill from digital creation. This article discusses the findings from our recent insights report. Above: ©LDV CAPITAL INSIGHTS 2021 We now live as much online as we do in person and as such, we are participating in and generating more content than ever before. Whether it is text, photos, videos, stories, movies, livestreams, video games, or anything else that is viewed on our screens, it is visual content. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Currently, it takes time, often years, of prior training to produce a single piece of quality and contextually-relevant visual content. Typically, it has also required deep technical expertise in order to produce content at the speed and quantities required today. But new platforms and tools powered by visual technologies are changing the paradigm. Computer vision will aid livestreaming Livestreaming is a video that is recorded and broadcast in real-time over the internet and it is one of the fastest-growing segments in online video, projected to be a $150 billion industry by 2027. Over 60% of individuals aged 18 to 34 watch livestreaming content daily, making it one of the most popular forms of online content. Gaming is the most prominent livestreaming content today but shopping, cooking, and events are growing quickly and will continue on that trajectory. 
The most successful streamers today spend 50 to 60 hours a week livestreaming, and many more hours on production. Visual tech tools that leverage computer vision, sentiment analysis, overlay technology, and more will aid livestream automation. They will enable streamers' feeds to be analyzed in real time to add production elements that improve quality and cut back on the time and technical skills required of streamers today. Synthetic visual content will be ubiquitous A lot of the visual content we view today is already computer-generated imagery (CGI), special effects (VFX), or altered by software (e.g., Photoshop). Whether it's the army of the dead in Game of Thrones or a resized image of Kim Kardashian in a magazine, we see content everywhere that has been digitally designed and altered by human artists. Now, computers and artificial intelligence can generate images and videos of people, things, and places that never physically existed. By 2027, we will view more photorealistic synthetic images and videos than ones that document a real person or place. Some experts in our report even project synthetic visual content will be nearly 95% of the content we view. Synthetic media uses generative adversarial networks (GANs) to write text, make photos, create game scenarios, and more using simple prompts from humans such as “write me 100 words about a penguin on top of a volcano.” GANs are the next Photoshop. Above: L: Remedial drawing created, R: Landscape image built by NVIDIA's GauGAN from the drawing In some circumstances, it will be faster, cheaper, and more inclusive to synthesize objects and people than to hire models, find locations, and do a full photo or video shoot. Moreover, it will enable video to be programmable – as simple as making a slide deck. Synthetic media that leverages GANs is also able to personalize content nearly instantly and, therefore, enable any video to speak directly to the viewer using their name, or write a video game in real time as a person plays. The gaming, marketing, and advertising industries are already experimenting with the first commercial applications of GANs and synthetic media. Artificial intelligence will deliver motion capture to the masses Animated video requires expertise as well as even more time and budget than content starring physical people. Animated video typically refers to 2D and 3D cartoons, motion graphics, computer-generated imagery (CGI), and visual effects (VFX). They will be an increasingly essential part of the content strategy for brands and businesses, deployed across image, video, and livestream channels as a mechanism for diversifying content. Above: ©LDV CAPITAL INSIGHTS 2021 The greatest hurdle to generating animated content today is the skill – and the resulting time and budget – needed to create it. A traditional animator typically creates 4 seconds of content per workday. Motion capture (MoCap) is a tool often used by professional animators in film, TV, and gaming to record a physical pattern of an individual's movements digitally for the purpose of animating them. An example would be something like recording Steph Curry's jump shot for NBA2K. Advances in photogrammetry, deep learning, and artificial intelligence (AI) are enabling camera-based MoCap – with little to no suits, sensors, or hardware. Facial motion capture has already come a long way, as evidenced in some of the incredible photo and video filters out there. 
As capabilities advance to full body capture, it will make MoCap easier, faster, budget-friendly, and more widely accessible for animated visual content creation for video production, virtual character live streaming, gaming, and more. Nearly all content will be gamified Gaming is a massive industry set to hit nearly $236 billion globally by 2027. That will expand and grow as more and more content introduces gamification to encourage interactivity with the content. Gamification is applying typical elements of game playing such as point scoring, interactivity, and competition to encourage engagement. Games with non-gamelike objectives and more diverse storylines are enabling gaming to appeal to wider audiences. With a growth in the number of players, diversity and hours spent playing online games will drive high demand for unique content. AI and cloud infrastructure capabilities play a major role in aiding game developers to build tons of new content. GANs will gamify and personalize content, engaging more players and expanding interactions and community. Games as a Service (GaaS) will become a major business model for gaming. Game platforms are leading the growth of immersive online interactive spaces. People will interact with many digital beings We will have digital identities to produce, consume, and interact with content. In our physical lives, people have many aspects of their personality and represent themselves differently in different circumstances: the boardroom vs the bar, in groups vs alone, etc. Online, the old school AOL screen names have already evolved into profile photos, memojis, avatars, gamertags, and more. Over the next five years, the average person will have at least 3 digital versions of themselves both photorealistic and fantastical to participate online. Above: ©LDV CAPITAL INSIGHTS 2021 Digital identities (or avatars) require visual tech. Some will enable public anonymity of the individual, some will be pseudonyms and others will be directly tied to physical identity. A growing number of them will be powered by AI. These autonomous virtual beings will have personalities, feelings, problem-solving capabilities, and more. Some of them will be programmed to look, sound, act and move like an actual physical person. They will be our assistants, co-workers, doctors, dates and so much more. Interacting with both people-driven avatars and autonomous virtual beings in virtual worlds and with gamified content sets the stage for the rise of the Metaverse. The Metaverse could not exist without visual tech and visual content and I will elaborate on that in a future article. Machine learning will curate, authenticate, and moderate content For creators to continuously produce the volumes of content necessary to compete in the digital world, a variety of tools will be developed to automate the repackaging of content from long-form to short-form, from videos to blogs, or vice versa, social posts, and more. These systems will self-select content and format based on the performance of past publications using automated analytics from computer vision, image recognition, sentiment analysis, and machine learning. They will also inform the next generation of content to be created. In order to then filter through the massive amount of content most effectively, autonomous curation bots powered by smart algorithms will sift through and present to us content personalized to our interests and aspirations. 
Eventually, we'll see personalized synthetic video content replacing text-heavy newsletters, media, and emails. Additionally, the plethora of new content, including visual content, will require ways to authenticate it and attribute it to the creator, both for rights management and for management of deepfakes, fake news, and more. By 2027, most consumer phones will be able to authenticate content via applications. Detecting disturbing and dangerous content is also deeply important, and it is increasingly hard to do given the vast quantities of content published. AI and computer vision algorithms are necessary to automate this process by detecting hate speech, graphic pornography, and violent attacks, because doing so manually in real time is too difficult and not cost-effective. Multi-modal moderation that includes image recognition, as well as voice, text recognition, and more, will be required. Visual content tools are the greatest opportunity in the creator economy The next five years will see individual creators who leverage visual tech tools to create visual content rival professional production teams in the quality and quantity of the content they produce. The greatest business opportunities today in the Creator Economy are the visual tech platforms and tools that will enable those creators to focus on the content and not on the technical creation. Abigail Hunter-Syed is a Partner at LDV Capital investing in people building businesses powered by visual technology. She thrives on collaborating with deep, technical teams that leverage computer vision, machine learning, and AI to analyze visual data. She has more than a ten-year track record of leading strategy, ops, and investments in companies across four continents and rarely says no to soft-serve ice cream."
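The report above attributes much of the coming synthetic content to generative adversarial networks. As a rough illustration of the adversarial setup, here is a toy GAN training step in PyTorch; the data is random noise and the network sizes are arbitrary, so this is a sketch of the mechanism rather than a text-to-image system like GauGAN.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator maps random noise to a fake sample; discriminator scores real vs. fake.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_batch = torch.randn(32, data_dim)      # placeholder for real training data

# Discriminator step: push real samples toward 1, generated samples toward 0.
noise = torch.randn(32, latent_dim)
fake_batch = G(noise).detach()
d_loss = bce(D(real_batch), torch.ones(32, 1)) + bce(D(fake_batch), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to make the discriminator score fakes as real.
noise = torch.randn(32, latent_dim)
g_loss = bce(D(G(noise)), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```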
15169
2022
"Decentralized identity solutions could bring true ownership and security to gaming | VentureBeat"
"https://venturebeat.com/2022/02/19/decentralized-identity-solutions-could-bring-true-ownership-and-security-to-gaming"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Decentralized identity solutions could bring true ownership and security to gaming Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Humpty Calderon, head of community at Ontology. Over the last several years, the gaming industry has witnessed incredible growth. Expanding 23% in 2020 from the year prior, its integration with blockchain has seen play-to-earn and crypto models skyrocket in popularity. Against this backdrop, many gamers and developers are skeptical about the benefits that blockchain can bring to games, leading to recent backlashes against some studios that are integrating NFTs. Tangible use cases that showcase the benefits and solutions that blockchain can bring will be important for demonstrating that this technology can enhance what players already know and love about gaming. Through backending games with blockchain technology including decentralized identity (DID) solutions, players can truly own and get rewarded for their in-game assets in a way that hasn’t been possible before. In doing so, they are tied to a safe, secure online identity that will be transportable across multiple different virtual worlds. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Blockchain allows studios to take the gaming world to a whole new level. It can help ensure that in-game items are traceable and that the value they hold isn’t just locked inside a game. Tying blockchain to in-game items such as weapons, skins, and other collectibles enables them to retain a new, unique form of value, while also ensuring that value stays safe. This means that if a game is eventually deleted or sold to another company, players can always retain their value. By tying a digital record of an asset to a person’s decentralized identity, players can prove that they own the item and can carry that record out of one game if needed. In this way, DID can facilitate interactions between different blockchain-based games to offer a diverse range of benefits for players. DID solutions are blockchain infrastructure that can back-end a game to verify players’ identities, confirm traceability and validity of their in-game assets, and ultimately increase player security. 
As the gaming industry grows, so does the appetite for confirmation of ownership of in-game objects like treasures, trading cards, and other quest items that can be collected, and blockchain can help users to meet these needs. We have witnessed individuality becoming increasingly important for Gen Z, who constitute a large portion of the gaming community. According to reports, 87% of Gen Z play video games, with 65% having spent money on virtual items within a game in 2021. By attaching a record of ownership to an immutable ledger on blockchain, players can truly prove that they own these items, making the process of exchange much clearer and more secure. Players often trade in-game items such as skins, lives, and skills via forums, social channels, and commerce platforms. The reason they choose to do so outside the games themselves is that many games do not enable the exchange of items for real currency, nor do they have the infrastructure to truly verify items. Easy-to-navigate, quick KYC checks using DID are one option that can be used in games to enable fair exchange and protect players from scams and other malpractice. This means that players who are trading can confirm their identity, while remaining aware of the legitimacy of the person they are dealing with. What's more, if players decide to take themselves out of the gaming economy to trade via social channels or forums, they can still request DID verification to prove the legitimacy of who they are trading with, thus ensuring that they are safe both in and out of the game. Web3 will eventually enable players to seamlessly traverse between different games and virtual worlds that make up the metaverse. In practice, what this could mean is that a player would be able to take their avatar from a game like Call of Duty and bring it into World of Warcraft to complete a quest. There, they might buy a sword and later sell it to a player in Decentraland, where they could use the money to buy a virtual plot of land. The possibilities are endless, but in order for this to become a reality, players need to have an interoperable digital identity that will allow them to traverse many virtual worlds with a consistent identity, including their wallet, assets, and personal information. Decentralized identity enables this. DID solutions also mean that players own and have full control over their own personal data, enabling them to log in securely to different systems without exposing their private information. This level of security is a crucial element for gaming, ensuring that every transaction within a game is verifiable, securely recorded and stored, and, most importantly, cannot be altered. Players pour countless hours into building their gaming profiles, honing their skills, and collecting items. DID can help gamers to secure these gaming assets while also protecting their privacy and preserving their valuable time. Solutions are designed to allow players to create a stronger version of their online identity, and with complete control, the DID holder can decide what information they choose to share, with a private key that grants access to the verified user only. With the global gaming market set to reach $250 billion by 2025, the appetite from players truly looking to own their in-game assets as well as to hold a secure, interoperable identity across multiple virtual worlds will only increase. 
Decentralized identity can build an entirely new ecosystem of true ownership within video games, while adding security, protecting private information, and building trust. This would alter the entire industry for the better, creating a more transparent arena where players can rest assured that their assets and their real and virtual identities are safe. If utilized correctly, decentralization will shape the future of gaming. Humpty Calderon is head of community at Ontology."
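The ownership checks described in this article can be illustrated with a generic public-key signature, which is the primitive most DID methods build on. The sketch below is not Ontology's implementation or any registered DID method; the did:example identifier, game, and asset names are made up, and it uses the Python cryptography package's Ed25519 support.

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# The player controls a key pair; a DID-like identifier is derived from the public key.
player_key = ed25519.Ed25519PrivateKey.generate()
player_pub = player_key.public_key()
raw_pub = player_pub.public_bytes(Encoding.Raw, PublicFormat.Raw)
player_did = "did:example:" + raw_pub.hex()

# An in-game asset record is bound to that identifier and signed by the player.
asset_record = json.dumps({
    "asset_id": "sword-of-dawn-0042",   # hypothetical item
    "game": "ExampleQuest",             # hypothetical game
    "owner": player_did,
}, sort_keys=True).encode()
signature = player_key.sign(asset_record)

# A marketplace or another game can verify ownership without a central account system.
try:
    player_pub.verify(signature, asset_record)
    print("Ownership proof valid for", player_did)
except InvalidSignature:
    print("Ownership proof rejected")
```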
15170
2021
"Data science in a post-COVID world | VentureBeat"
"https://venturebeat.com/2021/05/01/data-science-in-a-post-covid-world"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Data science in a post-COVID world Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. I am often asked about the state of data science and where we sit now from a maturity perspective. The answer is pretty interesting, especially now that it’s been more than a year since COVID-19 rendered most data science models useless — at least for a time. COVID forced companies to make a full model jump to match the dramatic shift in daily life. Models had to be rapidly retrained and redeployed to try to make sense of a world that changed overnight. Many organizations ran into a wall, but others were able to create new data science processes that could be put into production much faster and easier than what they had before. From this perspective, data science processes have become more flexible. Now there is a new challenge: post-pandemic life. People all over the world believe an end to the pandemic is in sight. But it is highly unlikely we will all just magically snap back to our pre-pandemic behaviors and routines. Instead, we’ll have a transition period that will require a long, slow shift to establish a baseline or new set of norms. During this transition, our data models will require near-constant monitoring as opposed to the wholescale jump COVID prompted. Data scientists have never encountered anything like what we should expect in the coming months. Tipping the balance If asked what we most miss about life before the pandemic, many of us will say things like traveling, going out to dinner, maybe going shopping. There is tremendous pent-up demand for all that was lost. There’s a large group of people who have not been adversely affected financially by the pandemic. Because they haven’t been able to pursue their usual interests, they probably have quite a bit of cash at their disposal. Yet the current data science models that track spending of disposable income are probably not ready for a surge that will likely surpass pre-pandemic spending levels. Pricing models are designed to optimize how much people are willing to pay for certain types of trips, hotel nights, meals, goods, etc. Airlines provide a great example. Prior to COVID-19, airline price prediction engines assumed all sorts of optimizations. They had seasonality built in as well as specific periods like holiday travel or spring break that drove prices even higher. They built various fare classes and more. 
They implemented very sophisticated, often manually crafted optimization schemes that were quite accurate until the pandemic blew them up. But for life after COVID, airlines have to look beyond the usual categories to accommodate the intense consumer demand to get out and about. Instead of going back to their old models, they should be asking questions like “Can I get more money for certain types of trips and still sell out the airplane?” If airlines consistently run models to answer these and other questions, we’ll see an increase in prices for certain itineraries. This will go on for a period of time before we see consumers gradually begin to self regulate their spending again. At a certain point, people won’t have any piled up money left over anymore. What we really need are models that identify when such shifts happen and that adapt continuously. On the flip side, there is another segment of the population that experienced (and continues to experience) economic difficulties as a result of the pandemic. They can’t go wild with their spending because they have nothing or little left to spend. Maybe they still need to find jobs. This also skews economics, as millions of people are attempting to climb back up to the standard of where they were pre-COVID. People who previously would have played a sizable role in economic models are effectively removed from the equation for the time being. Model drift COVID was one big bang where things changed. That was easy to detect, but this strange period we will now be navigating — toward some kind of new normal — will be much harder to interpret. It’s a case of model drift, where reality shifts slowly. If organizations simply start deploying their pre-COVID models again, or if they stick with what they developed during the pandemic, their models will fail to give them proper answers. For example, many employees are ready to return to the office, but they may still opt to work from home a few days a week. This seemingly small decision affects everything from traffic patterns (fewer cars on the road at peak periods) to water and electric usage (people take showers at different times and use more electricity to power their home offices). Then there are restaurant and grocery sales — with fewer employees in the office, catered lunches and meals out with colleagues drop from pre-pandemic levels, while grocery sales must account for lunch at home. And here we’re only looking at the effects of a single behavior (transitioning to partial work-from-home). Think about the ripple effects of changes to all the other behaviors that emerged during the pandemic. The slow march to normal In establishing an environment to contend with this unprecedented challenge, organizations need to unite entire data science teams, not just the machine learning engineers. Data science is not just about training a new AI or machine learning model; it’s also about looking at different types of data as well as new data sources. And it means inviting business leaders and other collaborators into the process. Each participant plays a role because of all of the mechanics involved. These teams should look at patterns that are emerging in geographies that have opened up again post-COVID. Is everything running at full capacity? How are things going? There is quite a bit of data that can be leveraged, but it comes in pieces. 
If we combine these learnings with what we saw prior to and during COVID to retrain our models, as well as ask new questions, then we're looking at highly valuable data science with mixed models that account for swings in practices and activities. It is imperative that teams persistently monitor models — what they do, how they perform — to identify when they become out of whack with reality. This goes way beyond classic A/B testing and also involves challenger models and mixing models from pre-COVID with newer ones. Try out other hypotheses and add new assumptions. Organizations might be surprised to see what suddenly works much better than before — and then to see those model assumptions eventually fail again. Organizations should prepare themselves by putting in place a flexible data science function that can continuously build, update, and deploy models to represent an ever-evolving reality. Michael Berthold is CEO and co-founder at KNIME, an open source data analytics company. He has more than 25 years of experience in data science, working in academia, most recently as a full professor at Konstanz University (Germany) and previously at University of California, Berkeley and Carnegie Mellon, and in industry at Intel's Neural Network Group, Utopy, and Tripos. Michael has published extensively on data analytics, machine learning, and artificial intelligence. Follow Michael on Twitter, LinkedIn, and the KNIME blog."
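One concrete form of the persistent monitoring the author calls for is a scheduled distribution check on model inputs. Below is a minimal sketch using the population stability index; the data, the feature, and the thresholds (0.1 and 0.25 are common rules of thumb) are illustrative assumptions, not values from the article.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    """Compare a feature's recent distribution against its training-time baseline."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf              # catch values outside the old range
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    new_pct = np.histogram(recent, bins=edges)[0] / len(recent)
    base_pct = np.clip(base_pct, 1e-6, None)           # avoid log(0) on empty bins
    new_pct = np.clip(new_pct, 1e-6, None)
    return float(np.sum((new_pct - base_pct) * np.log(new_pct / base_pct)))

# Usage sketch: pre-pandemic training data vs. the most recent scoring window.
rng = np.random.default_rng(0)
baseline_spend = rng.lognormal(mean=3.0, sigma=0.5, size=10_000)
recent_spend = rng.lognormal(mean=3.4, sigma=0.7, size=2_000)   # demand surge

psi = population_stability_index(baseline_spend, recent_spend)
if psi > 0.25:
    print(f"PSI={psi:.2f}: significant drift - retrain or promote a challenger model")
elif psi > 0.10:
    print(f"PSI={psi:.2f}: moderate drift - monitor closely")
else:
    print(f"PSI={psi:.2f}: stable")
```

Run on a schedule per feature and per segment, a check like this flags the slow "model drift" the article describes long before aggregate accuracy metrics collapse.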
15171
2021
"No-code AI analytics may soon automate data science jobs | VentureBeat"
"https://venturebeat.com/2021/10/12/no-code-ai-analytics-may-soon-automate-data-science-jobs"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Exclusive No-code AI analytics may soon automate data science jobs Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. SparkBeyond , a company that helps analysts use AI to generate new answers to business problems without requiring any code, today has released its product SparkBeyond Discovery. The company aims to automate the job of a data scientist. Typically, a data scientist looking to solve a problem may be able to generate and test 10 or more hypotheses a day. With SparkBeyond’s machine, millions of hypotheses can be generated per minute from the data it leverages from the open web and a client’s internal data, the company says. Additionally, SparkBeyond explains its findings in natural language, so a no-code analyst can easily understand it. How companies can benefit from AI analytics data automation The product is the culmination of work that started in 2013 when the company had the idea to build a machine to access the web and GitHub to find code and other building blocks to formulate new ideas for finding solutions to problems. To use SparkBeyond Discovery, all a client company needs to do is specify its domain and what exactly it wants to optimize. SparkBeyond has offered a test version of the product, which it began developing two years ago. The company says its customers include McKinsey, Baker McKenzie, Hitachi, PepsiCo, Santander, Zabka, Swisscard, SEBx, Investa, Oxford, and ABInBev. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! One of SparkBeyond’s client success stories involved a retailer that wanted to know where to open 5,000 new stores, with the goal of maximizing profit. As SparkBeyond CEO Sagie Davidovich explains, SparkBeyond took the point-of-sale data from the retailer’s existing stores to find which were most profitable. It correlated the profitability with data from a range of external sources, including weather information, maps, and geo-coordinates. Then SparkBeyond went on to test a range of hypotheses, including theories such as if three consecutive rainy days in proximity to competing stories correlated with profitability. In the end, proximity to laundromats correlated the most strongly to profitability, Davidovich explains. It turns out people have time to shop while they wait for their laundry, something that may seem obvious in retrospect, but not at all obvious at the outset. 
The company says its auto-generation of predictive models for analysts puts it in a unique position in the marketplace of AI services. Most AI tools aim to help the data scientist with the modeling and testing process once the data scientist has already come up with a hypothesis to test. Competitors in the data automation space Several competitors, including DataRobot and H2O, offer automated AI and ML modeling. But SparkBeyond's VP and general manager, Ed Janvrin, says this area of auto-ML feels increasingly commoditized. SparkBeyond also offers an auto-ML module, he says. There are also several competitors, including Dataiku and Alteryx, that help with no-code data preparation. But those companies are not offering pure, automated feature discovery, says Janvrin. SparkBeyond is working on its own data preparation features, which will allow analysts to join most data types — such as time-series, text analysis, or geospatial data — easily without writing code. Since 2013, SparkBeyond has quietly raised $60 million in total backing from investors, which it did not previously announce. Investors include Israeli venture firm Aleph, Lord David Alliance, and others. “The demand for data skills has reached virtually every industry,” said Davidovich in a statement. “What was once considered a domain for expert data scientists at large enterprise organizations is now in urgent demand across companies of all sizes.” “Our new release is powerful yet intuitive enough that data professionals — including analysts at medium-sized and smaller organizations — can now harness the power of AI to quickly join multiple datasets, generate millions of hypotheses, and create predictive models, unearthing unexpected drivers for better decision-making.”"
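SparkBeyond's engine is proprietary, but the retailer example above can be caricatured as a brute-force scan of candidate features against a target. The pandas sketch below is only that caricature, with made-up store data and a deliberately planted laundromat signal; it is not the company's actual method.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_stores = 500

# Hypothetical per-store data: the target plus candidate features joined from external sources.
stores = pd.DataFrame({
    "profit": rng.normal(100, 20, n_stores),
    "rainy_days_nearby": rng.poisson(6, n_stores),
    "distance_to_laundromat_km": rng.exponential(1.5, n_stores),
    "competitor_count_1km": rng.poisson(3, n_stores),
})
# Plant a relationship so the scan has something to find.
stores["profit"] += 15 / (1 + stores["distance_to_laundromat_km"])

# Brute-force "hypothesis" scan: rank every candidate feature by |correlation| with the target.
candidates = [c for c in stores.columns if c != "profit"]
ranked = (
    stores[candidates]
    .corrwith(stores["profit"])
    .abs()
    .sort_values(ascending=False)
)
print(ranked)   # distance_to_laundromat_km should rank first
```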
15172
2021
"How to build a data science and machine learning roadmap in 2022 | VentureBeat"
"https://venturebeat.com/2022/01/10/how-to-build-a-data-science-and-machine-learning-roadmap-in-2022"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How to build a data science and machine learning roadmap in 2022 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Closing the gap between their organization’s choice to invest in a data science and machine learning (DSML) strategy and the needs that business units have for results, will dominate data and analytics leaders’ priorities in 2022. Despite the growing enthusiasm for DSML’s core technologies, getting results from its strategies is elusive for enterprises. Market forecasts reflect enterprises’ early optimism for DSML. IDC estimates worldwide revenues for the artificial intelligence (AI) market, including software, hardware, and services will grow 15.2% year over year in 2021 to $341.8 billion and accelerate further in 2022 with 18.8% growth, reaching $500 billion by 2024. In addition, 56% of global enterprise executives said their adoption of DSML and AI is growing, up from 50% in 2020, according to McKinsey. Gartner notes that organizations undertaking DSML initiatives rely on low-cost, open-source, and public cloud service provider offerings to build their knowledge, expertise, and test use cases. The challenge remains of how best to productize models to be deployed and managed at scale. DSML is delivering uneven value in enterprises today Data scientist teams in financial services, health care, and manufacturing tell VentureBeat their enterprise’s DSML strategies are the most effective when they anticipate and plan for uneven initial results by business unit. The teams also say producing models at scale using MLOps is fundamentally different from producing mainstream internal apps with DevOps. They add that the more complex the operating model of a business unit, the steeper the MLOps learning curve. DSML’s contributions to business units vary by the availability of reliable data and how clearly defined problem statements are. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! O’Reilly found that “enterprise AI won’t have matured until development and operations groups can engage in practices like continuous deployment until results are repeatable (at least in a statistical sense), and until ethics, safety, privacy, and security are primary rather than secondary concerns. Kaggle indicated that 80.3% of respondents use linear or logistic regression algorithms, followed by decision trees and random forests (74.1%) and gradient boosting machines (59.5%). 
Enterprises are just scratching the surface of DSML’s potential, with adoption slowed by several factors that need to improve in 2022. How and where DSML will improve in 2022 Getting the foundational elements of a DSML platform right accelerates the accuracy, speed, and quality of decision-making. As the latest Gartner Magic Quadrant shows, DSML platform providers are making strides in providing more flexible, scalable infrastructures that have governance designed to support multiple personas’ needs at scale combined with extensibility. Enterprises that McKinsey considers to be “high performers” use cloud infrastructure much more than their peers do, with 64% of their AI workloads running on public or hybrid cloud, compared with 44% of their peers. In addition, McKinsey notes that this group relies on public cloud infrastructure to access a wider range of AI capabilities and techniques. DSML strategies are going to see growing adoption across organizations in 2022, and the following are areas where organizations and platform providers can work together to improve outcomes by having these areas covered on their roadmaps for 2022: Adaptive ML shows potential for improving cybersecurity, remote site security, quality management in manufacturing, and fine-tuning industrial robotics systems. Look for Adaptive ML to find increased adoption across a spectrum of use cases defined by how rapidly changing their contextual data, conditions, and actions are. For example, combining cyber risk and remote site risk assessments in an adaptive ML model is a use case that utility companies are using in production today. Adaptive ML’s greatest gains could come from manufacturing, where combining telemetry data from visual IoT sensors with adaptive ML-based applications can identify defective products immediately and pull them from the production line. Saving customers the hassle of returning defective products can increase customer loyalty while reducing costs. Given the chronic labor shortage manufacturers face, combining Adaptive ML techniques with robotics can help manufacturers still meet customers’ needs for products consistently. Adaptive ML is also the basis of autonomous self-driving vehicle systems and collaborative, smart robots that quickly learn how to complete simple tasks together through iteration. DSML platform vendors known for their expertise include Cogitai, Google, Guavus, IBM, Microsoft, SAS, Tazi, and others. Collaborative workflow support in DSML platforms becomes table stakes for competing in the market. Data scientists tell VentureBeat that workarounds to DSML platforms not designed in collaboration workflows to flex and adapt to their needs can cost weeks of model development time. Collaboration tools and workflows need to get beyond simple question-and-answer forums and provide more effective cross-modal data and code repositories that each collaborator can securely use across an enterprise. There also needs to be support for data and model visualization and the option for exporting models. The must-haves for collaboration to meet data scientist requirements include communication and code sharing across each step in the modeling process, data lineage and model tracking, and version control and model lineage analysis. DSML platform vendors offering collaborative workflow support include Domino, Dataiku, Google, Microsoft, SAS, TIBCO, RapidMiner, and others. 
MLOps will have a breakout year as organizations gain more experience scaling models for deployment faster while tracking business outcomes for greater results. Reducing the cycle times for creating and launching new models is one of the key metrics by which DSML projects are evaluated in enterprises today. Every DSML platform vendor offers its version of MLOps support. Enterprises considering a DSML strategy need to review how each platform of interest handles model creation, management, maintenance, model and code reuse, updates, and governance. Look for every DSML platform vendor to continue fine-tuning how they modify MLOps to provide greater model scalability and security in 2022. DSML platform vendors will rely on MLOps differentiators, including model taxonomies, version control, model maintenance, monitoring, and code and model reuse. The best DSML platforms also ensure their MLOps workflows have the option of tying back to measuring business outcomes using metrics and key performance indicators (KPIs) relevant to financial decision-makers and line-of-business owners. Privacy concerns will force every organization creating sensor-connected products and the services supporting them to use synthetic data to build, test, and refine models. The current and next generation of connected devices with embedded sensors to capture biometric data are among the most challenging machine learning models to create today. Startups creating AI-based worker safety systems are finding it necessary to create and fine-tune synthetic data so they can predict, for example, when, where, and how accidents can potentially occur. The Wall Street Journal provides a fascinating glimpse into how effective synthetic data is and how pervasive it's becoming in AI and ML model development. The article explains how American Express improves its fraud prediction models using generative adversarial networks, a much-used technique for creating synthetic data of randomized fraud patterns. Autonomous vehicle companies are also relying on synthetic data to train their models, including Aurora, Cruise, and Waymo, all of which use synthetic data to train the perception systems that guide their cars. DSML platform providers need to automate the entire ML workflow at scale. Providers have multiple generations of model development tools, and their experience shows in the maturity of the workflows they can support. The goal for 2022 is to improve model deployment and management and to integrate zero trust into MLOps workflows while retaining the flexibility of customizing workflows. AutoML will see greater adoption as enterprises look to accelerate their ML workflows, with data scientists skilled in its techniques in high demand. Automating ML workflows will deliver greater reusability of ML code components, trim cycle times for model testing and validation, and increase the productivity of data science teams in the process. Transfer learning will see rapid adoption across enterprises with DSML strategies operating at scale and in production today. The essence of transfer learning is reusing existing trained machine learning models to get a head start on new model development. It's particularly useful for data science teams working with supervised machine learning algorithms that require labeled data sets to deliver accurate analyses. Instead of starting over on a new supervised machine learning model, data scientists can use transfer learning to customize models for a given business goal quickly. 
In addition, transfer learning modules are becoming more relevant across process-centric industries that rely on computer vision because of the scale they bring to working with labeled data. Leading DSML platform providers that offer transfer learning include Alteryx, Google, IBM, SAS, TIBCO, and others. Organizations need to focus on use cases and metrics first and realize that exceptional model accuracy may not deliver business value. One of the most common challenges when building supervised machine learning models, especially when there is an abundance of telemetry data from sensors and endpoints, is the tendency to keep tweaking models for one more degree of accuracy. Telemetry data from manufacturing shop floors can be sporadic and varies by cycle count, frequency, and the speed of a given machine, among many other factors. It's easy to get caught up in what real-time telemetry data from the shop floor says about the machines, but pulling back to see what the data says about shop-floor productivity and its impact on margins needs to stay in focus as the primary goal. DSML strategies must be grounded in business outcomes Organizations pursuing DSML strategies need to go into 2022 with a clear roadmap of what they want to accomplish from a business case perspective first, anchored in measurable customer outcomes. The speed and variety of innovations that DSML platform providers plan to announce in the next twelve months will revolve around five key areas. First, democratizing ML model creation will make model building and fine-tuning available to more business professionals. Second, DSML platforms' multi-persona support will improve in the next twelve months, further supporting greater adoption. Third, automating ML workflows end-to-end will help accelerate MLOps cycles in 2022, driving the fourth area: improved line-of-business reporting tied to model performance. Fifth, enterprises want much faster time-to-value from their DSML investments, and DSML platform vendors will need to quantify their value with greater precision and real-time insights to hold onto customers and attract new ones. "
15,173
2,022
"Monetizing and protecting an AI-powered virtual identity in today’s world | VentureBeat"
"https://venturebeat.com/2022/01/29/monetizing-and-protecting-an-ai-powered-virtual-identity-in-todays-world"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Monetizing and protecting an AI-powered virtual identity in today’s world Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Taesu Kim, CEO of Neosapience. An AI revolution is going on in the area of content creation. Voice technologies in particular have made tremendous leaps forward in the past few years. While this could lead to a myriad of new content experiences, not to mention dramatically reduced costs associated with content development and localization, there are ample concerns about what the future holds. Imagine if you are known for your distinctive voice and rely on it for your livelihood — actors like James Earl Jones, Christopher Walken, Samuel L. Jackson, Fran Drescher, and Kathleen Turner, or musicians such as Adele, Billie Eilish, Snoop Dogg, or Harry Styles. If a machine were trained to replicate them, would they lose all artistic control? Would they suddenly be providing voice-overs for a YouTube channel in Russia? And, practically speaking, would they miss out on potential royalties? What about the person who’s looking for a break, or maybe just a way to make some extra cash by licensing their voice or likeness digitally? A voice is more than a compilation of sounds There is something tremendously exciting that happens when you can type a series of words, click a button, and hear your favorite superstar read them back, sounding like an actual human with natural rises and falls in their speech, changes in pitch, and intonation. This is not something robotic, as we’ve become accustomed to with characters created from AI. Instead, the character you build comes to life with all of its layered dimensions. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! This depth is what had been lacking from virtual actors and virtual identities previously; the experience was, quite frankly, underwhelming. But modern AI-based voice technology can reveal the construction of an identity whose intricate characteristics come out through the sound of a voice. The same can be true of AI-based video actors that move, gesture, and use facial expressions identical to those of humans, providing the nuances inherent in a being without which characters fall flat. 
As technology improves to the point that it can acquire true knowledge of each of the characteristics of a person's surface identity — such as their looks, sounds, mannerisms, tics, and anything else that makes up what you see and hear from another, excluding their thoughts and feelings — that identity becomes an actor that can be deployed not only by major studios in big-budget films or album releases: anyone can select that virtual actor using a service like Typecast and put it to work. The key here is that it is an actor, and even novice actors get paid. Understandably, there is some fear about how such likenesses can be co-opted and used without licensing, consent, or payment. I would liken this to the issues we've seen as any new medium has come onto the scene. For example, digital music and video content that were once thought to rob artists and studios of revenue have become thriving businesses and new money-makers that are indispensable to today's bottom line. Solutions were developed that led to the advancement of the technology, and the same holds true again. Preservation of your digital and virtual identity Each human voice — as well as each face — has its own unique footprint, composed of tens of thousands of characteristics. This makes it very, very difficult to replicate. In a world of deepfakes, misrepresentation, and identity theft, a number of technologies can be put to work to prevent the misuse of AI speech synthesis or video synthesis. Voice identity, or speaker search, is one example. Researchers and data scientists can identify and break down the characteristics of a specific speaker's voice. In doing so, they can determine whose unique voice was used in a video or audio snippet, or whether it was a combination of many voices blended together and converted through text-to-speech technology. Ultimately, such identification capabilities can be applied in a Shazam-like app. With this technology, AI-powered voice and video companies can detect if their text-to-speech technology has been misused. Content can then be flagged and removed. Think of it as a new type of copyright monitoring system. Companies including YouTube and Facebook are already developing such technologies for music and video clips, and it won't be long until they become the norm. Deepfake detection is another area where significant research is being conducted. Technology is being developed to distinguish whether a face in a video is an actual human or one that has been digitally manipulated. For instance, one research team has created a system that uses a convolutional neural network (CNN) to extract features at a frame-by-frame level, then trains a recurrent neural network (RNN) on those feature sequences to classify videos that have been digitally manipulated — and it can do this rapidly and at scale. These solutions may make some people feel uneasy, as many are still in the works, but let's put these fears to rest. Detection technologies are being created proactively, with an eye toward future need. In the interim, we have to consider where we are right now: synthesized audio and video must be very sophisticated to convincingly clone a person and deceive. An AI system designed to produce voice and/or video can only learn from a clean dataset. Today, this means the data can pretty much only come from filming or recording done in a studio. It is remarkably difficult to have data recorded in a professional studio without the consent of the data subject; studios are not willing to risk a lawsuit.
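The frame-level CNN plus sequence-level RNN pipeline described above can be sketched roughly as follows. This is a simplified illustration under assumed choices (a ResNet-18 backbone, 16-frame clips, a single LSTM layer) rather than the cited research team's exact architecture.

```python
# Sketch of the deepfake-detection idea described above: a pretrained CNN
# embeds each video frame, and an LSTM classifies the frame sequence as real
# or manipulated. Backbone, clip length, and sizes are assumptions.
import torch
import torch.nn as nn
from torchvision import models

class DeepfakeDetector(nn.Module):
    def __init__(self, hidden_size=256):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        backbone.fc = nn.Identity()            # keep 512-d frame embeddings
        self.cnn = backbone
        self.rnn = nn.LSTM(512, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # logit: manipulated vs. real

    def forward(self, frames):                 # frames: (batch, time, 3, 224, 224)
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, (h_n, _) = self.rnn(feats)
        return self.head(h_n[-1])              # one score per clip

clips = torch.randn(2, 16, 3, 224, 224)        # two dummy 16-frame clips
scores = DeepfakeDetector()(clips)             # shape: (2, 1)
```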
Data crawled from YouTube or other sites, by contrast with clean studio recordings, provides such a noisy dataset that it is only capable of producing low-quality audio or video, which makes it simple to spot and remove illegitimate content. This automatically weeds out the suspects most likely to manipulate and misuse digital and virtual identities. While it will eventually be possible to create high-quality audio and video from noisy datasets, detection technologies will be ready well in advance, providing ample defense. Virtual AI actors are still part of a nascent space, but one that is accelerating quickly. New revenue streams and content development possibilities will continue to push virtual characters forward. This, in turn, will provide ample motivation to apply sophisticated detection and a new breed of digital rights management tools to govern the use of AI-powered virtual identities. Taesu Kim is the CEO of Neosapience. "
15,174
2,022
"How NFTs could redefine the future of the music industry | VentureBeat"
"https://venturebeat.com/2022/02/19/how-nfts-could-redefine-the-future-of-the-music-industry"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community How NFTs could redefine the future of the music industry Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Jake Fraser, head of business development at Mogul Productions. NFT sales skyrocketed in 2021. From a transaction volume of merely USD $40.69 million in 2018, NFT trading volume surged over $44.2 billion in 2021 and is continually shattering records and reaching new heights. The NFT segment is said to reach a market cap of $80 billion by 2025. In December 2021 alone, NFT transactions worth $4 billion were recorded. Despite the speculation and skepticism around NFTs’ seemingly volatile and unregulated nature , one thing is certain — they are here to stay. NFTs have gained traction particularly among millennials and Gen-Z, and have emerged as a way to connect with their favorite artists. While art NFTs and collectibles for the metaverse have taken off, NFTs are said to transform the music industry as well. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The dark side of the music industry Music is a universal experience. By the end of 2020, the global music industry generated total revenue of $21.6 billion , and it was the sixth consecutive year of growth within the industry at 7.4% CAGR increase from 2019. Despite this seemingly successful track record of the industry, it is plagued with a bunch of issues. Musicians find it extremely difficult to sustain themselves in the industry and make a living through their music. It is no secret that artists are not compensated fairly for their talent and efforts. This arrangement is usually fortified by the record deal and complex legal contracts which they are made to sign at the time of onboarding by the record label. Artists who have made it big have often come out and spoken about this skewed power dynamic between the record label and the artist. Take, for example, Kanye West, who went as far as calling record deals “ modern-day slavery ” with all rights of the artist being signed off to the label for the lure of upfront financing, strategic advice, and marketing. Here’s what Akon had to say about the exploitation of artists: “Throughout my career, I have always believed that artists never really got their fair share of the profits for the work they produced and people listen to.” “If you don’t own your masters, your master owns you,” said Prince for Rolling Stone in 1996. 
According to reports , only the top 1% of artists receive 90% of all streams, and only about the top 0.8% of the seemingly famous and top artists earn an average of USD 50,000 per year from streaming. This is mainly due to the overall revenue being split between artists and record labels, agents, lawyers, distributors, and other “stakeholders” in the artists’ music. Moreover, with the record label holding full rights over the “masters” or the original recording of the song, the creator has absolutely no control over where and when it is played. This puts serious restrictions on the creative freedom of the artist as well. This has led to a quest for the entire model of music industry record labels to be overturned. While social media platforms such as TikTok, YouTube, and Instagram have somewhat given the power back to creators as a way of promoting and marketing their music, there are still long strides left to be taken for artists to properly monetize them through a viable tool. The NFT revolution: Changing tides for the music industry When COVID-19 regulations caused a complete halt on live sports, concerts, and entertainment, blockchain-based non-fungible tokens emerged as a way to connect fans worldwide with companies, teams, and creators that they love. The NFT revolution kicked off as profile picture collections (PFP) that buyers could display on their social media handles to denote that they are part of the particular NFT Community. These PFPs dominated the narrative with celebrities and other NFT enthusiasts buying famous digital collectibles such as Bored Apes, Cool Cats, and CryptoPunks. But what is it about NFTs that makes them so disruptive and novel? For starters, NFTs are non-fungible (or immutable) digital files on the blockchain that are distinct and irreplaceable. Housed on blockchains such as Ethereum, Solana, and Binance Smart Chain, NFTs are rare, verifiable, and valuable. However, most major NFT projects such as Decentraland and Axie Infinity are valued not for their art, but for their utility that comes from the underlying smart contract and use cases. For instance, on Decentraland, digital land can be bought in-game as NFTs which can then be used to host events, rented out, etc. For the media and entertainment industry, this means, NFTs are offering artists and creators a new medium to present their work, market their work on the blockchain through NFT marketplaces, and engage their fan community. NFTs have the potential to establish scarcity of digital assets and thus let creators set their rates for the creations, as well as control over the secondary market for them. Therefore, they democratize access to new marketplaces for creators globally. Artists and fans take the pie NFTs also can restore power to creators to control the supply chain and rights associated with the masters and related collectibles. NFTs bring scarcity into music and gives musicians complete control over the subsequent ways in which their work is distributed, and the rights associated with it. Therefore, NFTs present opportunities for musicians to engage with their audience on a more seminal and granular level with authenticity and establish communities around them, as well as give them complete autonomy over their work. Artists keep all rights to their music even when their NFTs are sold on a secondary market, and also earn a royalty that they chose to set for it on every transaction of the NFT. This makes for a global marketplace for music NFTs. 
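Purely as an illustration of the royalty mechanics described above, the snippet below shows how a creator-set royalty might be applied to every secondary-market sale. The 10% royalty and 2.5% marketplace fee are made-up numbers, and real marketplaces encode this logic in on-chain smart contracts rather than in application code like this.

```python
# Illustrative arithmetic only: a creator-set royalty applied to every resale
# of a music NFT. The percentages are invented for the example.
def settle_resale(sale_price, royalty_rate=0.10, marketplace_fee=0.025):
    royalty_to_artist = sale_price * royalty_rate
    fee_to_marketplace = sale_price * marketplace_fee
    proceeds_to_seller = sale_price - royalty_to_artist - fee_to_marketplace
    return {"artist": royalty_to_artist,
            "marketplace": fee_to_marketplace,
            "seller": proceeds_to_seller}

# A 1 ETH resale sends 0.1 ETH to the artist, no matter how many times the
# NFT has already changed hands.
print(settle_resale(1.0))
```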
In January 2022, BTS, the popular K-pop boy band, in conjunction with Dunamu, was set to launch its own NFT collection in the form of photocards, digital versions of collectible cards featuring photographs of the band members. Even the popular singer Akon seems to be ditching record labels and dropping his next album as an NFT to monetize it from day one! On the other hand, fans who buy these collectibles or creations have full transparency into the authenticity and origin of the purchases they make. In this sense, NFTs allow anyone to buy property rights to the art or music while letting artists verify their work outside the confines of the legacy music industry. Further, NFTs allow for new ways of fundraising by letting the audience partake in the music process. Fans become investors in the project for several reasons: many may buy NFTs solely as collectibles, others for speculative reasons, some to HODL, some to earn royalties, and others to trade them on secondary NFT marketplaces for a profit. This can help artists get upfront funding without waiving rights to the master and suffering the considerable revenue cuts that artists like TLC, Kanye West, and Taylor Swift have. Further, it lets them take a fair share of the profits of their success. Fans will be able to invest in Nas' music, for example, by purchasing shares in the royalties made from the streaming of two of his songs. This investment can be made by purchasing Royal's NFTs in variants of "Gold," "Platinum," and "Diamond" digital tokens for each song. The intriguing part is that token owners will receive a part of the royalties each time the tune is streamed, in perpetuity! What lies ahead for NFTs and the music industry Tokenization of assets allows a wide pool of people to own assets on the blockchain. In this sense, even ordinary fans are immersed directly in the value and ethos of the artist or creator they support, without an intermediary like a streaming platform or a record label. Thus, this enables fan communities to participate in their favorite creators' growth like never before. Further, music NFTs can carry additional value beyond the music itself. Autographed physical copies of the collectibles, music royalties in perpetuity, backstage passes, exclusive remixes, and private parties are just the tip of the iceberg when it comes to rewarding the fan community that invests in a musician's creations. The possibilities are truly endless for artists to connect with their audience and investors. Several artists are testing the waters of NFTs, such as creative music producer 3lau, who sold 33 NFTs on the third anniversary of his album Ultraviolet for over $11.7 million, making it one of the largest music NFT deals. 3lau has even been contemplating allowing the NFT owners to collaborate with him on the next song and perhaps even be featured in it! Moreover, there are recent instances of artists contemplating turning paparazzi photos into NFTs so they can monetize them in the form of royalties. These applications of NFTs have the potential to transform the music industry into a level playing field for all artists. Jake Fraser is the head of business development at Mogul Productions.
"
15,175
2,021
"Amdocs: The great resignation: Boomers and Gen X stay put, while millennials and Gen Z walk away | VentureBeat"
"https://venturebeat.com/2021/09/19/the-great-resignation-boomers-and-gen-x-stay-put-while-millennials-and-gen-z-walk-away"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The great resignation: Boomers and Gen X stay put, while millennials and Gen Z walk away Share on Facebook Share on X Share on LinkedIn The Great Resignation Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. 19% of surveyed workers left or considered leaving a job in the last year. However, there is a generational divide; 27% of millennials and 31% Gen Z have stated this, compared to just 13% of boomers, according to new research from Amdocs. Above: There is a generational divide among people who leave their jobs. Tech workers are even more likely to flee, with 33% having left or considered leaving over the past year. More than 35% had colleagues leave, which negatively affected their daily workloads. With the Great Resignation also comes the question of why. Nearly two-thirds (64%) would leave their job due to a lack of growth (38%), and training and development (26%) opportunities. 56% want employers to offer more training and career development opportunities in 2022. This area is critical for new talent, too. 90% said when searching for a new job, it’s very important a company offers a strong training and upskilling program. This was even higher for tech-specific respondents (97%). While remote work is a hot topic , it’s not in the top three demands for workers in 2022. 35% of respondents said they want remote work options from their existing employers (tech workers were lower with 20%). This trailed other areas like health and wellness (61%), training and development (56%) and CSR efforts like diversity and sustainability (See Chart #2). This could potentially be partly contributed to the fact that remote work still has its fair of challenges. 38% (and 60% of tech respondents) claimed they need better support from their employer with remote solutions, including reliable connectivity. 33% worry they’ll have fewer opportunities for training and reskilling, or that they’ll disappear completely with the rise remote work. In its recently released Workforce of 2022: Reskilling, Remote and More Report, Amdocs surveyed 1,000 full time workers across the U.S. to uncover what they want from their employers, and more importantly, why they’re leaving their jobs. According to the findings, today’s employee has specific criteria for the current and post-pandemic workplace, with several key areas rising above remote work as in-demand perks. Read the full report by Amdocs. 
"
15,176
2,022
"The future of employee perks in the post-pandemic workplace | VentureBeat"
"https://venturebeat.com/2022/02/16/the-future-of-employee-perks-in-the-post-pandemic-workplace"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest The future of employee perks in the post-pandemic workplace Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Zach Dunn, cofounder and VP of Customer Experience at Robin. Organizations today face a rare opportunity to revisit old work standards and make changes that better support employees, attract new talent, and improve the overall productivity of all workers. Hybrid, remote, flexible — today’s workplace is more attuned to the needs of employees than ever before. Of course, there is no change without challenges. The office’s role has shifted from a central hub to a clubhouse where employees come in to collaborate with their peers and leverage company assets to get their work done. As businesses continue to determine the proper balance between remote and in-person collaboration, specialized perks and benefits play a crucial role in the retention and engagement of employees. To uncover what workplace perks matter to employees today and how employers have — or haven’t — adjusted their perks since the onset of the pandemic, we recently surveyed more than 500 full-time employees. We discovered that superficial perks, like ping-pong tables in the break room and free cold brew, are no longer enough and fail to serve the growing number of remote workers. Internet and cell phone stipends, wellness resources, and flexible work mean the most to workers, but these needs change when looking at employees by age and role. Read on to learn more from the findings and how business leaders can adjust their perks to attract and retain talent in this red-hot job market. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Then vs. now Before the world changed, companies prided themselves on perks that engaged and entertained their workers. Even though 85% of employees still feel workplace perks are as important or more important than they were before the pandemic, many witnessed their company’s pre-pandemic perks disappear with the transition to remote work. For those whose companies retained their perks over the last two years, free food/snacks in the office (25%), health and fitness reimbursement (20%), and travel reimbursement (17%) are the three most common carried-over perks. In 2021, workplaces were thrown several curveballs. From emerging variants to delayed opening dates, many employees had to adapt to working in a way they wouldn’t have chosen. 
Many employers have already introduced new offerings to better serve their hybrid workforce, such as additional time off, flexible working hours, and wellness stipends. Still, many leaders have held off amidst the continued uncertainty. While we have little to no control over the state and progress of a global pandemic, we do have control over workplace benefits and perks packages. What employees want today In 2019, Comparably noted that the competitive job market was driving new and creative perks to attract talent. The job market has never been more competitive, with resignations reaching record highs and the talent shortage impacting every industry. Knowing that employee needs have changed over the past two years, it's telling that the top three perks employees most want employers to introduce are more time off (45%), a wellness stipend (30%), and the ability to work from anywhere (29%). As burnout is a serious issue in workplaces worldwide, this isn't surprising. Based on our data, workers would love to see their phone and internet covered — reasonable but rarely provided perks. Workplace perks and the generational divide The generational gap in the workplace can present significant challenges for leaders trying to attract and retain talent alongside the transition to more flexible working models. While millennials view perks and benefits as one of the top three items they look at when choosing an employer, boomers remain apathetic about their organization's workplace perks. Fifty-six percent of respondents within this demographic report that their feelings about perks haven't changed, even though many reported losing perks due to the pandemic and said their employers failed to create any new offerings. Workplace leaders may not keep all of their employees happy all the time, but keeping these contrasting values in mind when revamping your offerings will put you in a much better position to create a productive and happy workforce. For those struggling to get started, distributing a survey to employees for insight into their preferred way of working is a great way to create an open dialogue for workers to share feedback. This best practice informs the process of eliminating existing perks that no longer fit your organization and replacing them with new offerings that align with employees' preferences. Perks have the potential to differentiate companies in a competitive job market. Getting this right is worth the time and effort — hybrid work is here to stay, and it will require updated employee experience strategies to ensure high productivity, loyalty, and engagement. It's a good time to kill sacred cows and build a workplace that offers value to employees' entire lives, not just their careers. Zach Dunn is the cofounder and VP of Customer Experience at Robin.
"
15,177
2,022
"Innovation will drive the success of NFT gaming, not profit or hype | VentureBeat"
"https://venturebeat.com/2022/01/25/innovation-will-drive-the-success-of-nft-gaming-not-profit-or-hype"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Innovation will drive the success of NFT gaming, not profit or hype Share on Facebook Share on X Share on LinkedIn What's the metaverse? Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Olga Vorobyeva, VOX Consulting. Nonfungible tokens ( NFTs ) have been around for the better part of four years, quietly growing in popularity among hardcore cryptocurrency investors. But in 2021, their total market cap jumped from $55 million to more than $7 billion, according to NFTGO.io. Ninety percent of that growth was within the last four months. OpenSea , the largest NFT marketplace, has become a significant player in gaming. In 2021, it reported around $10 billion in total all-time sales, the vast majority of that being made last year. This was due to a $3.4 billion transaction volume during August due to frenzied interest in artistic NFTs. So-called GameFi is becoming an increasing part of OpenSea’s business model. No matter how big that number sounds, it isn’t much compared to standalone blockchain-based games. Axie Infinity, an NFT-focused video game developed on the Ethereum network, surpassed $1 billion in total trade volume in August 2021, perhaps the most prominent 30-day period in the history of NFTs. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! With so much interest in NFTs , it’s only natural that developers have begun to develop the infrastructure necessary to handle what will undoubtedly become a massive secondary market for these assets. In addition, holders want real tangible benefits to holding NFTs, and in a crowded gaming market, new entrants need to differentiate to survive. 2022 is likely the year NFT games become more mainstream, especially now that many crypto investors own these assets. And real innovation, not just in NFTs but in gameplay and mechanics themselves, will be the driving force. While NFT gaming gives gamers a way to earn while playing their favorite games, the industry lacks a social component. The advantage of owning an NFT asset is that it’s yours, and you should be able to use that asset where you want. Here are three innovations that are driving the success of NFT gaming today. VR and AR It’s no secret that Virtual Reality (VR) and Augmented Reality (AR) are the future of gaming. We got a taste of this tech with Pokémon Go, but that was merely a herald of things to come. 
VR and AR offer a much richer experience than a 2D screen ever could; however, we have some ways to go before the technology is fully operational. Smartphone-based AR applications will probably become prominent in specific industries before VR goes mainstream. AR offers the ability to tie in a physical location with NFTs. Like in Pokémon Go, users can visit a real-world place to interact with an object (in this case, an NFT). Both AR and VR offer the ability to make these NFTs real. Imagine a game where you see your NFT, inspect it, purchase it, and use it. An NFT can be a gun in a tactical shooter VR game or a suit of armor in a medieval VR game. Ultimately, the user experience will determine the success of NFT gaming. When NFTs are tied in with a virtual world, it will offer a richer experience than the current craze for digital art NFTs. 3D gameplay, in combination with functional NFTs (bulletproof vests, guns, pets, etc.), will revolutionize gaming. But it will take quite a lot of innovation before we get there fully. Creative distribution models and NFT gaming The way that rewards are distributed is key to enticing gamers into any ecosystem. Gameplay is important, followed closely by ownership and dividends for the time spent on a given platform. Games like Axie Infinity are successful mainly because they are forms of investment as much as entertainment. The play-to-earn (P2E) distribution model is more attractive to gamers as they are fairly compensated for their time. And some blockchain-based platforms are even taking this a step further. Gamerse’s new “share-to-earn” offers further incentives to gamers. While P2E is a lot better than the old model, it still does not provide adequate rewards to gamers. The share-to-earn model is an innovation on the play-to-earn model, which has worked well for many Web3 games. A cross-chain social platform for NFT gamers, Gamerse’s platform gives NFT game developers an easy way to connect with their users and the broader NFT gaming community. With the new share-to-earn model, group participation is rewarded, and the APY yield for the group’s holdings is determined by group activity. Gamerse is also developing an aggregated NFT marketplace and platform for holders to buy and sell NFTs. Swipe Swap, a Tinder-like recommendation engine for NFT collectibles, learns gamers’ preferences over time and offers a smooth process to acquire new NFTs within the app. More games could adopt the share-to-earn model in the future, and new reward models will undoubtedly be a feature in the NFT gaming ecosystem. Gamers now expect to be appropriately rewarded. The evolution of the Multi-Online Battle Arena (MOBA) The multi-online battle arena (MOBA) game genre is red hot. League of Legends, Pokemon, and Mobile Legends are downloaded more than a million times a month , and their accompanying marketplaces generate millions in revenue for their developers each year. However, despite this being perhaps the most popular genre of all time, the games are still based on a centralized architecture, and we haven’t seen this industry disrupted in quite a while. Two of the most popular games, DoTA2 (2013) and League of Legends (2009), are old and need some innovation of their own. MOBA evolution is at hand, and some blockchain-based MOBAs are lining up to disrupt the industry. One example is League of Ancients , which puts the power back into the gamer’s hands. 
The League of Ancients (LoA) mission statement declares that it seeks to become the best MOBA of all time while acknowledging the contributions of other highly successful games. Unlike MOBA games from the big developers, LoA is entirely free to play. And even more importantly, players have the chance to earn while enjoying the gameplay. This is the opposite of other MOBAs built to extract as much money as possible from their customers. Players earn currency and NFT assets through battles. These assets are usable in the game, including skins that change a hero's abilities. Players can trade these assets with other players through an in-game marketplace. The LoA gameplay is based on two of the most popular MOBA games — DoTA2 and League of Legends. This is typical of the industry, which takes successful components from one map or game and adds new elements, themes, features, avatars, and revenue generation models. LoA will certainly not be the only blockchain-based platform looking to disrupt the MOBA industry, large as it is. Other factors contributing to NFT gaming success Other elements could potentially play a role in the success of NFT gaming. It's hard to ignore the ongoing COVID-19 pandemic, which has relevance even in a gaming article. But gaming has already proven its worth, as people who lost their jobs earned an income with platforms like Axie Infinity. The fact that gaming has become a legitimate source of revenue through NFTs is quite noteworthy. The quality of the blockchains that NFT games are built on should also not be ignored or relegated to the side. Most successful blockchain games were built on Ethereum, which was the only chain offering this functionality at an acceptable level. With high-powered chains like Solana and Binance Smart Chain, or with significant Ethereum upgrades, more functionality could be provided that directly supports better-quality games able to cater to thousands or even millions of online gamers, at all times, in rich environments. We don't know how much bandwidth will be needed for futuristic metaverse games involving complex NFTs, but it's safe to say it will be significant. After all, games can only get better at a rate proportional to the network on which they are built. However, there is plenty of room for excellent game design, and it's more about mainstream adoption than technological innovation. An industry of innovation, not profits People want something new. The utility of the blockchain adds a necessary layer of interaction and ownership that could never exist before. Introducing these elements in new and novel ways has huge, reverberating effects on the entire industry. As soon as someone does something interesting, that concept is introduced in five established projects and 10 upcoming ones. The growth is incredible, and innovation leads the charge, not profit or hype. Olga Vorobyeva is the founder of Vox Consulting, a marketing firm for blockchain, DeFi, and NFT startups, and a former head of marketing at SwissBorg, the first crypto wealth management platform (top 100 on CoinMarketCap).
"
15,178
2,022
"What Meta’s metaverse means for enterprises | VentureBeat"
"https://venturebeat.com/2022/02/07/what-metas-metaverse-means-for-enterprises"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What Meta’s metaverse means for enterprises Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Meta has doubled-down on its metaverse ambitions with over $10 billion in promised investments, many acquisitions, and a name change. But what does this really mean for the metaverse for enterprises? Meta’s many efforts to improve its Quest (Oculus) VR hardware, develop comfortable AR glasses, and AI backbone could transform the way business managers, engineers, customer support, and repair technicians interact with the real world. But a lot of work will be required to catch up with Microsoft, which is the dominant leader in this space. Microsoft is already starting to connect the dots between its HoloLens mixed reality hardware, digital twins , cloud service, and Mesh – its upcoming metaverse for Teams. [ Special report : The Metaverse: How close are we? ] Many experts believe that Meta will focus on extending its strength to capture attention at scale and then monetize it through advertising. David Pring-Mill, founder and chief analyst at Policy2050, a technology analyst firm, said, “Their preferred use cases for a metaverse would likely represent a continuation of that business model.” Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Down the road, he expects that engineering around the needs of engineers could represent a viable B2B pathway for Meta and other players. Coming to market with the best and simplest hardware could drive buy-in for digital twins projects, especially when resources are thin. “I think that VR/AR is a plausible way to get other people to visualize or emotionally respond to new projects,” Pring-Mill said. Metaverse will Focus on collaboration Meta’s early wins in the enterprise are likely to focus on improving collaboration. Chris Mattmann, chief technology and innovation officer at NASA JPL and adjunct research professor at USC, is already experimenting with using Quest2, Spatial.io, and Facebook Horizons to improve remote collaboration. “Having meetings in the Metaverse re-creates company culture and brings people together even though they are perhaps now permanently apart,” Mattmann said. Facebook realizes this and, for collaboration and meetings, has made it easy to build apps on the Quest2. These apps push a lot of the complexity into end-user devices such as Quest2, iPhones, Macs, and PCs. 
It’s still a challenge, especially for the less technically savvy. But Mattmann finds that Meta’s “Metaverse OS” is getting simple enough for more users to leverage. Enterprise is nice to have for Meta Other VR experts say that business use cases are likely to take a backseat to Meta’s consumer focus on social media. “For Meta, the enterprise is a nice-to-have business line, and its main focus is the consumer market,” said Antony Vitillo, VR developer for New Technology Walkers in Turin, Italy. Although Meta is introducing some business features in its vision of the Metaverse, its master plan will be devoted to consumers, he said. This echoes what it is already doing on the web, where its Workplace app is targeted at enterprises but not a main source of revenue. In the short term, he expects Meta to focus on implementing its “defy distance” vision into the enterprise space, too. Early efforts include Horizon Workrooms for organizing meetings, Quest for business, and Infinite Office for accessing business apps in VR. Accenture bought 60,000 Oculus Quest 2 units last year to favor the onboarding and training of its new employees. “The pandemic has accelerated the adoption of remote working technology, and while VR is not ready yet to be used for eight hours a day, it hopefully will in a few years,” Vitillo said. Digital twins and the metaverse The broader adoption of low-cost VR hardware could also accelerate the adoption of digital twins workflows in the enterprise. Offering a high-fidelity and immersive virtual environment that mirrors what is happening in the real world could be incredibly useful for planning around physical facilities. “Imagine an airport that could simulate what would happen if they saw a record passenger flow if baggage conveyor systems stopped working, or what that advertisement would look like in that particular spot,” said Brian Jackson, research director in the CIO practice at Info-Tech Research Group. Meta could combine its capabilities constructing virtual worlds with its synthetic data generation to create this sort of virtual experimentation ground for businesses. This could build on Meta’s recent acquisition of AI.Reverie, which specializes in synthetic-data generation, specifically for training cognitive vision algorithms with video and images. This would be useful for training or planning renovations to a facility. Instead of reconstructing an environment in a digital space, Meta could also map digital information on top of the real world. This could help engineers and technicians keep their hands free while seeing digital annotations on top of machinery they are working to repair or operate. This could combine Meta’s mixed reality capabilities with its social management capabilities, so workers could collaborate with experts in real time, said Jackson. Bumpy road ahead for Meta’s metaverse and the enterprise Meta still has a few bumps in the road to navigate to gain traction in the enterprise. For starters, it needs to build buy-in from employees who may push back due to fears about mismanagement of their data or attention. “Facebook has always guaranteed that enterprise data does not have the same treatment as consumer data on its platform, but not everyone trusts them,” said Vitillo. It also has to contend with competitive hardware from HTC, which Vitillo contends is better-suited for the enterprise market (Vive Focus 3), better enterprise services, and its enterprise meeting tools (Vive Sync). 
Other enterprise metaverse competitors include Engage, Immersed, and Microsoft's pending Mesh for Teams. Meta also faces significant competition for the metaverse backend from Nvidia's Omniverse. Pring-Mill said: "I would apply a higher success probability to Nvidia than to Meta, despite the nearly $250 billion difference in market cap. Facebook's DNA involved gradually transforming social credibility into data monetization after scaling its efforts more competently than predecessors like Friendster and Myspace. I believe that their social credibility has been eroded by scandals, their critical mass might not fully transfer, and the technical challenges of seamless AR or more comfortable VR are significant." "
15,179
2,021
"Harness your data to make weak AI your strength | VentureBeat"
"https://venturebeat.com/2021/10/11/harness-your-data-to-make-weak-ai-your-strength"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Harness your data to make weak AI your strength Share on Facebook Share on X Share on LinkedIn Presented by DataStax For enterprise IT, 2020 was a year defined by coming through in the clutch. Most organizations successfully stood up with new ways for employees to work remotely or interact with customers far faster than previously thought possible. But as we transition from focusing on maintaining business continuity toward driving growth, we should not lose sight of the forest for the trees. The leaps forward companies made in response to the COVID crisis set them up to benefit from virtuous cycles that complement and reinforce each other to turbocharge growth. This results in growing the scope and scale of digital interactions with customers that increases an organization’s ability to pervasively deploy “weak AI” to improve the top and bottom line. As more organizations do this, the cycle of innovation for relevant open-source tools and technologies accelerates. Most certainly, companies that will win the decade will make the most of this. Some are well on their way. Delightful real-time experiences are the foundation Some data-driven, real-time app experiences are so good at saving us time, hassle, or money — or even expanding our horizons — that the word “delightful” may come to mind. Spotify, Netflix, or Waze might be top of mind for some. For others, it might be Target’s popular Circle app — “your shopping and saving sidekick” that promises you will “never miss a deal.” These examples reflect a broader pattern. A Boston Consulting Group survey , for example, found that consumers who experienced the highest level of personalized experiences were more likely to buy something other than what they’d originally planned, spend more money, and also report a higher net promoter score score for the retailer. Using data that originates in applications and is streamed, stored, and immediately analyzed to initiate action means winning some combination of more and more valuable interactions with customers — and winning more customers. This can create a customer-facing virtuous cycle if you are able to re-deploy more and more types of data to improve existing experiences or offer new ones. The Home Depot is a great case study for how this works. Real-time experiences that contributed to resilience during the pandemic included rapid deployment of curbside pickup functionality. It contributed to an 86% increase in app and mobile sales for the company in 2020. 
But it also helped enrich the company’s arsenal of data: total quarterly customer transactions (both in-store and online) increased by nearly 73 million year-over-year to 447.2 million. That’s a lot of additional data to feed the AI that a company might deploy. For example, it could fuel voice-activated product searches that are smart enough to identify groups of items that would typically be needed for specific projects. Ubiquitous weak AI is the reward As you serve customers with more and more types of data-generating interactions, more and more types of data flow through your organization’s operational systems. You can create a data-driven virtuous cycle within your organization, too. More data for any function or combination thereof (sales, service or support, inventory management, logistics or supply chain management, sensor monitoring, or even staff scheduling and credit risk assessment or fraud detection, for example) positions you to make widespread use of “weak AI.” Simply put, “weak AI” is delegating to computer systems tasks traditionally handled by people. It won’t necessarily make headlines but it can materially improve business process efficiency and outcomes, over and over again, provided the scope (type) and scale (volume) of data available are sufficient to recognize patterns. “Process intimacy” among internal teams such as finance or human resources might be considered a parallel concept to the “customer intimacy” that externally focused business units bring to bear in finding valuable ways to use data. Functional teams can use weak AI to, for example, reduce errors, spoilage, re-work, or stockouts. At The Home Depot, for example, models are watching weather activity and inventory in order to give replenishment teams a jump on events like hurricanes so they can route items like generators to stores where demand is likely to surge. The open source cycle of innovation is your ally Open source software (OSS) already plays a vital role in enterprise infrastructure. As the basis of competition shifts toward AI and machine learning, there is good reason to make leaning into the OSS ecosystem part of your data strategy. OSS dominates the toolbox of technologies with which to deliver delightful, smart, real-time experiences. This includes, for example, Apache Cassandra (open sourced by Facebook in 2008), Apache Kafka (open sourced by LinkedIn in 2011), and Apache Pulsar (open sourced by Yahoo in 2016). Everyone has access to a growing number of best-of-breed technologies, but your organization has something no one else does: the data generated by your unique combination of brand, operating model, and customer experiences. You should seize the opportunity that OSS presents to focus your people on tapping into domain knowledge to innovate. Both Target and The Home Depot, for example, make OSS the foundation for the custom apps that they build to drive differentiated experiences (for the latter, building on OSS is a mandate ). This almost certainly will lead to contributing to the OSS ecosystem (as both of these retailers do) in ways that grow out of driving your distinctive data strategy. That’s a virtuous cycle, too: in doing so, you’ll find other members of the ecosystem who have arrived at similar problems to solve. 
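To make the idea of “weak AI” above concrete, here is a minimal sketch of the kind of pattern recognizer the article describes: a model trained on historical, labeled transaction data to flag likely fraud. It assumes scikit-learn and pandas, and the file name, feature columns, and label are hypothetical placeholders rather than anything drawn from the companies mentioned here.

```python
# Minimal sketch of "weak AI": a pattern recognizer trained on historical,
# labeled transactions to flag likely fraud. Data and feature names are
# illustrative placeholders, not a real dataset.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

transactions = pd.read_csv("historical_transactions.csv")  # hypothetical export
features = transactions[["amount", "items_in_basket", "hours_since_last_order"]]
labels = transactions["was_fraud"]

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Evaluate on held-out data, then score new transactions as they stream in.
print(classification_report(y_test, model.predict(X_test)))
```

The point is less the particular model than the precondition the article stresses: the wider the scope and the larger the scale of data flowing through operational systems, the more such delegated tasks become feasible.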
As more organizations compete on data, the OSS cycle of innovation for enabling technologies will accelerate, expanding the boundaries of what’s technically possible without your organization bearing the full cost and risk of R&D — and increasing the returns of betting on OSS. Time is of the essence The good news about complementary virtuous cycles is they deliver compounding returns over time. The bad news about compounding returns over time is that any delay getting in the game can make catching up a tall order. Learn how leading organizations improve the customer experience and boost revenue with data in the research report, “The State of the Data Race 2021.” Bryan Kirschner is Vice President, Strategy at DataStax. Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. "
15,180
2,021
"30 startups that show how open source ate the world in 2021 | VentureBeat"
"https://venturebeat.com/2022/01/03/30-startups-that-show-how-open-source-ate-the-world-in-2021"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 30 startups that show how open source ate the world in 2021 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Let the OSS Enterprise newsletter guide your open source journey! Sign up here. It has been a busy year in the open source software sphere , from high-profile license changes to critical zero-day vulnerabilities that sent businesses into meltdown. But in among all the usual excitement that permeates the open source world, countless open source startups launched new products, attracted venture capitalist’s (VC) money, and generally reminded us of the role that open source plays in today’s technological landscape — including the data sovereignty and digital autonomy it promises companies of all sizes. Here, we take a look at some of the fledgling commercial open source companies that gained traction in the past year, revealing where enterprises and investors are betting on the power of community-driven software. Polar Signals: Cutting cloud bills Continuous profiling belongs to the software monitoring category known as observability. It’s chiefly concerned with monitoring the resources that an application is using, such as CPU or memory, to give engineers deeper insights into what code — down to the line number — is consuming the most resources. This can help companies reduce their cloud bill, given that most of the major cloud platform providers charge on a consumption basis. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! While there are a few continuous profiling products on the market already, Polar Signals officially went to market in October 2021 with the launch of an open source project called Parca. At the same time, Polar Signals raised $4 million in seed funding from Alphabet’s venture capital arm GV and Lightspeed, as it gears up to launch a commercial hosted product in 2022. Unleash: Open source feature management Above: Unleash: An open source feature flag platform Feature management is an important part of the continuous release/continuous deployment ( CI/CD ) process, one that allows developers to test new features incrementally with a small subset of users, turn features on or off, and A/B test alternatives to gain insights into what works best — without having to ship a whole new version. Unleash is an open source platform that promises companies greater flexibility and control over their data and feature management deployment. 
The company raised $2.5 million last year to build on its recent growth, which has seen it secure customers such as Lenovo and U.S. manufacturing giant Generac. Conduktor: A GUI for Kafka Above: Conduktor founders Stéphane Maarek (CMO), Stéphane Derosiaux (CTO), and Nicolas Orban (CEO) Companies that need real-time data in their applications often use Kafka , an event streaming platform built to handle common business use-cases such as processing ecommerce payments, managing signups, matching passengers with drivers in ride-hailing apps. Around 80% of Fortune 100 companies use Kafka to store, process, and connect all their disparate data streams — but Kafka requires significant technical nous and resources to fully leverage, which is where Conduktor is setting out to help with an all-in-one graphical user interface (GUI) that makes it easier to work with Kafka via a desktop client. Conduktor last year raised $20 million in a series A round of funding led by Accel, as it looks to “simplify working with real-time data” on the Kafka platform. Scarf: Measuring open source software Above: Scarf: Example dashboard While open source software may well have eaten the world , the developers and companies behind open source projects often lack meaningful insights into their project’s use and distribution, something that Scarf has set out to solve. The company’s core Scarf Gateway product serves as a central access point to all open source components and packages wherever they are hosted, and provides key usage data that the registry provider typically doesn’t offer. This includes which companies are installing a particular package; which regions a project is most popular in; and what platforms or cloud providers the package is most commonly installed on — it’s similar to Google Analytics, but for open source software. After emerging from stealth back in March with $2 million in seed funding, the company went on to raise a further $5.3 million. Rudderstack: Leveraging customer data Above: Rudderstack: Integrations Customer data platforms (CDPs) bring the utility of customer analytics to non-technical personnel such as marketers, allowing them to derive key insights from vast swathes of data — CDPs serve as a unified customer database built on real-time data such as behavioral, transactional, and demographic, drawn from myriad sources. While there are many CDPs to choose from, RudderStack is a developer-centric, open source alternative that affords companies more flexibility in terms of how they deploy their CDP. Indeed, it’s pitched as “data warehouse-first,” which means that users can retain full control over all their data in their own warehouse. To accelerate its growth, RudderStack last year announced a $21 million series A round of funding led by Kleiner Perkins. AtomicJar: Integration testing AtomicJar is setting out to commercialize Testcontainers , a popular open source integration testing framework used at major companies including Google, Oracle, and Uber. While unit testing is all about testing individual software components in isolation, integration testing is concerned with checking that all the components operate as they should when connected together as part of an application. Founded last March, AtomicJar has invited a number of enterprises to participate in a private beta to trial various enhancements and extensions that it’s adding to Testcontainers. To help, the company last year raised $4 million in seed funding. 
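As a rough illustration of the integration-testing workflow Testcontainers enables, the sketch below uses the community Python port of the library to spin up a throwaway Postgres container for a single test. The package, image tag, and schema here are assumptions for illustration; it requires a local Docker daemon, and AtomicJar’s commercial enhancements are not shown.

```python
# Integration-test sketch using the Python port of Testcontainers: the test
# runs against a real, disposable Postgres instance rather than a mock.
# Assumes: pip install testcontainers sqlalchemy psycopg2-binary, plus Docker.
import sqlalchemy
from testcontainers.postgres import PostgresContainer

def test_orders_roundtrip():
    with PostgresContainer("postgres:14") as postgres:
        engine = sqlalchemy.create_engine(postgres.get_connection_url())
        with engine.begin() as conn:
            conn.execute(sqlalchemy.text(
                "CREATE TABLE orders (id SERIAL PRIMARY KEY, total NUMERIC)"))
            conn.execute(sqlalchemy.text(
                "INSERT INTO orders (total) VALUES (19.99)"))
            count = conn.execute(
                sqlalchemy.text("SELECT count(*) FROM orders")).scalar()
        assert count == 1
```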
Bit: Bringing microservices to the frontend Above: Bit: How a hotel booking website might look when broken down into multiple components Microservices is a familiar concept in backend engineering, but it has been gaining steam in the frontend sphere too as companies explore ways to leverage a flexible, component-based architecture across the entire development process. And this is where Bit is hoping to carve its niche. Bit provides open source tools and a cloud platform to help frontend developers collaborate and build component-driven software. At its core, Bit makes it easier for companies to split frontend development into smaller features and codebases, allowing teams to develop features independently, while continuously integrating as part of a unified application. The company announced a $25 million series B round of funding back in November , as it prepares to launch new products such as Ripple CI , which continuously integrates component changes from across all applications and teams in an organization. Ripple CI is scheduled to launch later in 2022. Cerbos: Managing user permissions Companies often need to enable different user permissions in their software, so some employees can only submit expense reports, for example, while others can “approve” the expenses or mark them as “paid.” These various permissions might vary by team, department, and geographic location — and companies need to be able to set their own user permission rules. There are plenty of tools in the identity and access management (IAM) space that allows for this already, but a young company called Cerbos is setting out to streamline how software developers and engineers manage user permissions, while also addressing the myriad access control compliance requirements driven by regulations and standards such as GDPR and ISO-27001. Cerbos is adopting a self-hosted, open source approach to the user permissions problem, one that works across languages and frameworks — and one that gives companies full visibility into how it’s handling user data. To help build a commercial product on top of the open source platform, Cerbos recently announced it had raised $3.5 million in a seed round of funding. Chatwoot: Customer engagement Above: Chatwoot: Shared inbox Chatwoot has built an open source platform to challenge some of the major players in the customer engagement software space, including the multi-billion dollar publicly-traded company Zendesk. The core Chatwoot platform constitutes a shared inbox that allows companies to connect all their various communication channels in a single, centralized location, while it also offers a live chat tool, native mobile apps, and myriad out-of-the-box integrations. As with other open source companies, Chatwoot promises greater data control and extensibility versus the proprietary incumbents. The Y Combinator (YC) alum announced a $1.6 million seed round of funding back in September. Cal.com: An open source Calendly alternative With Calendly now a $3 billion company , this has shone a light on the broader meeting scheduling space as companies search for new tools to cut down on needless, repetitive admin. With that in mind, Cal.com last year launched what it calls “scheduling infrastructure for everyone,” aimed at anyone from yoga instructors and SMEs all the way through to enterprises. 
Similar to Calendly, meeting organizers use Cal.com to share a scheduling link with invitees, who are then asked to choose from a set of time slots — the slot that everyone can make is then added to everyone’s calendar. Above: Cal.com in action As an open source product available via GitHub , however, companies using Cal.com can also retain full control of all their data through self-hosting. Moreover, they can manage the entire look-and-feel of their Cal.com deployment via its white-label offering. If users don’t want the hassle of self-hosting, Cal.com is available as a fully-hosted service too. Cal.com recently announced that it has raised $7.4 million in seed funding from a slew of angel investors and institutional backers, including YouTube cofounder and former CEO Chad Hurley. PostHog: Open source product analytics Above: PostHog: Feature flags PostHog is an open source alternative to popular product analytics platforms such as Amplitude, serving companies with data on how people are using their products, insights into notable trends, and — ultimately — removing bottlenecks and reducing churn. The company last year raised $15 million in a series B round of funding from notable backers including Alphabet’s venture capital arm GV, while it also launched a new self-hosted plan that lets companies track their product engagements on their own infrastructure for free. Hoppscotch: Open source API development Above: Hoppscotch for teams APIs (application programming interfaces) are the glue that holds most modern software together — they are what bring data to sales and marketing teams; privacy to banking and health care apps; and maps to your fitness-tracking app. And that is why Hoppscotch is striving to build what it calls an “API development ecosystem,” with open source at its core. The Hoppscotch platform includes several integrated API development tools, aimed at engineers, software developers, quality assurance (QA) testers, and product managers. In pursuit of commercialization, Hoppscotch recently announced it had raised $3 million in a seed round of funding from a slew of investors including WordPress.com parent company Automattic and OSS Capital. Element: Open source team communications Above: Element: An instant message app built on Matrix There are several open source Slack alternatives out there, one of which is Element — the company behind an end-to-end encrypted team messaging platform powered by the Matrix protocol. Matrix is something akin to a telephone network or email, insofar as it’s an interoperable communication system that doesn’t lock people into a closed ecosystem. Because Element is built on Matrix, it essentially serves as a catalyst for the growth of the broader Matrix network. And to help it push further into the commercial sphere, Element last year raised $30 million in a series B round of funding. MindsDB: Giving enterprise databases a brain MindsDB enables companies to make machine learning-powered predictions directly from their database using standard SQL commands, and visualize them in their application or analytics platform of choice. In the company’s own words, it wants to “democratize machine learning by giving enterprise databases a brain.” There are many use cases for MindsDB, such as predicting customer behavior, improving employee retention, credit-risk scoring, and predicting inventory demand — it’s all about using existing data to figure out what that data might look like at a later date. 
MindsDB ships in three broad variations, including a free and open source incarnation that can be deployed anywhere. To further develop and commercialize its product, MindsDB recently announced it had raised $3.75 million in seed funding and unveiled partnerships with major database brands, including Snowflake, SingleStore, and DataStax. TerminusDB: Open source graph database Knowledge graphs enable businesses to extract new information by aggregating and analyzing connections between large volumes of internal data. Music streaming services, search engines, fraud detection software, and more can all be aligned through their use of knowledge graphs to derive insights from disparate data that may not seem closely related. While several larger established graph database companies raised sizable sums in 2021, some newer players also raised VC cash, suggesting that the graph database space has room for growth. One of those was TerminusDB , which raised $4.3 million in seed funding to build what it calls a “knowledge collaboration infrastructure” for the internet, combining an open source graph database and document store with the commercial, cloud-based collaboration TerminusHub built on top of TerminusDB. The company is also working on a cloud-based version of TerminusDB. Open source Firebase rivals Above: Nhost founders Johan Eliasson and Nuno Pato. The burgeoning backend-as-a-service (BaaS) market was pegged at $1.6 billion in 2020 , a figure that’s predicted to grow to nearly $8 billion within six years. The value for companies and developers is that BaaS enables them to forget about infrastructure and put all their efforts into the front end, while open source can also help ensure that they are not locked into any specific ecosystem. With that in mind, a handful of young open source upstarts have emerged to challenge the big incumbents such as Google’s Firebase. Nhost With Nhost , companies can automate their entire backend development and cloud infrastructure spanning file storage, databases, user authentication, APIs, and more. The company last year raised $3 million from a slew of notable investors, including GitHub founders Scott Chacon and Tom Preston-Werner. Appwrite Similarly, Appwrite is a self-hosted BaaS solution for web and mobile app development — it includes user authentication, file storage, a database for storing and querying data, API management, security and privacy, and more. The company last year announced $10 million in funding as it prepares to launch its cloud product in 2022. Supabase Much like Nhost and Appwrite, Supabase pitches itself as an open source Firebase alternative, one that allows developers to create an entire backend in minutes. The company announced a $30 million series A round of funding back in September. Data integration Above: Airbyte: Data replication Businesses often have a wealth of data spread across tools such as CRM, marketing, customer support, and product analytics. While accessing the data isn’t the problem, deriving meaningful insights from data stored in different locations and formats is — this means that businesses have to combine it in a centralized location and transform it into a common format that makes it easier to analyze. A typical process for achieving this is what’s known as “extract, transform, load” (ETL), which involves transforming the data before it arrives in a central data warehouse. 
Though a more modern alternative — “extract, load, transform” (ELT) — allows companies to transform the raw data on-demand when it’s already in the warehouse. While there are pros and cons to both methods, we’re seeing countless companies emerge to tackle the broader data integration problem, with open source serving as a common theme throughout. Airbyte It was a rollercoaster of 12 months for open source data integration platform Airbyte , which announced its $5.2 million seed fundraise in March and then swiftly followed this up with a $26 million series A and $150 million series B which valued the company at $1.5 billion. In the midst of all this, Airbyte — which was only founded in 2020 — announced its first data lake integration , starting with Amazon’s Simple Storage Service (S3). Dbt Labs Fishtown Analytics, the company behind an open source “analytics engineering” tool called dbt (data build tool), rebranded as Dbt Labs and raised $150 million in a series C round of funding at a $1.5 billion valuation. Analytics engineering refers to the process of taking raw data after it enters a data warehouse and preparing it for analysis, meaning that dbt effectively serves as the “T” in ELT. Estuary Combining data from SaaS applications and other sources to unlock insights is a major undertaking, one made all the more difficult when it comes to real-time, low-latency data streaming. And this is where Estuary enters the fray, with a fully-managed ELT service — built on top of the open source Gazette project — that combines the benefits of both “batch” and “stream” data processing pipelines. The company raised a $7 million seed round of funding last year. Meltano GitLab had initially debuted Meltano back in 2018, and through various iterations, it ended up as an open source platform for data integration and transformation. Last year, however, GitLab spun out Meltano as a standalone business , with backing from major investors including Alphabet’s GV. Preset Preset was founded by Apache Superset (and Airflow) creator Maxime Beauchemin. Superset is a data exploration and visualization platform, upon which Preset offers enterprise hosting, security, compliance, governance, and more. The company last year launched its fully-managed cloud service out of beta and raised $35.9 million in series B funding. Treeverse Data lakes that constitute petabytes of different datasets can become unwieldy and difficult to manage. This is where young startup Treeverse is setting out to help, with an open source platform called LakeFS that enables enterprises to manage their data lake in a way similar to how they manage their code — this includes version control and other Git-like operations such as branch, commit, merge, revert, and full reproducibility of all data and code. The company last year raised $23 million in a series A round of funding. Cube Dev Once a company has combined and transformed all its data, how do they actually leverage this data to create internal business intelligence dashboards or add analytics to existing customer applications? This is something that Cube Dev is setting out to solve. Cube Dev is the company and core developer behind the open source “analytical API platform” Cube.js , which gives developers the backend infrastructure to connect their aggregated and transformed data to end-user visualizations. It helps circumvent many of the technical barriers — such as SQL generation, caching, API design, and security — that are involved in making data useful. 
Back in July, Cube Dev announced it had raised $15.5 million in a series A round of funding to commercialize Cube.js, which included launching a cloud-hosted SaaS version of the open source project. Kubernetes is king Above: Nirmata dashboard The rise of Kubernetes since Google open-sourced the project back in 2014 highlights a broader industry push toward containerized applications. This was a trend that continued into last year, with the recently-published State of Cloud Native Development report indicating that 31% of all backend developers use Kubernetes today, representing a 67% year-over-year increase. And as with just about every other hot open source project out there, Kubernetes is giving rise to a slew of commercial companies. Nirmata Nirmata is setting out to “ conquer Kubernetes complexity ” with a unified management platform for Kubernetes clusters. The company is the creator of and chief contributor to Kyverno , an open source policy engine for Kubernetes, and last year it raised $3.6 million in pre-series A funding to “capitalize on the full potential of Kubernetes-native policy management.” Rafay Systems Rafay Systems is a platform that unifies the lifecycle management for Kubernetes infrastructure and apps, bringing together capabilities spanning automation, security, visibility, and governance — the company last year raised $25 million in a series B round of funding. Loft Labs Loft Labs promises self-service Kubernetes access for all developers in a company. The company, which raised a $4.6 million seed round of funding last year, has open-sourced several Kubernetes projects, on top of which sits its commercial product known as Loft, which enables enterprises to “scale self-service access” to Kubernetes across the engineering workforce. Kubermatic Similar to Loft Labs, Kubermatic targets developers with a self-service Kubernetes platform for deploying their clusters across any infrastructure, and enabling them to centrally manage all their workloads from a single dashboard. The company last year raised $6 million in a seed round of funding. Akuity Akuity emerged from stealth last year with $4.5 million in seed funding to be the “ Argo enterprise company for Kubernetes app delivery. ” Akuity was founded by the co-creators of Argo , a popular open source project for orchestrating Kubernetes-native application delivery that is used at major companies including Google, Tesla, GitHub, and Intuit. "
15,181
2,021
"Dbt Labs raises $150M to help analysts transform data in the warehouse | VentureBeat"
"https://venturebeat.com/2021/06/30/dbt-labs-raises-150m-to-help-analysts-transform-data-in-the-warehouse"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Dbt Labs raises $150M to help analysts transform data in the warehouse Share on Facebook Share on X Share on LinkedIn Dbt Labs cofounder and CEO Tristan Handy Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Let the OSS Enterprise newsletter guide your open source journey! Sign up here. Fishtown Analytics , the company behind an open source “analytics engineering” tool called dbt (data build tool), today announced it has rebranded as Dbt Labs and raised $150 million in a series C round of funding at a $1.5 billion valuation. Analytics engineering, for the uninitiated, is a relatively new role that describes the process of taking raw data after it enters a data warehouse and preparing it for analysis. The role itself serves as a bridge of sorts between the data engineering and data analytics spheres, requiring them to transform data into a usable form that can be easily queried by others (e.g. marketers) at the company. Dbt Labs juxtaposes the role against that of a data analyst in this description : Analytics engineers provide clean data sets to end users, modeling data in a way that empowers end users to answer their own questions. While a data analyst spends their time analyzing data, an analytics engineer spends their time transforming, testing, deploying, and documenting data. Founded out of Philadelphia in 2016, Dbt Labs has spent the past five years designing a toolset to help data analysts “create and disseminate organization knowledge,” as it puts it. This has largely meant offering consulting services on top of the open source dbt project, which is used by major companies including HubSpot, GitLab, and JetBlue. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! But what is dbt, exactly? In a nutshell, dbt is a command-line tool that enables data analysts to transform raw data by writing dbt code in their usual text editor, and then invoking dbt from their command line. Dbt then compiles the code into SQL and executes it against the company’s database. Thus, dbt is a development environment that “speaks the preferred language of data analysts” (that’s SQL, in case you were wondering). Above: User interacting with dbt For context, a modern enterprise data stack comprises myriad components, spanning data ingestion tools such as Fivetran and cloud-based data warehouses such as Snowflake and Google’s BigQuery. 
Data can be “transformed” on entry to the data warehouse as part of a process known as “extract, transform, load” (ETL). But data can also be transformed later by running SQL scripts directly in the warehouse, through a process that is known as “extract, load, transform” (ELT). The latter achieves faster loading times but requires more processing power, as the data needs to be transformed on-demand — that is where the power of modern analytics databases such as Snowflake and BigQuery really shines. Put simply, dbt is the “T” in ELT — it’s built to transform data that already lives in a data warehouse. “Dbt is a key piece in the modern data stack — it connects to the cloud data platform and leverages all the computing power of these platforms to transform, test, and deploy data sets,” Dbt Labs CEO and cofounder Tristan Handy told VentureBeat. Post-transformation, companies can use these datasets for whatever they wish, be it to train machine learning models or to feed into business intelligence (BI) tools such as Tableau or Looker. The story so far According to Handy, he developed dbt initially based on his own experiences as a data analyst. “I worked as a data analyst for a decade-and-a-half and was always slowed down by terrible workflows — emailing spreadsheets back and forth, downloading massive .CSV files, saving SQL files on my desktop,” he said. Fast-forward five years, and Handy said that dbt adoption has grown 200% each year since its launch, and in Q1 2021 his company’s enterprise revenue doubled year-on-year. The main driving force behind this, as is seemingly the case with just about every new technology these days, is the rapid transition from on-premises infrastructure to cloud computing — in this case, cloud-based data platforms such as Databricks, BigQuery, Snowflake, and Amazon Redshift. “The big shift for our industry is the transition to the cloud,” Handy said. “The modern cloud data platforms are all a fundamentally new class of ‘thing’ that was simply not possible in the on-premises world of a decade ago. Data grows very quickly, and processing workloads on top of that data are highly variable. Both of these factors mean that the elasticity of the cloud is just supremely important, and it’s our belief that all — or appreciably all — data workloads will migrate to the cloud in the coming decade.” The scalability and elasticity afforded by the cloud open the doors to things that just weren’t an option before, such as the ability to perform data transformation in the warehouse, which speeds things up greatly. “The fundamental unlock of the cloud has meant that performance became much less of an issue, which has enabled data analysts to take over the entire insights-generation process,” Handy continued. “This has, in turn, led to the rise of analytics engineering — the practice by which analysts construct modern pipelines on top of cloud data platforms.” Show me the money Prior to now, Dbt Labs had raised around $42 million, the entirety of which came in the past 14 months across two separate rounds of funding. With its latest cash injection — which was led by Sequoia Capital, Andreessen Horowitz, and Altimeter — the company said that it will double down on the development of its core open source platform. “Right now our focus is on improving our core offering and supporting its exponential growth as the foundation of one of the highest-growth areas in all of enterprise software,” Handy said. 
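For readers who want a concrete picture of “the T in ELT” described above before the story continues, the sketch below shows an in-warehouse transformation driven from Python: the raw data is already loaded, and a clean, analysis-ready table is derived from it by pushing SQL down to the warehouse. This is a generic illustration rather than dbt’s own syntax (dbt models are SQL templates run via the dbt CLI), and the connection string, schemas, and table names are hypothetical.

```python
# Bare-bones illustration of in-warehouse transformation ("the T in ELT").
# dbt formalizes this pattern with version-controlled SQL models, testing,
# documentation, and dependency management. Connection details are hypothetical.
import sqlalchemy

engine = sqlalchemy.create_engine(
    "postgresql://analyst:secret@warehouse.example.com:5432/analytics"
)

TRANSFORM_SQL = """
CREATE TABLE IF NOT EXISTS analytics.daily_revenue AS
SELECT
    order_date::date AS day,
    SUM(amount)      AS revenue,
    COUNT(*)         AS orders
FROM raw.orders
WHERE status = 'completed'
GROUP BY 1
"""

with engine.begin() as conn:
    conn.execute(sqlalchemy.text(TRANSFORM_SQL))
```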
“We also have our eye on some experimental new areas of product development, but nothing we’re ready to share yet.” As for the rebrand, well, that also makes a great deal of sense given how Fishtown Analytics and dbt have evolved over the past five years. Initially, dbt was purely an open source product with no commercial component — Fishtown Analytics was the project’s primary contributor and user, and it sold consulting services on top of the open source project. In the intervening years, however, dbt gained its own premium Team and Enterprise plans , which include API access, single sign-on, professional services, and more. For this reason, there is no need for two separate “brands” monetizing the same product, something which could also cause confusion. “This confusion was a big motivator for changing our name from Fishtown Analytics,” Handy said. “In renaming the company, we’re making a statement of both our relationship to dbt — which we created and maintain — and commitment to its long-term success.” Above: Dbt interface The OSS factor According to Dbt Labs, there are some 15,000 “data professionals” in the dbt community Slack, 5,500 companies using dbt, and 1,000 dbt cloud customers who pay for centralized access through a web-based interface. However, given that dbt is released under a permissive Apache 2.0 license, there are very few restrictions on how the broader commercial world adopts it. So couldn’t this mean that other deep-pocketed companies could look to build on top of dbt? It very much could, which is partly why Dbt Labs has chosen to raise another sizable chunk of funding so soon after the previous two rounds. “Dbt drives a tremendous amount of usage across several of the major cloud data platforms — that makes dbt and its community very strategic for these platforms,” Handy said. “We also know that the major cloud providers love selling managed versions of open source software. Put those two things together and our expectation is that at least one, if not more, of the cloud platforms will launch some sort of managed dbt service in the coming year. Our space is heating up, and this forces us to accelerate our ability to build differentiated products. That’s why we raised again.” "
15,182
2,022
"Europe’s Move Against Google Analytics Is Just the Beginning | WIRED"
"https://www.wired.com/story/google-analytics-europe-austria-privacy-shield"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Matt Burgess Security Europe’s Move Against Google Analytics Is Just the Beginning Illustration: Elena Lacey Save this story Save Save this story Save The Austrian website of medical news company NetDoktor works like millions of others. Load it up and a cookie from Google Analytics is placed on your device and tracks what you do during your visit. This tracking can include the pages you read, how long you are on the website, and information about your device—with Google also assigning an identification number to your browser that can be linked to other data. NetDoktor can use this analytics data to see how many readers it has and what they’re interested in—the website picks what it collects. But by using Google Analytics, the tech giant’s traffic monitoring service, all this data passes through Google’s servers and ends up in the United States. For data regulators in Europe, the shipping of personal data across the Atlantic remains problematic. And now a small Austrian medical website finds itself at the center of an almighty tussle between US laws and Europe’s powerful privacy regulations. On December 22, the Austrian data regulator, Datenschutzbehörde, said the use of Google Analytics on NetDoktor breached the European Union’s General Data Protection Regulation (GDPR). The data being sent to the US wasn’t being properly protected against potential access by US intelligence agencies, the regulator said in a decision that was published last week. Days earlier it was revealed that European Parliament’s Covid-19 testing website had also breached GDPR by using cookies from Google Analytics and Stripe, according to a decision from the European Data Protection Supervisor (EDPS). The two cases are the first decisions following a July 2020 ruling that Privacy Shield , the mechanism used by thousands of companies to move data from the EU to the US, was illegal. These landmark cases will likely pile pressure on negotiators in the US and Europe who are trying to replace Privacy Shield with a new way for data to flow between the two. If an agreement takes too long, then similar cases across Europe could have a domino effect, with cloud services from Amazon, Facebook, Google, and Microsoft all potentially being ruled incompatible, one country at a time. “This is an issue that touches all aspects of the economy, all aspects of social life,” says Gabriela Zanfir-Fortuna, vice president of global privacy at Future of Privacy Forum, a nonprofit think tank. NetDoktor isn’t unique—but it is the clearest hint yet that European regulators still don’t like the way US tech companies send data across the Atlantic. Current US surveillance laws, including Section 702 of the Foreign Intelligence Surveillance Act and Executive Order 12333 , don’t protect data held on people living outside the US as well as they do those living inside it. ​​In short: It’s theoretically possible for US surveillance agencies to collect huge amounts of data that’s moved to the country. 
“What they do right now would be a violation of the Fourth Amendment if it’s for US citizens,” claims Max Schrems, honorary chair of legal nonprofit organization noyb, who launched the legal cases that brought down Privacy Shield in 2020 and its predecessor Safe Harbor in October 2015. “Just because people are foreigners it’s not a violation of the US Constitution.” One outcome of the 2020 Privacy Shield ruling is that companies moving data from the EU to the US must make sure there are extra measures in place to protect that information. Now the Austrian Data Protection Authority has determined that the technical measures put in place by Google Analytics—including limiting access to data centers and encrypting data as it moves around the world—don’t do enough to stop it potentially being scooped up by US intelligence agencies. Because Google could access data in plain text, the data wasn’t protected from potential surveillance, the body’s decision says. “This transfer was found to be unlawful because there was no adequate level of protection for the personal data transferred,” says Matthias Schmidl, the deputy head of the Austrian data regulator. He adds that website operators cannot use Google Analytics and be in line with GDPR. At the moment, the decision applies only in Austria and isn’t final. Websites across Europe aren’t suddenly going to stop using Google Analytics. NetDoktor didn’t respond to a request for comment. “While this decision directly affects only one particular publisher and its specific circumstances, it may portend broader challenges,” says Kent Walker, Google’s senior vice president for global affairs and chief legal officer. In a blog post published on January 19 , Walker says that the company believes the technical measures it has put in place protect people’s data, and that this kind of decision could impact how data flows across the “entire European and American business ecosystem.” And this is just the beginning. When noyb filed the complaint against NetDoktor in August 2020, it also filed 100 other cases with other data protection authorities across Europe. “It’s not specific to Google Analytics. It’s basically about outsourcing to US providers in general,” Schrems says. Regulators in 30 European countries are currently investigating the other cases, which cover both the use of Google Analytics and Facebook Connect, the company’s tool to link your account to other sites. Country-specific websites belonging to Airbnb, Sky, Ikea, and The Huffington Post are also subject to complaints. “The majority of these decisions will have the same or similar outcomes,” says Zanfir-Fortuna. This is likely, she says, as noyb used the same legal arguments for all of its cases, and in response data protection regulators formed a task force to discuss the legal issues. “We expect that this is going to mobilize country by country, wherever it drops,” Schrems says. The Dutch data protection authority, Autoriteit Persoonsgegevens, says it is finalizing its investigation and hasn’t ruled out the possibility that the use of Google Analytics in its current form will be banned. 
In Germany, where data issues are regulated by region, Hamburg’s data protection authority received two complaints from noyb and says in one case the website has removed Google Analytics, so it “does not plan to issue any orders or a fine” in this case. It is still investigating the other case. Despite coordination by data regulators, there may be some differences of opinion, says Simon McGarr, director of data compliance for Europe at McGarr Solicitors. “The Austrian position is probably at one end of a spectrum of opinion—and it would probably represent the most radical end,” he says, adding that other data bodies will either endorse, amend, or reject that line of reasoning. Disagreement across the EU’s 27 GDPR enforcers is not uncommon: Last year an Irish Data Protection Authority fine against WhatsApp was increased by €175 million after other regulators disagreed with the decision. McGarr says it’s possible other EU regulators looking at the noyb cases may come to different conclusions based on the facts of each case. A spokesperson for the EDPS says its view is that personal data moving to the US needs to be protected by “effective supplementary measures.” The body is also currently investigating how official EU organizations use Amazon Web Services and Microsoft Office 365. So what happens next? The Austrian decision—and other similar cases currently being considered—highlight the tensions between Europe’s strong privacy laws and what happens to data once it leaves the bloc. Some are optimistic that it could reduce Europe’s reliance on major US technology companies, while others say it highlights the importance of making sure negotiators from both sides strike a new deal that allows data sharing before data flows and economies are disrupted. Companies are likely to look at the decision by the Austrian authority and potentially consider alternatives while they wait for further rulings from other national data bodies, says Guillaume Champeau, director of public affairs at cloud architecture platform Clever Cloud. “It could really help change the business landscape to make competition fairer in Europe,” he adds. Champeau argues there are plenty of European cloud-based analytics businesses that don’t get as much attention as Google Analytics, which is estimated to be used by 28 million websites worldwide. Schrems says that if similar decisions keep dropping in the next year, he expects that some large companies, such as banks, may start to question who should be responsible for their GDPR problems. “If people invest millions of euros into some cloud solution that then turns out to be illegal, there’s going to be huge questions about who pays the bills in the end,” he says. The Austrian regulator did not say if it had fined NetDoktor, but the case is yet to be fully finalized. Wider than this, Schrems says he does not expect Silicon Valley companies to change their technology or attitudes yet. 
“There is simply no willingness by Silicon Valley to adapt to these rules,” he claims. Internal Facebook documents seen by Politico show that the company thinks there aren’t any problems with shipping EU data to the US, and that the company’s lawyers think US laws protect data from the EU as well as if it were staying in the bloc. A Google spokesperson says the company has “no plans to share,” when asked if it intends to change where European data is processed. It’s more likely that EU and US negotiators will broker a new data sharing deal before major technology firms radically change their approach. The EU and US have been discussing what should replace Privacy Shield since it was struck down in July 2020. But these discussions are yet to result in many concrete proposals. Officials have floated greater oversight of US security agencies , including judges who decide whether the collection of EU data is legal. “The easiest way would be to say there needs to be some judicial approval of surveillance, and so on, as it is for American citizens,” Schrems says. Negotiations have intensified in recent months and are a priority for both sides, says a European Commission spokesperson. There are red lines though: It is unlikely the commission would want a Privacy Shield successor to be defeated in court again. “Only an arrangement that is fully compliant with the requirements set by the EU court can deliver the stability and legal certainty stakeholders expect on both sides of the Atlantic,” the commission spokesperson says. US representatives had not replied to a request for comment at the time of publication. Zanfir-Fortuna says the Austrian decision is likely to put more pressure on negotiators but adds it is unlikely there will be any legislative changes in the US. A federal US privacy law appears to be some way off and there may not be much appetite for entirely reforming surveillance laws. Instead, Zanfir-Fortuna says, changes that allow for Privacy Shield to be replaced may come from executive orders that can be passed with less political debate. That position is something Google largely agrees with. Minutes of meetings between Google and the European Commission, released under freedom of information laws , show the company hoped any Privacy Shield successor “would not require Congressional action.” In his blog post, Walker urged EU and US negotiators to “quickly finalize” a successor to Privacy Shield. “The stakes are too high—and international trade between Europe and the US too important to the livelihoods of millions of people—to fail at finding a prompt solution to this imminent problem,” he claims. Ultimately the ongoing legal wranglings and political negotiations may open up Privacy Shield’s replacement to more legal scrutiny—the cycle of agreements being struck down could continue if European organizations don’t consider data moving to the US to be properly protected from surveillance. “It’s very possible that we will see a replacement of the Privacy Shield in the next couple of months,” Zanfir-Fortuna says. “The question then is for how long will a new Privacy Shield ensure certainty for transfers in the absence of reforms in the US?” "
15,183
2,022
"Meta details plans to build the metaverse (and put Siri and Alexa to shame) | VentureBeat"
"https://venturebeat.com/2022/02/24/meta-details-plans-to-build-the-metaverse-and-put-siri-and-alexa-to-shame"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Meta details plans to build the metaverse (and put Siri and Alexa to shame) Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In its deep-dive, two-hour-plus video explanation of how it sees the metaverse operating in the future , Meta offered 2,000-plus online listeners both high-level descriptions and details on several specific areas of this proposed new world. They included how the Facebook-led company is using AI and machine learning in the metaverse for research, product development, running a universal language translator, giving personal assistants human-level intelligence, and establishing responsible use of AI and all the personal data that goes with it. [ Special Report — The Metaverse: How close are we? ] CEO Mark Zuckerberg led off with a 16-minute high-level overview of the day’s session, noting several times that his company is placing high priority on building the Metaverse with a “responsible” approach to data stewardship, something which lost Facebook credibility in past years. Eight presentations followed in the 140-minute session. How Meta plans to beat Siri, Alexa and Google Personalized assistants that understand people and let them control the flow of conversation can make peoples’ lives easier and pave the way to smarter devices — at home or on the go. But today, in 2022, they generally still leave a lot to be desired in terms of understanding requests, speed and accuracy of information. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! “Assistants today — whether via voice or chat— are generally underwhelming,” Meta conversational AI tech lead Alborz Geramifard said. “There are several reasons why, starting with how they are engineered. We’re sharing the challenges developers and engineers face when attempting to build useful assistants and how we can navigate these challenges as we build for the metaverse.” Zuckerberg’s hope for his company is to build a personal assistant that puts Siri, Alexa, and Google to shame. While Meta hasn’t picked out a name for it yet, Zuckerberg said Meta wants its voice assistant to be more intuitive: picking up contextual clues in conversations, along with other data points that it can collect about our bodies, such as where our gaze is going, facial expressions, and hand gestures. 
“To support true world creation and exploration, we need to advance beyond the current state of the art for smart assistants,” Zuckerberg said. “When we have glasses on our faces, that will be the first time an AI system will be able to really see the world from our perspective — see what we see, hear what we hear, and more. So, the ability and expectation we have for AI systems will be much higher.” Meta’s team appears to be up for those challenging tasks. During the presentation, Meta also introduced Project CAIRaoke, which Geramifard described as “breakthrough research that aims to make assistants more helpful and interactions with them more enjoyable. Project CAIRaoke is an AI model created for conversational agents. It works end-to-end, combining the four existing models typically used by today’s assistants into a single, more efficient and flexible model.” Project CAIRaoke is leveraging years of advancement in natural language processing instead of scripted conversations delivered by applications that are deeply contextual and personalized, and the user is in charge of the conversation flow, Geramifard said. “This unified system is better built for natural conversations and can adapt to their normal but complicated flows,” Geramifard said. Meta’s coming universal language translator One of the more substantial news items from the session was the introduction of Meta’s universal language translator, which when it becomes widely used will enable much more than mere understanding between cultures – it will lead to an improved exchange of data, science, and business projects. More than 2,000 languages and dialects are now being used each day somewhere in the world. The universal language translator will enable anybody to translate any language to one or more others, using AI and machine learning in real-time. Presently, only about 100 languages can be translated on a one-to-one basis, with English the most used by far. The idea is to lessen the dominance of “majority languages,” Meta’s Angela Fan said, and to augment the value of lesser-known and used languages. “Over 20% of the world’s population is excluded from accessing information on the internet because they can’t read in a language that works for them,” said Fan , herself a multilingual machine-learning translator specialist from Shanghai who has lived in Canada, the U.S., and France. “While access to technology is advancing, language and machine translation capabilities are limited. To ensure inclusion, we need to support everyone, regardless of the language they speak. Today, we will pull the curtain back on an ambitious body of work that aims to provide people with the opportunity to access anything on the internet in their native language and speak to anyone, regardless of their language preferences.” Some examples of the use of this translator, which is now in Meta development: In a marketplace in Kenya, vendors, artists, and customers from across Africa could negotiate easily in any of the many languages who can pay. An entrepreneur in China could learn from the same lectures making the rounds in other centers of technology. In the future, AR glasses could translate instantly for an engineer talking with local techs in rural India speaking any language, including the dozen spoken there. “Language is not just the sounds that we speak or the words that we write, but a fundamental connection of an individual to their family, their culture, and its history and traditions from generation to generation,” Fan said. 
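Meta has already released research models that translate directly between non-English language pairs rather than pivoting through English, which gives a feel for the many-to-many approach Fan describes. The snippet below uses the openly released M2M-100 checkpoint through the Hugging Face transformers library to translate Hindi directly into French; it is a small, assumed example of the general technique, not the universal translator announced in the session.

    # Many-to-many translation with Meta's openly released M2M-100 model.
    # Requires: pip install transformers sentencepiece torch
    from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

    model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
    tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

    hindi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।"  # "Life is like a box of chocolates."

    # Translate Hindi to French directly, without pivoting through English.
    tokenizer.src_lang = "hi"
    encoded = tokenizer(hindi_text, return_tensors="pt")
    generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("fr"))
    print(tokenizer.batch_decode(generated, skip_special_tokens=True))

Scaling that idea from roughly 100 supported languages to direct translation among thousands, including low-resource ones, is the research problem Fan's team describes.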
“Think about the music that you listen to, the holidays you might celebrate, or the food that you eat. Language serves as a foundation for our identity. Because it’s one of the primary tools that we use to understand and then interact with the world around us.” Meta is working with partners that specialize in speech and audio to help make these advancements in translation technology, Fan said. The translator is estimated to be a few years from widespread operation. Building responsible AI at Meta AI is a core component of Meta’s systems that does everything from ranking posts in Facebook’s News Feed to tackling hate speech and misinformation. But, as with other emerging technologies, AI also raises many hard questions around issues such as privacy, fairness, accountability, and transparency. Facebook supports more than 2.5 billion users worldwide and obviously cannot control the input of so many users. However, it has been criticized for deficiencies in this department over the last five years or so and is eager to improve its reputation there. Seems like a never-ending challenge, but Meta staff aren’t shrinking from it. “Our commitment is to building AI responsibly and using a cross-disciplinary team to support these efforts; we have tangible examples of how we are developing and testing approaches to help ensure that our machine learning (ML) systems are designed and used responsibly,” Facebook AI Senior Program Manager Jacqueline Pan said. Meta builds and tests approaches to help ensure that its machine-learning systems are designed and used responsibly, Pan said. “Our mission is to ensure that AI and ML benefits people in society. Now this requires deep collaboration both internally and externally across a diverse set of teams, including parts of platform groups, policy and legal experts. Support from across the highest levels of mental leadership, and researchers who are really steeped in the larger community. We also develop our practices in regular consultation and collaboration with outside experts and regulators. And further we partner with impacted communities external experts in academic institutions, and industry stakeholders to understand the broader community’s expectations when it comes to AI,” Pan said. An example of Facebook’s work in AI fairness this year, Pan said, was when the AI team collaborated with another internal team to release “casual conversation” datasets. “We built and released casual conversations in order to address the need for more high-quality datasets designed to help evaluate potential algorithmic biases in complex real-world AI systems,” Pan said. The dataset consisted of more than 45,000 videos of paid participants having non-scripted conversations; participants disclosed their age and gender, which allowed this dataset to be a relatively unbiased collection of age and gender samples. Additionally, the team was able to provide labels on skin tone and ambient lighting conditions. This data set is designed to help researchers evaluate their computer vision and audio models for accuracy across these dimensions, Pan said. “With this data set, we hope to unlock more fairness measurements and research and bring the field one step closer to building fairer, more inclusive technologies,” Pan said. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
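In practice, the fairness measurement the Casual Conversations dataset enables comes down to slicing evaluation results by the self-reported attributes and comparing accuracy across the slices. A minimal sketch of that per-subgroup check follows; the column names and toy data are hypothetical, not Meta's evaluation code.

    # Minimal per-subgroup accuracy check of the kind fairness datasets enable.
    # Column names and toy data are hypothetical; this is not Meta's code.
    import pandas as pd

    results = pd.DataFrame({
        "skin_tone": ["type_1", "type_1", "type_3", "type_3", "type_6", "type_6"],
        "age_band":  ["18-30", "31-45", "18-30", "31-45", "18-30", "31-45"],
        "correct":   [1, 1, 1, 0, 0, 1],  # 1 = model output matched the label
    })

    # Accuracy per skin-tone group, and the spread between best and worst groups.
    by_group = results.groupby("skin_tone")["correct"].mean()
    print(by_group)
    print("largest accuracy gap:", by_group.max() - by_group.min())

    # The same cut can be repeated for age, gender, or ambient-lighting labels.
    print(results.groupby(["skin_tone", "age_band"])["correct"].mean())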
"
15,184
2,015
"Facebook will use satellites to bring broadband to large parts of Africa | VentureBeat"
"https://venturebeat.com/2015/10/05/facebook-will-use-satellites-to-bring-broadband-to-large-parts-of-africa"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Facebook will use satellites to bring broadband to large parts of Africa Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. In its latest effort to open up Internet access to developing markets, Facebook has partnered with French satellite company Eutelsat Communications. The ensuing initiative, which spans a number of years, is scheduled for launch in the latter half of 2016, and will “leverage satellite technologies to get more Africans online,” according to a press release. It will use the upcoming AMOS-6 satellite, a $200 million, 5-ton satellite built by Israel Aerospace Industries. Facebook and Eutelsat say they will create a system specifically aimed at bringing connectivity to large swaths of Sub-Saharan Africa, and that the system will be “optimised for community and Direct-to-User Internet access using affordable, off-the-shelf customer equipment.” For this latest project, Eutelsat is setting up a new company, based in London, to oversee the African satellite broadband rollout. Facebook says it will work with “local partners” across Africa to help deliver services, using both satellite and terrestrial capacity. “Facebook’s mission is to connect the world and we believe that satellites will play an important role in addressing the significant barriers that exist in connecting the people of Africa,” said Chris Daniels, VP of Facebook’s Internet.org program. “We are looking forward to partnering with Eutelsat on this project and investigating new ways to use satellites to connect people in the most remote areas of the world more efficiently.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Founded in 1977, Paris-based Eutelsat provides satellite capacity for many parts of Europe, Africa, the Middle East, Asia, and the Americas, and is used by thousands of TV, radio, and cable networks, powered by Eutelsat’s access to 39 satellites that orbit the Earth. Although it now claims a whopping 1.5 billion monthly active users , Facebook has been looking to emerging markets to sustain its growth — markets that have hitherto been stymied by limited Internet access. Back in 2013 , Facebook launched Internet.org , a collaborative project to help “connect the next five billion.” It went to market with some notable mobile-focused companies onboard, including Samsung, Microsoft, and Qualcomm. 
A number of projects have emerged since then, including an India launch back in February that promised to bring Internet access to millions of new users. However, Internet.org has faced increasing criticism for only permitting users access to select websites free of charge, including Facebook and a curated selection of local websites. The core complaint has centered around the question of net neutrality, and whether Facebook can dictate what content is accessible through its free app and mobile website. But a couple of weeks back, Facebook renamed the Internet.org apps and website as “Free Basics by Facebook,” and opened up to more developers and web services. The broader Internet.org concept remains, however, and this will continue to provide a platform for new projects such as Aquila, a massive drone designed to deliver Internet access to developing countries. Aquila is a solar-powered aircraft that creates a 50-kilometer communications radius for up to 90 days. It’s similar to Google’s own experimental Project Loon , which promises Internet delivery by hot air balloons. It looks like the battle of the tech giants to bring Internet access — and their own associated services — to the next few billion people is starting to heat up. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,185
2,016
"Facebook starts Telecom Infra Project with Intel, Nokia, Deutsche Telekom, EE, Equinix, Globe, HCL, others | VentureBeat"
"https://venturebeat.com/2016/02/21/facebook-starts-telecom-infra-project-with-intel-nokia-deutsche-telekom-ee-equinix-globe-hcl-others"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Facebook starts Telecom Infra Project with Intel, Nokia, Deutsche Telekom, EE, Equinix, Globe, HCL, others Share on Facebook Share on X Share on LinkedIn A diagram showing the parts of the Facebook-led Telecom Infra Project (TIP) alongside Facebook's Connectivity Labs and the Facebook-initiated Open Compute Project (OCP). Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. At the 2016 Mobile World Congress conference in Barcelona today, Facebook announced the formation of the Telecom Infra Project (TIP) , an effort that resembles the Open Compute Project (OCP) that Facebook kicked off five years ago, but which is narrowly focused on the development of new telecommunications networking hardware. This isn’t just another internal effort on the part of Facebook. As it did with the Open Compute Project, Facebook is announcing TIP with a bunch of partners, including telcos like Deutsche Telekom of Germany, BT subsidiary EE of the United Kingdom, Globe Telecom of the Philippines, and SK Telecom of South Korea. “Every day, more people and devices around the world are coming online, and it’s becoming easier to share data-intensive experiences like video and virtual reality,” Jay Parikh, Facebook’s global head of engineering and infrastructure wrote in a blog post. “Scaling traditional telecom infrastructure to meet this global data challenge is not moving as fast as people need. We know there isn’t a sole solution for this, and no single company can tackle the problem alone.” Some participants in the new program, like Intel, Nokia, and Facebook itself, will share designs, and the telco participants can go out and use technology based on those designs. At the start, the group’s work will focus on access, backhaul, and core/management, Parikh wrote. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The creation of new hardware could help telcos connect more people with greater efficiency and provide better connections to those who are already connected. That’s just good business for the telcos. And of course with more people connecting and getting access to more content, it’s more likely that Facebook will pick up millions more users and retain existing ones. Facebook is already engaged in efforts to bring Internet to more people around the globe, with its Aquila drone and its partnership with French satellite company Eutelsat Communications. 
It also has the controversial Free Basics app for delivering Facebook and other basic Internet services to developing markets. But the infrastructure that telcos use to provide their services is different from those that web services and other general-purpose applications have traditionally relied upon. The OCP has brought openness to cutting-edge designs for multiple generations of servers, storage, and networking equipment that companies other than Facebook have experimented with or even deployed to improve their own operations. The TIP won’t do exactly the same thing, but it could have just as much of an effect around the world, and the work could end up making a difference to the telcos’ hundreds of millions of end users who pay for Internet access month after month. Here’s a list of TIP’s initial members: Africa Mobile Network Amarisoft Aricent ASOCS Athonet AW2S BaiCells Bandwidth Deutsche Telekom EE Equinix Facebook Globe Harman HCL iDirect Intel IP Access Lemko Nexius Nokia Quortus Radisys SDS SK Telecom Ss7ware Star Solutions Sysmocom Vanu VNL For more on TIP, check out the program’s new website. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,186
2,017
"Facebook's huge drone completes second test flight without crashing | VentureBeat"
"https://venturebeat.com/2017/06/29/facebooks-huge-drone-completes-second-test-flight-without-crashing"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Facebook’s huge drone completes second test flight without crashing Share on Facebook Share on X Share on LinkedIn Facebook Aquila drone in flight. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Facebook’s big, solar-powered drone has taken flight once again. The social networking giant said Thursday that it’s Aquila drone completed its second test flight on May 22 at the Yuma Proving Ground, a U.S. military facility, in Arizona. The drone, which has the wingspan of a Boeing 737, flew for 1 hour and 46 minutes and “landed perfectly on our prepared landing site,” wrote Martin Luis Gomez, Facebook’s director of aeronautical platforms. The fact that the drone landed correctly is noteworthy considering that last summer’s test flight had a rocky ending, with strong winds and turbulence leading to technical errors in its autopilot system and the drone’s right wing snapping off. Last summer, Facebook said the first test flight was a success and did not reference the crash. A few months later, however, Bloomberg News reported that the National Transportation Safety Board was investigating the accident , which ultimately led to the NTSB releasing a public report of the crash. In case anyone had doubts about the latest test flight, on Thursday Facebook also showed a short video of the drone landing without smashing in an open field. Gomez outlined a couple ways ensured that Aquila’s second flight would be smoother than the last, including outfitting the drone with more sensors to gather additional aerial data, tweaking the autopilot software, and “installing a horizontal propeller stopping mechanism to support a successful landing.” The purpose of this flight appeared to be whether Facebook’s new safety and anti-crashing techniques would work. Facebook debuted its drone project in 2015, and pitched it as a way for the company to eventually beam the Internet down to areas in the world where people lack web-connectivity. The ultimate goal is for Facebook to fly large fleets of these drones that will hover in the air for days at altitudes of 60,000 to 90,000 feet. From there, the drones will connect to one another and distribute the web down to the earth like a sort of makeshift data center in the high skies. Aquila reaching an altitude of 3,000 feet for its second test flight is a sign that Facebook is likely years away from reaching its intended goal. But the company has competition. 
Chinese media reported in June that the China Academy of Aerospace Aerodynamics, a China-based research institution, successfully flew a solar-powered drone over 65,000 feet in the air. The China Daily report did not say how long the flight was, just that the drone “took off in the morning and flew back to the airport late at night.” "
15,187
2,018
"Facebook is building a 1,553-mile subsea cable to boost internet speeds in Argentina | VentureBeat"
"https://venturebeat.com/2018/09/06/facebook-is-building-a-1553-mile-subsea-cable-to-boost-internet-speeds-in-argentina"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Facebook is building a 1,553-mile subsea cable to boost internet speeds in Argentina Share on Facebook Share on X Share on LinkedIn 3D rendering of subsea cable installation Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Facebook is adding another subsea cable system to its roster — partnering with telecom infrastructure operator GlobeNet to build a new link between Brazil and Argentina. GlobeNet, which offers a range of infrastructure services across the Americas — underpinned by more than 23,500 kilometers (14,200 miles) of submarine cabling — first announced its latest project back in May; however, Facebook’s involvement was not known at the time. The 2,500 km (1,553 mile) cable, which will be known as “Malbec,” will connect Argentina‘s capital of Buenos Aires with São Paulo and Rio de Janeiro in Brazil, with an additional branch reaching the Brazilian city of Porto Alegre. The cable will be co-owned by GlobeNet and Facebook. On the surface, Malbec is designed to better connect Brazil with Argentina. Within the context of GlobeNet’s broader subsea cable infrastructure, however, it will serve to improve connectivity between the United States and South America’s southern segment, including Argentina. Most of Facebook’s servers are based in the U.S., so this is a key step for the company in terms of improving speed and data transfers between its core servers and Argentina. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “The new infrastructure will provide seamless connectivity between the Southern Cone of South America and the United States,” GlobeNet said in a press release. Above: GlobeNet’s cable coverage Facebook is no stranger to submarine cable investments. Back in 2012, news emerged that the social network giant had invested in the Asia Pacific Gateway underwater internet cable, connecting Malaysia, South Korea, and Japan, among other countries in the region. Last year, Facebook and Microsoft completed their 4,000-mile transatlantic internet cable Marea , connecting North America with mainland Europe, while in January news emerged that Facebook was also investing in an 8,000-mile cable stretching from the U.S. to Hong Kong. Infrastructure is a key component of Facebook’s business, as the company needs to deliver lag-free services, including messaging, video-streaming, and even virtual realty (VR). In addition to its U.S. 
servers, Facebook has datacenters in Sweden and Ireland, and a Danish hub is scheduled to open in 2020, followed by one in Singapore in 2022 — which was announced just today. The new Malbec cable is expected to be operational by 2020, and the companies say it will “double the current international capacity” delivered to Argentina. “Argentina deserves state-of-the-art infrastructure to satisfy the pressing demands of the years ahead,” said GlobeNet CEO Eduardo Falzoni. “This project is a testament to our capabilities, expertise, and commitment to the region where we have been operating for 15 years.” "
15,188
2,022
"Nvidia cyberattack not related to Russia's invasion of Ukraine, report says | VentureBeat"
"https://venturebeat.com/2022/02/25/nvidia-cyberattack-not-related-to-russias-invasion-of-ukraine-report-says"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nvidia cyberattack not related to Russia’s invasion of Ukraine, report says Share on Facebook Share on X Share on LinkedIn HANGZHOU, CHINA - OCTOBER 20, 2021 - Photo taken on Oct. 20, 2021 shows the booth of Nvidia at the 2021 Hangzhou Computing Conference in Hangzhou, east China's Zhejiang Province. Nvidia is abandoning its plan to buy Arm from SoftBank Group due to regulatory objections, ending what would have been the biggest deal in the chip industry. (Photo credit should read Costfoto/Future Publishing via Getty Images) Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Nvidia has yet to disclose further details on the cyber “incident” that it has been investigating — but a report Friday said the apparent cyberattack was not tied to the crisis in Ukraine that’s been brought on by Russia. The unprovoked invasion of Ukraine by its neighbor Russia this week prompted increased sanctions from the U.S. and other western nations on Thursday. Russian President Vladimir Putin has made repeated threats to take actions against the west if its nations were to “interfere” with Russia’s campaign against Ukraine — something that many believe could include deployment of cyberattacks, given the Putin regime’s frequent use of this tactic. However, according to a Bloomberg report Friday, the cyberattack against Nvidia was not related to Russia’s war against Ukraine. The breach was “not connected to the crisis in Ukraine,” the report said, citing a source familiar with the matter. When reached Friday, Nvidia said it could not confirm the report and did not have any additional information to add beyond its prior statement. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The Bloomberg report also said that the incident “appears” to have involved a ransomware attack, and suggests the attack was “relatively minor.” In Nvidia’s statement earlier Friday, a spokesperson said that the company was “investigating an incident” and was “still working to evaluate the nature and scope of the event.” “Our business and commercial activities continue uninterrupted,” the Nvidia spokesperson said in the statement. 
Outages reported The statement came in response to a Friday report in The Telegraph that Nvidia, one of the largest producers of graphics chips, has been investigating “a potential cyber attack that has taken parts of its business offline for two days.” Quoting an unnamed “insider” at Nvidia, The Telegraph reported that the potential cyberattack had “completely compromised” internal systems at the company — “although some email services were working on Friday,” the report said. The potential “malicious network intrusion” has caused outages for the company’s email systems and developer tools, the report says. In situations such as this, cyber defenders shouldn’t “immediately assume” that attacks are retaliation for Western sanctions against Russia, said Rick Holland, CISO at Digital Shadows. “This response is possible, but it needs to be investigated and validated,” Holland said. “Ransomware crews have been extorting victims for years and will continue to do so.” Retaliation threatened Still, in his addresses in recent days, Putin has made it clear that the entire Western world is his enemy and all options are on the table, according to Eric Byres, a cybersecurity veteran who is now CTO of aDolus Technology. In his speech on Thursday, Putin warned “those who may be tempted to interfere in these developments” that “Russia will respond immediately, and the consequences will be such as you have never seen in your entire history.” Russian cyber offensives have also been playing a role in the country’s build-up to its assault on Ukraine this week. Authorities in the U.S. and U.K. blamed Russia for last week’s massive distributed denial-of-service (DDoS) attacks in Ukraine. Fresh DDoS attacks, as well as destructive cyberattacks that involved wiper malware, struck Ukraine on Wednesday just ahead of the invasion. Meanwhile, Russia’s attacks on Ukraine have led hacking groups worldwide to increase their activities — in numerous instances to support one of the two sides, in what some are calling a “cyber proxy war.” "
15,189
2,021
"Cortical.io's AI makes bulk contract analysis faster and more accurate | VentureBeat"
"https://venturebeat.com/2021/01/25/cortical-ios-ai-makes-bulk-contract-analysis-faster-and-more-accurate"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cortical.io’s AI makes bulk contract analysis faster and more accurate Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In the past, reviewing large stacks of documents was a mind-numbing chore for junior attorneys — a process that could literally consume months of multiple employees’ lives. But innovations in artificial intelligence have enabled Cortical.io to automate the document review process with software, and the company today announced a new version of Contract Intelligence with accuracy and speed improvements that promise to reduce data analysis time by as much as 80%. Using large quantities of documents as inputs and a semantic folding theory-based natural language understanding system to parse content, Contract Intelligence can transform structured agreements and unstructured documents into comprehensible data. The software is able to search, extract, classify, and compare data from contracts, policies, financial reports, and other documents, including the ability to understand the meanings of concepts and whole sentences — more than just keywords, which might previously have been extracted and searchable using basic optical character recognition. Cortical.io’s update is significant for technical decision-makers because it demonstrates how AI is enabling computers to cut through formerly time-consuming human data processing tasks like a hot knife through butter — including formerly complex legal tasks. The new software speeds up the data extraction process, renders documents in higher fidelity, and enables advanced searches that can look for ranges of dates or numbers. Contract Intelligence’s date range searches can quickly identify coverage gaps in insurance policies or the application dates of key contract provisions, concepts that would have previously required human analysis. Although it might go without saying in some industries, the labor involved in reviewing documents is a prime example of the saying “time is money,” as lawyers tasked with going through contracts can each bill hundreds of dollars per hour of review time, making computerized review potentially very valuable. Cortical.io’s solution promises “quick training” — it can get moving with only 50 sample documents — and doesn’t require AI expertise, instead relying on subject matter experts who use the app to fine-tune the results over time. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 
Overall, the system promises to reduce contract review time by four-fifths, a potentially massive savings for what might otherwise have been multi-day or multi-month document analysis tasks. It also promises “human-level accuracy.” The system can automatically output processed data to a contract management system and/or business intelligence tools, depending on the client’s needs. Cortical.io prices Contract Intelligence based on the annual volume of documents, making the software a useful long-term solution for businesses that frequently review large collections of contracts, including those in the insurance industry. A demonstration video is available for companies that might be interested in learning more about the technology, along with a free online demo for potential customers. "
15,190
2,021
"Google launches AI-powered document processing services in general availability | VentureBeat"
"https://venturebeat.com/2021/04/21/google-launches-ai-powered-document-processing-services-in-general-availability"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google launches AI-powered document processing services in general availability Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Google today announced that several of its cloud-based, AI-powered document processing products have become generally available after launching in preview last year. DocAI platform , Lending DocAI , and Procurement DocAI, which have been piloted by thousands of businesses to date, are now open to all customers and include new features and resources. Companies spend an average of $20 to file and store a single document, by some estimates , and only 18% of companies consider themselves paperless. An IDC report revealed that document-related challenges account for a 21.3% productivity loss, and U.S. companies waste a collective $8 billion annually managing paperwork. How it works With the launch in general availability, Lending DocAI, which processes loan applicants’ asset documents, now offers a set of specialized AI models for paystubs and bank statements. The service also now benefits from DocAI platform’s Human-in-the-Loop AI capability, which provides a workflow to manage human data review tasks. As Google explains, Human-in-the-Loop AI enables human reviewers to verify data captured by Lending DocAI, Procurement DocAI, and other offerings in DocAI platform. The system shows a percentage score of how “sure” it is that the AI ingested the document correctly, and it’s customizable, with the flexibility to set different thresholds and assign groups of reviewers to stages of a workflow. Developers can choose reviewers to assign to tasks either from within their own company or from partner organizations. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Lending institutions like banks and brokers process hundreds of pages of paperwork for every loan. It’s a heavily manual process that adds thousands of dollars to the cost of issuing a loan. While hardly flawless , automated processing can give customers a degree of confidence they can afford the property they’re interested in, and some lenders are able to complete the ordeal within minutes, as opposed to the weeks it once took. Procurement DocAI Procurement DocAI , which performs document processing for invoices, receipts, and more, has gained an AI parser for electric, water, and other utility bills. 
The latest release taps Google’s Knowledge Graph to validate information, a system that understands over 500 facts about 5 billion entities from the web, as well as from open and licensed databases. Google claims that Knowledge Graph can help increase document parsing accuracy by identifying, for example, that “Angelina” correlates to “Angelina Paris,” a bakery identified using geodata. Google also today announced a partnership with mortgage servicing firm Mr. Cooper, following a collaboration with home loan company Roostify last October. Google says Mr. Cooper will offer its customers greater automation and workflow tools by connecting them with the DocAI platform. “Over the last few years, we have made substantial investments in our proprietary servicing technology and core mortgage platform that have revolutionized the customer experience while providing dramatic efficiencies in operating cost. By joining forces with Google … we are able to build on those advances and help make these technologies available for the mortgage industry to deploy through Google Cloud,” Mr. Cooper CEO Jay Bray said in a press release. The general release of the DocAI platform comes after Google launched PPP Lending AI , an effort to help lenders expedite the processing of applications for the since-exhausted U.S. Small Business Administration’s (SBA) Paycheck Protection Program. As Google explained at the time in a whitepaper , AI can automate the handling of volumes of loan applications by identifying patterns that would take a human worker longer to spot. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,191
2,022
"What is intelligent document processing? Why IDP matters in the enterprise | VentureBeat"
"https://venturebeat.com/2022/02/20/what-is-intelligent-document-processing-why-idp-matters-in-the-enterprise"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What is intelligent document processing? Why IDP matters in the enterprise Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Paperwork is the lifeblood of many organizations. According to one source, 15% of a company’s revenue is spent creating, managing and distributing paper documents. But documents aren’t just costly — they’re time-wasting and error-prone. More than nine in 10 employees responding to a 2021 ABBY survey said that they waste up to eight hours each week looking through documents to find data, and using traditional method to create a new document takes on average three hours and incurs six errors in punctuation, spellings, omissions or printing. Intelligent document processing (IDP) is touted as a solution to the problem of file management and orchestration. IDP combines technologies like computer vision, optical character recognition (OCR), machine learning and natural language processing to digitize paper and electronic documents and extract data from then — as well as analyze them. For example, IDP can validate information in files like invoices by cross-referencing them with databases, lexicons and other digital data sources. The technology can also sort documents into different storage buckets to keep them up to date and better organized. Because of IDP’s potential to reduce costs and free up employees for more meaningful work, interest in it is on the rise. According to KBV research, the market for IDP solutions could reach $4.1 billion by 2027, rising at a compound annual growth rate of 29.2% from 2021. Processing documents with AI Paper documents abound in every industry and every company, no matter how fervently the industry or company has embraced digitization. Whether because of compliance, governance, or organizational reasons, enterprises use files for things like order tracking, records, purchase orders, statements, maintenance logs, employee onboarding, claims, proof of delivery and more. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! A 2016 Wakefield research study shows that 73% of the “owners and decision-makers” at companies with fewer than 500 employees print at least four times a day. 
As Randy Dazo, group director at InfoTrends, explained to CIO in a recent piece, employees use printing and scanning both for ad hoc businesses processes (for example, because it’s more “in the moment” to scan a receipt) and for “transactional” processes (such as part of a daily workflow in human resources, accounting and legal departments). Adopting digitization alone can’t solve every processing bottleneck. In a 2021 study published by PandaDoc, over 90% of companies using digital files still found business proposals and HR documents difficult to create. The answer — or at least part of the answer — lies in IDP. IDP automates processing data contained in documents, which entails understanding what the document is about and the information it contains, extracting that information and sending it to the right place. IDP platforms begin with capturing data, often from several document types. The next step is recognition and classification of elements like fields in forms, the names of customers and businesses, phone numbers and signatures. Lastly, IDP platform validates and verifies the data — either through rules, humans in the loop or both — before integrating it into a target system, such as customer relationship management or enterprise resource planning software. Two ways IDP recognize data in documents are OCR and handwritten-text recognition. Technologies that have been around for decades, OCR and handwritten text recognition attempt to capture major features in text, glyphs and images, like global features that describe the text as a whole and local features that describe individual parts of the text (like symmetry in the letters). When it comes to recognizing images or the content within images, computer vision comes into play. Computer vision algorithms are “trained” to recognize patterns by “looking” at collections of data and learning, over time, the relationships between pieces of data. For example, a basic computer vision algorithm can learn to distinguish cats from dogs by ingesting large databases of cat and dog pictures captioned as “cat” and dog,” respectively. OCR, handwritten text recognition, and computer vision aren’t flawless. In particular, computer vision is susceptible to biases that can affect its accuracy. But the relative predictability of documents (e.g., invoices and barcodes follow a certain format) enables them to perform well in IDP. Other algorithms handle post-processing steps like brightening and removing artifacts such as ink blots and stains from files. As for text understanding, it typically falls under the purview of natural language processing (NLP). Like computer vision systems, NLP systems grow in their understanding of text by looking at many examples. Examples come in the form of documents within training datasets, which contain terabytes to petabytes of data scraped from social media, Wikipedia, books, software hosting platforms like GitHub and other sources on the public web. NLP-driven document processing can let employees search for key text within documents, or highlight trends and changes in documents over time. Depending on how the technology is implemented, an IDP platform might cluster onboarding forms together in a folder or automatically paste salary information into relevant tax PDFs. The final stages of IDP can involve robotic process automation (RPA), a technology that automates tasks traditionally done by a human using software robots that interact with enterprise systems. 
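Strung together, the capture, recognition, and validation stages described above can be approximated in a few lines using the open-source Tesseract OCR engine. The sketch below is illustrative only: the file name, the extraction pattern, and the validation rule are assumptions, not any vendor's IDP product.

    # Rough sketch of an IDP-style flow: capture (OCR), recognize a field,
    # validate it, then route the result. Illustration only.
    # Requires the Tesseract binary plus: pip install pytesseract pillow
    import re
    import pytesseract
    from PIL import Image

    # 1. Capture: OCR a scanned invoice into plain text.
    text = pytesseract.image_to_string(Image.open("invoice.png"))  # hypothetical file

    # 2. Recognize: pull the invoice total with a simple pattern.
    match = re.search(r"Total[:\s]*\$?([\d,]+\.\d{2})", text)
    total = float(match.group(1).replace(",", "")) if match else None

    # 3. Validate: a human reviews anything missing or implausibly large.
    needs_review = total is None or total > 100_000

    # 4. Integrate: hand off to the target system or the review queue.
    destination = "review_queue" if needs_review else "erp_system"
    print({"total": total, "route": destination})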
These AI-powered robots can handle a vast number of tasks, from moving files database-to-database to copying text from a document, pasting it into an email and sending the message. With RPA, a company could, for example, automate report creation by having a software robot pull from different processed documents. Or they could eliminate duplicate entries in spreadsheets across various file formats and programs. Growing IDP platforms Lured by the enormous addressable market, an expanding number of vendors are offering IDP solutions. While not all take the same approach, they share the goal of abstracting away filing that’d otherwise be performed by a human. For example, Rossum provides an IDP platform that extracts data while making corrections through what it calls “spatial OCR (optical character recognition).” The platform essentially learns to recognize different structures and patterns of different documents, such as the fact that an invoice number might be on the top left-hand side in one invoice but somewhere else in another. Another IDP vendor , Zuva, focuses on contract and document review, offering trained models out of the box that can extract data points and present them in question-answer form. M-Files applies algorithms to the metadata of documents to create a structure, unifying categories and keywords used within a company. Meanwhile, Indico ingests documents and performs post-processing with models that can classify and compare text as well as detect sentiment and phrases. Among the tech giants, Microsoft is using IDP to extract knowledge from paying organizations’ emails, messages and documents into a knowledge base. Amazon Web Services’ Textract service can recognize scans, PDFs, and photos and feed any extracted data into other systems. For its part, Google hosts DocAI , a collection of AI-powered document parsers and tools available via an API. How IDP Makes a difference Forty-two percent of knowledge workers say that paper-based workflows make their daily tasks less efficient, costlier, and less productive, according to IDC. And Foxit Software reports that more than two-thirds of companies admit that their need for paperless office processes increased during the pandemic. The benefits of IDP can’t be overstated. But implementing it isn’t always easy. As KPMG analysts point out in a report , companies run the risk of not defining a clear strategy or actionable business goal, failing to keep humans in the loop and misjudging the technological possibilities of IDP. Enterprises that operate in highly regulated industries might also have to take additional security steps or precautions when using IDP platforms. Still, the technology promises to transform the way companies do business — importantly while saving money in the process. “Semistructured and unstructured documents can now be automated faster and with greater precision, leading to more satisfied customers,” Deloitte’s Lewis Walker writes. “As business leaders scale to gain competitive advantage in an automation-first era, they’ll need to unlock higher value opportunities by processing documents more efficiently, and turning that information into deeper insights faster than ever.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
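The duplicate-entry clean-up mentioned above is the sort of task an RPA bot typically scripts against everyday data tooling rather than bespoke AI. A small sketch with pandas follows; the file names and the matching column are assumptions.

    # Sketch of the duplicate-removal task mentioned above, as an RPA bot
    # might script it. File names and the key column are assumptions.
    # Requires: pip install pandas openpyxl
    import pandas as pd

    frames = [
        pd.read_csv("contacts_export.csv"),      # hypothetical input files
        pd.read_excel("contacts_archive.xlsx"),
    ]
    combined = pd.concat(frames, ignore_index=True)

    # Keep the first occurrence of each customer record, matched on email.
    deduplicated = combined.drop_duplicates(subset=["email"], keep="first")
    deduplicated.to_csv("contacts_clean.csv", index=False)
    print(f"removed {len(combined) - len(deduplicated)} duplicate rows")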
"
15,192
2,021
"Report: 76% of manufacturers plan to adopt private 5G by 2024 | VentureBeat"
"https://venturebeat.com/2021/10/15/report-76-of-manufacturers-plan-to-adopt-private-5g-by-2024"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: 76% of manufacturers plan to adopt private 5G by 2024 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. According to a report by Accedian , 76% of manufacturers plan to adopt private 5G by 2024. The potential for 5G to completely transform industries is immeasurable. By promising decreased latency, enhanced security, and the ability to support artificial intelligence (AI) and augmented reality (AR), sectors like health care, gaming, and emergency services will be forever altered. But those opportunities are still far in the future. Most industries and the majority of enterprises aren’t yet equipped with the right infrastructure to support 5G. The major exception is manufacturing. As an industry almost entirely dependent on the successful connections between machines, manufacturing is ripe to adopt 5G. But that’s only true if their IT teams are equipped with the right knowledge, skills, tools, and the ecosystem of service providers and technology vendors that can provide the tools and support to make individual visions of Industry 4.0 a reality. To understand manufacturing teams’ readiness for 5G, Accedian, a provider of performance analytics, cybersecurity threat detection, and end-user experience solutions, partnered with Analysys Mason on a new study of 5G adoption trends. The result was overwhelmingly positive: More than three-fourths (76%) of respondents are planning to adopt private 5G networks by 2024, and list the benefits of increased network security (63%), network performance (49%), and application performance (45%) as a few of the key reasons why. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! But respondents also pointed out a few challenges that have impeded their adoption process thus far, such as the increased complexity of managing the network (43%). This is where service providers have a part to play, particularly in moving from being solely a vendor to becoming more of a strategic partner throughout the 5G adoption process. It’s with this collaborative mindset that the promise of 5G in manufacturing will become a reality, and usher in Industry 4.0. Analysys Mason, a management consultancy focused on technology, media, and telecommunications (TMT), surveyed 200 respondents from Germany, Japan, United Kingdom, and the United States across six verticals to gain insight into private 5G adoption. Read the full report by Accedian. 
"
15,193
2,022
"How HPE's private 5G creates more intelligent enterprise networks | VentureBeat"
"https://venturebeat.com/2022/02/24/how-hpes-private-5g-creates-more-intelligent-enterprise-networks"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How HPE’s private 5G creates more intelligent enterprise networks Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. HPE’s launch of private 5G that includes seamless integration to private Wi-Fi networks is what enterprise networks need to increase their intelligence, speed and scalability, the company says.. Enterprises also need to achieve greater 5G-to-Wi-Fi 6 reliability for all devices to get the most value from new enterprise and industrial applications being deployed from edge to cloud. HPE’s goal in launching its private 5G is to combine the advantage of Wi-Fi 6, including its cost-effective indoor connectivity, with the wide coverage, high mobility and high-reliability applications on its own. The private 5G solution is pre-integrated with radio access capabilities from leading vendors, enabling their private 5G solution can be deployed quickly and at scale across enterprises. HPE’s 5G Core Stack is available as a GreenLake infrastructure-as-a-service solution, in response to enterprises’ interest in this deployment approach, according to Tom Craig, VP and general manager of HPE’s Communications Technology Group, during a recent press briefing. One of the main design goals of providing a private 5G solution is to deliver carrier-grade quality of service. HPE relies on Aruba Air Pass , which automates 5G and Wi-Fi roaming, to accomplish that goal, working together with Aruba Wi-Fi networks to provide a consistently high-quality network experience. Enterprises are wary of being locked into private 5G architectures, which is why HPE designed in O-RAN compliance into the private 5G network they are launching today. Where enterprises need private 5G now HPE’s prioritizing Wi-Fi 6 and private 5G connectivity performance can close large gaps enterprise networks are struggling with today. The most urgent challenges enterprises face are ensuring the network gaps outside their organizations stay secure and reliable. HPE cites use cases including real-time applications, video conferencing, Wi-Fi calling in enterprises, and virtual private 5G networks. A second core market provides ruggedized, hardened 5G-in-a-box private 5G networks for military operations and remote oil and gas production centers. Private 5G with Wi-Fi 6 integration can help close the following major gaps enterprise networks are dealing with today, enabling them to deliver greater intelligence in the process. 
The most urgent needs lie beyond organizations' own walls, across supply chains, distribution channels, and service centers.

Real-time monitoring of remote production centers for greater productivity and safety. Private 5G networks with Wi-Fi 6 integration can deliver Condition-Based Maintenance (CBM) data and, when combined with predictive analytics applications, predict when a given machine or asset will need to be repaired (a minimal sketch of this kind of prediction appears at the end of this article). Using private 5G networks as the basis of real-time monitoring to collect condition-based data from machinery and remote equipment can reduce maintenance and operating expenses while prolonging a machine's useful life.

Move more digital twin pilots into production across all production centers. HPE currently has an automotive manufacturer in Germany using its private 5G network to accomplish digital twin strategies across the automotive manufacturing process. Research firm Gartner defines a digital twin as a "digital representation of a real-world entity or system," which comes in the form of a "software object or model that mirrors a unique physical object, process, organization, person or other abstraction." The auto manufacturer uses the 5G network to streamline its operational technology (OT) environment. HPE says it is achieving continuous, bidirectional data flows between OT and IT systems with low-latency connectivity. Richard Band, head of mobile core and 5G, communication and media solutions at Hewlett Packard Enterprise, says, "What is resonating [with enterprises] is the ability to separate IT and OT networks. In addition, there is the need to have slicing capabilities. They don't necessarily want a very advanced ability to customize those slices, but they want something simple to deploy them." Manufacturers struggle with keeping IT and OT optimized in digital twin configurations, making HPE's private 5G network a good fit for this specific requirement.

Achieving real-time production and process monitoring across all manufacturing centers. Given the continual unpredictability of supply chains today, manufacturers need greater real-time visibility and control across all production centers. Connecting what's going on within each center to its implications for financial performance is a gap that private 5G and Wi-Fi 6 integration can help close. Add to that the fact that manufacturers are continually faced with the challenges of improving time-to-market while reducing production costs and increasing quality, and the value of real-time production and process data reliably captured in a private 5G network becomes clear.

Improving supply chain track-and-traceability when combined with IoT-enabled containers. Private 5G networks can revolutionize track-and-traceability across supply chains by combining Wi-Fi 6 and private 5G networks. Improving the accuracy and speed of track-and-traceability information across all production locations enables greater collaboration and knowledge sharing company-wide. A track-and-trace system that can scale quickly across an entire supplier network is invaluable in ensuring high inventory accuracy and forecasting precision. Knowing the specific levels of inventory and their relative status across a supply chain is needed for attaining higher production rates in each manufacturing center. In addition, track-and-trace systems over time generate data sets that tend to show patterns, making it possible to anticipate shifts in demand.
This insight contributes to greater forecast accuracy and the potential to optimize manufacturing schedules.

Accelerate environmental monitoring, smart metering, renewable plant supervision, and inventory intelligence. Private 5G networks have the potential to deliver value, including improving operator productivity in production plants, improving the accuracy of warehouse management and inventory monitoring, reducing non-technical operations losses, and enabling smart product tracking. When combined with smart IoT sensors, private 5G networks can help improve the payback period of investments in core operational areas. Capgemini has compared the benefits of IoT implementation by investment payback period and predicts that manufacturing intelligence and product quality optimization will gain adoption over the long term. Private 5G networks provide the latency and coverage to help improve the performance of each of these use cases.

The potential of private 5G

Enterprises are struggling with wide gaps in what they know about local and remote manufacturing, distribution, supply chain, and service performance. HPE's private 5G announcement shows the potential to close those gaps when combined with IoT sensors and intelligent devices. The seamless private 5G and Wi-Fi 6 integration that HPE claims is enabled by Aruba AirPass will make monitoring remote machines easier for quality engineers and lead to greater productivity gains across supply networks that need greater visibility and control today to stay efficient and safe.
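To make the condition-based maintenance use case concrete, the following is a minimal, self-contained sketch of the kind of check a predictive analytics application might run on sensor telemetry collected over a private 5G network. It is an illustration only, not HPE's implementation; the sensor values, window size, and threshold are hypothetical.

```python
from collections import deque
from statistics import mean

def maintenance_alert(readings, baseline, window=12, tolerance=0.15):
    """Flag a machine for inspection when the rolling average of a sensor
    metric (e.g., vibration) drifts above baseline * (1 + tolerance).

    readings  -- iterable of recent sensor values, oldest first
    baseline  -- expected value for a healthy machine
    window    -- number of recent samples to average
    tolerance -- allowed fractional deviation before alerting
    """
    recent = deque(maxlen=window)
    for value in readings:
        recent.append(value)
        if len(recent) == window and mean(recent) > baseline * (1 + tolerance):
            return True  # schedule maintenance before the asset fails
    return False

# Hypothetical vibration telemetry (mm/s) streamed from a remote press
telemetry = [2.1, 2.0, 2.2, 2.1, 2.3, 2.4, 2.6, 2.7, 2.9, 3.0, 3.1, 3.3, 3.4]
print(maintenance_alert(telemetry, baseline=2.2))  # prints True
```

A production system would run this kind of check continuously against streaming data and feed the result into a work-order system, but the core logic, comparing a rolling statistic against a healthy baseline, stays the same.
"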
15,194
2,021
"How Levi Strauss is upskilling its workforce to embrace data and AI | VentureBeat"
"https://venturebeat.com/2021/07/12/how-levi-strauss-is-upskilling-its-workforce-to-embrace-data-and-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How Levi Strauss is upskilling its workforce to embrace data and AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. For Katia Walsh, chief AI officer at Levi Strauss & Co, data and technology have been at the forefront for her entire career. “People are the most important part of artificial intelligence,” Walsh said at VentureBeat’s Transform 2021 in a conversation with Mike Hendrickson, vice president of technology and developer products at Skillsoft. The iconic denim jeans from Levi’s, which was founded 167 years ago in San Francisco, aren’t the only notable thing about the clothing company. Walsh says Levi’s has always used cutting-edge technology, a “natural progression” for its brand. Levi’s deploys AI to build better relationships with its customers and optimize its profit margins. While most companies bring in engineers to scale up machine learning operations, Hendrickson said, Levi’s deviates from the norm by teaching its existing workforce how to work with data and new technology through a boot camp. Levi’s workers come into their AI training with no background in data and machine learning and come out with new technological skills. Investing in those qualities helps Levi’s expand its imprint on the fashion world and ultimately, its profitability. It also serves the company’s mission to “democratize fashion” and in turn “democratize machine learning,” Walsh said. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! All 43 people in the first AI boot camp cohort have graduated from the program and have gained valuable hard skills that would ordinarily not be offered at a retail chain. Levi’s also created a class for people who had basic or outdated skills in data and analytics and helped them update their experience. In 2020, Levi’s launched a loyalty program based on its machine learning models to personalize the brand experience for its fans. So far, Walsh said, the program has increased the amount of time customers engage with the company. AI has also helped Levi’s manage its stores, regulate inventory, and optimize pricing. One employee is already using neural networks to design images on traditional products; another has used machine learning to predict stock for its popular outlet stores. Culture change did not come easy. 
Walsh said the boot camp required buy-in from managers across all of Levi's hierarchies to free up employees for eight weeks and encourage them on their machine learning journey. Creating the "upskilling" environment helped Walsh and her team figure out how quickly they could pivot when faced with new challenges. It has also helped them figure out an answer to a contemporary issue: sustainability.

"What we've all become inspired by is this notion that AI can save fashion," Walsh said. "Fashion has not always been the best citizen of our planet. The fashion industry has been one of the biggest offenders when it comes to climate change."
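The stock prediction mentioned above is described only at a high level; the following is a purely hypothetical sketch of the kind of model a boot camp graduate could build, fitting a simple linear trend to past weekly sales for one outlet item. The data and approach are illustrative assumptions, not Levi's actual method.

```python
def fit_trend(values):
    """Ordinary least-squares fit of y = a + b*t to equally spaced observations."""
    n = len(values)
    t_mean = (n - 1) / 2
    y_mean = sum(values) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(values))
    den = sum((t - t_mean) ** 2 for t in range(n))
    slope = num / den
    intercept = y_mean - slope * t_mean
    return intercept, slope

def forecast(values, periods_ahead=1):
    """Project demand for a future week from the fitted linear trend."""
    intercept, slope = fit_trend(values)
    return intercept + slope * (len(values) - 1 + periods_ahead)

# Hypothetical weekly unit sales for one outlet-store item
weekly_sales = [120, 132, 128, 141, 150, 147, 158, 163]
print(round(forecast(weekly_sales)))  # naive next-week stock estimate: 169
```

A real inventory model would account for seasonality, promotions, and store-level effects, but the exercise shows how little code is needed to turn historical sales into a usable estimate.
"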
15,195
2,021
"Levi-Strauss’ Dr. Katia Walsh on why diversity in AI and ML is non-negotiable | VentureBeat"
"https://venturebeat.com/2021/08/02/levi-strauss-dr-katia-walsh-on-why-diversity-is-non-negotiable-in-ai-and-machine-learning"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Levi-Strauss’ Dr. Katia Walsh on why diversity in AI and ML is non-negotiable Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As part of VentureBeat’s series of interviews with women and BIPOC leaders in the AI industry, we sat down with Dr. Katia Walsh, chief strategy and artificial intelligence officer, Levi Strauss & Co. In her career she has forged paths for people from every intersection of race, culture, class, and education, giving them the tools they need in an AI- and data-centric world to be creative, solve problems, develop new solutions, and change the game in their roles across their companies. She’s passionate about the power of diversity, about empowering her employees, and about using technology for good. Learn more about her career, from communist Bulgaria to Levi-Strauss’ first chief strategy and artificial intelligence officer, and her DE&I manifesto below. See the others in the series: Intel’s Huma Abidi, Redfin’s Bridget Frey, Salesforce’s Kathy Baxter , McAfee’s Celeste Fralick , and ThoughtSpot’s Cindi Howson. VB: Could you tell us about your background, and your current role at your company? I started my career as a journalist in communist Bulgaria, where I personally experienced the power of information through a story I wrote while still in high school. That experience led me to become an investigative reporter who aimed to impact human lives, democracy, and society overall. After the fall of communism, I pursued further education in the U.S. During my master’s studies, I discovered the power of new communication technology and specifically, the internet, to amplify the power of information. I then continued my education through a PhD. program that specialized in new communication technology. It was at that point that I became fascinated with a third power, the power of machine learning and its ability to drive desired outcomes. This convergence of three powers — information (or data), technology (or digital), and machine learning (part of artificial intelligence) became the focus of my career. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Over the past 20 years, I’ve used my passion for these three powers to help global businesses win with digital, data, and AI. Throughout my career, I have worked to enable companies to thrive through these powers. 
I've used digital, data, and AI to help both digital-born and established businesses become customer-centric and indispensable to their consumers, and to grow. I've found myself gravitating to companies that not only strive for profit but also stand for doing social good in the world. Today, as the first chief strategy and artificial intelligence officer for Levi Strauss & Co., I'm responsible for digital strategy and transformation, while infusing our culture with data, analytics, and artificial intelligence capabilities. This helps us put our fans in the center of everything we do, drive business value across the company globally, and serve as a platform for doing good in the world.

VB: Any woman in the tech industry, or adjacent to it, is already forced to think about DE&I just by virtue of being "a woman in tech" — how has that influenced your career?

One of the myths about digital transformation is that it's all about harnessing technology. It's not. To be successful, digital transformation inherently requires and relies on diversity. Artificial intelligence is the result of human intelligence, enabled by its vast talents and also susceptible to its limitations. Therefore, it is imperative that all teams that work in technology and AI are as diverse as possible. By diversity of people I don't mean just the obvious in terms of demographics such as race, ethnicity, gender, and age. We critically need people with different skill sets, experiences, educational backgrounds, cultural and geographic perspectives, ways of thinking and working, and more. For example, on the teams I've led, I've had the privilege of working with people holding many advanced degrees and also people with no formal education. Why? When you have a diverse team reviewing and analyzing data, whether it's for decision-making or algorithms for digital products, you mitigate bias, you move the technology world closer to reflecting the real world, and you are better able to serve your customers, who are much more diverse than most companies give them credit for.

VB: Can you tell us about the diversity initiatives you've been involved in, especially in your community?

I consider the world to be my community. As an American and European citizen who's worked at global companies, a global perspective comes naturally, but it's also important for fostering diversity. The teams I've led have been located all over the world, from Boston, Toronto, and Dallas to all geographies throughout Europe and the U.K., plus Singapore, Shanghai, Mumbai, Bangalore, Johannesburg, Nairobi, and Cairo.

At Levi Strauss & Co., diversity of skill sets has played a key role in our digital transformation. In addition to engineers and computer scientists, our growing AI team comprises statisticians, physicists, philosophers, sociologists, designers, retail employees, and distribution center operators. I recently initiated and led our company in its first-ever digital upskilling program, a Machine Learning Bootcamp. By design, it tapped employees with no previous coding or statistics experience working in all markets and functions of the company throughout the world. The goal was to train people who have deep apparel and retail experience and upgrade their abilities with the latest machine learning skills. In eight weeks, we took people who had never seen code before and trained them to work with Python, use libraries, program neural networks, write automation scripts, and deliver value from coding.
This combination of apparel retail expertise and machine learning skills is already resulting in new ways of connecting with consumers, new efficiencies, new creative designs, and new opportunities for our storied brand. This first-of-its-kind initiative in the apparel retail industry helped us cultivate more diversity and attract more women into the traditionally male-dominated field of AI. For example, women represented almost two-thirds of our first machine learning graduating class, and the graduates are located in 14 different locations around the world.

VB: How do you see the industry changing in response to the work that women, especially Black and BIPOC women, are doing on the ground? What will the industry look like for the next generation?

Like anything in society, our industry would benefit greatly from the work that women, especially Black and BIPOC women, do. We owe it to the world to have more diverse talents creating the current and future solutions and products of technology and artificial intelligence. Human-centric design by definition means design of and for all humans on the planet. I am personally very grateful to the brave women who have helped uncover and expose bias; advocate for equality, representation, and fairness; introduce necessary regulation; and keep solving the myriad of problems that have traditionally stemmed from lack of diversity in our industry. I look forward to seeing more women across all ethnicities, Black and BIPOC included, with more backgrounds, more skill sets, more geographies, and more perspectives consistently present and evident in our field. This will amplify the power of digital transformation and enable businesses and organizations for future success, while literally changing industries, society, and the world.
"
15,196
2,021
"Enterprise AI development platform DataRobot raises $300M, acquires Algorithmia | VentureBeat"
"https://venturebeat.com/2021/07/27/enterprise-ai-development-platform-datarobot-raises-270m-acquires-algorithmia"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Enterprise AI development platform DataRobot raises $300M, acquires Algorithmia Share on Facebook Share on X Share on LinkedIn Dan Wright, CEO of DataRobot Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. DataRobot , a startup creating an enterprise AI development platform, today closed a $300 million series G funding round led by Altimeter Capital and Tiger Global, with participation from Morgan Stanley’s Counterpoint Global, Franklin Templeton, ServiceNow Ventures, and Sutter Hill Ventures. The round brings DataRobot’s valuation to $6.3 billion post-money, up from $2.7 billion in November 2020 , and comes as the company finalizes the acquisition of Seattle, Washington-based MLOps startup Algorithmia. CEO Dan Wright says the new funds will be used to “fuel platform innovation” and “enable DataRobot to bring … augmented intelligence” to clients around the world. Specifically, DataRobot plans to build out its go-to-market teams, with Sutter Hill supporting the company’s hiring initiatives. “This … investment further validates our vision for combining the best that humans and machines have to offer in order to power predictive insights, enhanced data-driven decisions, and unprecedented business value for our customers,” Wright said in a statement. “DataRobot is seeing more customers across the globe choose our platform to solve their biggest challenges at scale.” The benefits of AI and machine learning can feel intangible at times, but research shows this hasn’t deterred organizations from adopting the technologies at scale. According to a recent survey from ManageEngine, the IT division of Zoho, business deployment of AI is on the rise. And Gartner separately found that business use of AI grew 270% from 2015 to 2019, with IBM claiming almost one-third of respondents to its corporate May 2021 report are employing some form of AI. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! AI adoption doesn’t always meet with success, however. That’s why Jeremy Achin, previously director of research and modeling at Travelers, and Tom de Godoy, Achin’s former colleague, founded Boston, Massachusetts-based DataRobot in 2012. The cofounders sought to build tools to help customers prepare data and create and validate machine learning models, including classification, advanced regression, time series, and deep learning algorithms. DataRobot’s platform runs on cloud platforms, on-premise datacenters, or as a fully managed service. 
Once it's deployed, customers can use the platform to monitor models from a dashboard and to test, run, and maintain the models to optimize outcomes. Depending on a customer's needs, DataRobot can automatically run a "competition" by testing hundreds or even thousands of candidate solutions to a problem and delivering the best models to provide predictions (a minimal illustration of this kind of model competition appears at the end of this article). The platform also allows data scientists to explore, combine, and shape a range of data types and content — from traditional tabular data in rows and columns to free-form text, images, and geospatial data — into assets ready for AI models.

DataRobot claims to have had triple-digit recurring revenue growth dating back to 2015, as well as more than 2 billion models built on the platform to date. The company's customers span more than a third of the Fortune 50, including Kroger, Nationwide, Lenovo, PNC, and others across the banking, health care, insurance, finance, manufacturing, retail, government, sports, and gaming verticals. The new round brings DataRobot's total raised to over $1 billion. This makes the company one of the top-funded AI startups in the world — competitors like Determined AI, Explorium, Ople, Domino Data Lab, and Kaskada trail hundreds of millions of dollars behind.

Algorithmia

DataRobot says its purchase of Algorithmia, which was announced alongside the series G close this morning, will "further cement" its place as a provider of solutions in the MLOps space. According to Wright, DataRobot will integrate Algorithmia's model-serving tools with its existing monitoring and management capabilities to offer customers access to "the most reliable and cost-effective operational backbone for running any machine learning model," including deep learning workloads for natural language processing and image processing on CPUs and GPUs.

"Algorithmia's people and technology significantly enhance our mission to rapidly move from experimental to applied AI by helping customers bring every model into production with rapid time to value," Wright continued. "We are thrilled to welcome the Algorithmia team and advance our leading MLOps offering with world-class, enterprise-grade MLOps infrastructure to organizations across the globe."

Algorithmia, which was founded in 2014 by Diego Oppenheimer and Kenny Daniel, provides a platform that aims to bring models into production with "enterprise-grade" security and governance. Algorithmia combines practices from AI, MLOps, and DevOps, establishing processes for organizing customers' machine learning work from IT, data scientists, and various divisions to orchestrate moving models into production. Over 130,000 IT operations staff, data scientists, and engineers have used Algorithmia to date, including teams at Fortune 500 companies like Merck, Ernst & Young, and Deloitte, according to CEO Oppenheimer. Prior to the acquisition, Algorithmia had raised $38.1 million in capital from Norwest Venture Partners, Gradient Ventures, and other investors.

"We understand that businesses cannot get the value of their machine learning models unless they have the ability to deliver those models quickly, reliably, and at scale," Oppenheimer said in a press release.
"It's been clear to us for many years that DataRobot shares this philosophy, and we're thrilled to combine our dedication of enabling customers to thrive in today's market by delivering more models in production, faster, while protecting their business."

DataRobot's other acquisitions include Nutonian, which developed a service called Eureqa focused on classification, numeric, multi-series, and time series capabilities. More recently, DataRobot nabbed Nexosis and Paxata, companies working to simplify the way developers build AI apps and prep data, as well as Cursor and ParallelM.
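DataRobot's automated "competition" is proprietary, but the underlying idea, fitting many candidate models and ranking them on held-out data, can be sketched in a few lines of open-source Python. The snippet below is a simplified illustration using scikit-learn and synthetic data, not DataRobot's actual pipeline; in practice an AutoML system would search far larger spaces of algorithms, preprocessing steps, and hyperparameters.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a customer's prepared training data
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# A tiny "leaderboard" of candidate models
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Score every candidate with 5-fold cross-validation and rank by mean AUC
scores = {
    name: cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    for name, model in candidates.items()
}
for name, auc in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: mean AUC = {auc:.3f}")
```

The highest-scoring candidate would then move into deployment and the monitoring loop described above.
"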
15,197
2,021
"Dataiku releases new version of unified AI platform for machine learning | VentureBeat"
"https://venturebeat.com/2021/12/01/dataiku-releases-new-version-of-unified-ai-platform-for-machine-learning"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Dataiku releases new version of unified AI platform for machine learning Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Dataiku recently released version 10 of its unified AI platform. VentureBeat talked to Dan Darnell , head of product marketing at Dataiku and former VP of product marketing at H2O.ai, to discuss how the new release provides greater governance and oversight of the enterprise’s machine learning efforts, enhances ML ops, and enables enterprises to scale their ML and AI efforts. Governance and oversight For Darnell, the name of the game is governance. “Until recently,” he told VentureBeat, “data science tooling at many enterprises has been the wild west, with different groups adopting their favorite tools.” However, he sees a noticeable change in tooling becoming consolidated “as enterprises are realizing they lack visibility into these siloed environments, which poses a huge operational and compliance risk. They are searching for a single ML repository to provide better governance and oversight.” Dataiku is not alone in spotting this trend, with competing products like AWS MLOps tackling the same space. Having a single point of governance is helpful for enterprise users. Darnell likens it to a single “watchtower, from which to view all of an organization’s data projects.” For Dataiku, this enables project workflows that provide blueprints for projects, approval workflows that require managerial sign-off before deploying new models, risk and value assessment to score their AI projects, and a centralized model registry to version models and track model performance. For its new release, governance is centered around the “project,” which also contains the data sources, code, notebooks, models, approval rules, and markdown wikis associated with that effort. Just as GitHub went beyond mere code hosting to hosting the context around coding that facilitates collaboration, such as pull requests, CI/CD, markdown wikis, and project workflow, Dataiku ‘s eponymous “projects” aspire to do the same for data projects. “Whether you write your model inside Dataiku or elsewhere, we want you to put that model into our product,” said Darnell. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! ML ops Governance and oversight also extend into the emerging field of ML ops , a rapidly growing discipline that applies several DevOps best practices for machine learning models. 
In its press release, Dataiku defines MLOps as helping "IT operators and data scientists evaluate, monitor and compare machine learning models, whether under development or in production." In this area, Dataiku competes against products like SageMaker's Model Monitor, GCP's Vertex AI Model Monitoring, and Azure's MLOps capabilities.

Automatic drift analysis is an important newly released feature. Over time, data can fluctuate due to subtle underlying changes outside the modeler's control. For example, as the pandemic progressed and consumers began to see delays in gym re-openings, sales of home exercise equipment began creeping up. This data drift can lead to poor performance for models that were trained on out-of-date data (a minimal sketch of such a drift check appears at the end of this article).

What-if scenarios are one of the more interesting features of the new AI platform. Machine learning models usually live in code, accessible only to trained data scientists, data engineers, and the computer systems that process them. But nontechnical business stakeholders want to see how the model works for themselves. These domain experts often have significant knowledge, and they often want to get comfortable with a model before approving it. Dataiku's what-if "simulations" wrap a model so that non-technical stakeholders can interrogate it by setting different inputs in an interactive GUI, without diving into the code. "Empowering non-technical users as part of the data science workflow is a critical component of MLOps," Darnell said.

Scaling ML and AI

"We think that ML and AI will be everywhere in the organization, and we have to unlock the bottleneck of the data scientist being the only person who can do ML work," Darnell said. One way Dataiku is tackling this is by reducing the duplicative work of data scientists and analysts. Duplicative work is the bane of any large enterprise where code silos are rampant; data scientists redo work because they simply don't know if it was done elsewhere. A catalog of code snippets can give data scientists and analysts greater visibility into prior work so that they can stand on the shoulders of colleagues rather than reinvent the wheel. Whether the catalog works will hinge on search performance — a notoriously tricky problem — as well as whether search can easily surface the relevant prior work, thereby freeing up data scientists to accomplish more valuable tasks.

In addition to trying to make data scientists more effective, Dataiku's AI platform also provides no-code GUIs for data prep and AutoML capabilities to perform ETL, train models, and assess their quality. This feature set is geared toward technically proficient users who cannot code and empowers them to perform many data science tasks. Through a no-code GUI, users can control which ML models are available to the AutoML algorithm and perform basic feature manipulations on the input data. After training, the page provides visuals to aid model interpretability: not just regression coefficients, hyperparameter selections, and performance metrics, but also more sophisticated diagnostics like subpopulation analysis. The latter is very helpful for catching AI bias, where model performance may be strong overall but weak for a vulnerable subpopulation. No-code solutions are hot, with AWS also releasing SageMaker Canvas, a competing product.

More on Dataiku

Dataiku's initial product, the "Data Science Studio," focused on providing tooling for the individual data scientist to become more productive.
With Dataiku 10, the focus has shifted to the enterprise, with features that target the CTO as well as the rank-and-file data scientist. This shift is not uncommon among data science vendors chasing stickier seven-figure enterprise deals with higher investor multiples. The direction mirrors similar moves by well-established competitors in the cloud enterprise data science space, including Databricks, Oracle's Autonomous Data Warehouse, GCP Vertex, Microsoft's Azure ML, and AWS SageMaker, which VentureBeat has written about previously.
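Dataiku's drift analysis is built into its platform, but the core statistical idea can be shown with a short, self-contained check: compare the distribution of a feature at training time with the distribution seen in recent production traffic. The two-sample Kolmogorov-Smirnov test below is one common way to do this; the data and alert threshold are hypothetical, and this is not Dataiku's implementation.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Hypothetical feature values captured when the model was trained ...
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)
# ... and the same feature observed in recent production requests,
# where the underlying behavior has quietly shifted
production_feature = rng.normal(loc=0.4, scale=1.1, size=5000)

statistic, p_value = ks_2samp(training_feature, production_feature)

# A tiny p-value means the two samples are unlikely to come from the
# same distribution, i.e., the feature has drifted
if p_value < 0.01:
    print(f"Drift detected (KS statistic = {statistic:.3f}); consider retraining.")
else:
    print("No significant drift detected.")
```

A production-grade monitor would run a test like this per feature on a schedule, track results over time, and tie alerts into the approval and retraining workflows described above.
"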
15,198
2,021
"What we can learn from China's proposed AI regulations | VentureBeat"
"https://venturebeat.com/2021/10/03/what-we-can-learn-from-chinas-proposed-ai-regulations"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest What we can learn from China’s proposed AI regulations Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In late August, China’s internet watchdog, the Cyberspace Administration of China (CAC), released draft guidelines that seek to regulate the use of algorithmic recommender systems by internet information services. The guidelines are thus far the most comprehensive effort by any country to regulate recommender systems, and may serve as a model for other nations considering similar legislation. China’s approach includes some global best practices around algorithmic system regulation, such as provisions that promote transparency and user privacy controls. Unfortunately, the proposal also seeks to expand the Chinese government’s control over how these systems are designed and used to curate content. If passed, the draft would increase the Chinese government’s control over online information flows and speech. The introduction of the draft regulation comes at a pivotal point for the technology policy ecosystem in China. Over the past few months, the Chinese government has introduced a series of regulatory crackdowns on technology companies that would prevent platforms from violating user privacy, encouraging users to spend money, and promoting addictive behaviors, particularly among young people. The guidelines on recommender systems are the latest component of this regulatory crackdown, and appear to target major internet companies — such as ByteDance, Alibaba Group, Tencent, and Didi — that rely on proprietary algorithms to fuel their services. However, in its current form, the proposed regulation applies to internet information services more broadly. If passed, it could impact how a range of companies operate their recommender systems, including social media companies, e-commerce platforms, news sites, and ride-sharing services. The CAC’s proposal does contain numerous provisions that reflect widely supported principles in the algorithmic accountability space, many of which my organization, the Open Technology Institute has promoted. For example, the guidelines would require companies to provide users with more transparency around how their recommendation algorithms operate, including information on when a company’s recommender systems are being used, and the core “principles, intentions, and operation mechanisms” of the system. 
Companies would also need to audit their algorithms, including the models, training data, and outputs, on a regular basis under the proposal. In terms of user rights, companies must allow users to determine if and how the company uses their data to develop and operate recommender systems. Additionally, companies must give users the option to turn off algorithmic recommendations or opt out of receiving profile-based recommendations. Further, if a Chinese user believes that a platform's recommender algorithm has had a profound impact on their rights, they can request that the platform provide an explanation of its decision. The user can also demand that the company make improvements to the algorithm. However, it is unclear how these provisions will be enforced in practice.

In some ways, China's proposed regulation is akin to draft legislation in other regions. For example, the European Commission's current draft of its Digital Services Act and its proposed AI regulation both seek to promote transparency and accountability around algorithmic systems, including recommender systems. Some experts argue that the EU's General Data Protection Regulation (GDPR) also provides users with a right to explanation when interacting with algorithmic systems. Lawmakers in the United States have also introduced numerous bills that tackle platform algorithms through a range of interventions, including increasing transparency, prohibiting the use of algorithms that violate civil rights law, and stripping liability protections if companies algorithmically amplify harmful content.

Although the CAC's proposal contains some positive provisions, it also includes components that would expand the Chinese government's control over how platforms design their algorithms, which is extremely problematic. The draft guidelines state that companies deploying recommender algorithms must comply with an ethical business code, which would require companies to comply with "mainstream values" and use their recommender systems to "cultivate positive energy." Over the past several months, the Chinese government has initiated a culture war against the country's "chaotic" online fan club culture, noting that the country needed to create a "healthy," "masculine," and "people-oriented" culture. The ethical business code could therefore be used to influence, and perhaps restrict, which values and metrics platform recommender systems can prioritize, and to help the government reshape online culture through its lens of censorship.

Researchers have noted that recommender systems can be optimized to promote a range of different values and generate particular online experiences. China's draft regulation is the first government effort that could define and mandate which values are appropriate for recommender system optimization. Additionally, the guidelines empower Chinese authorities to inspect platform algorithms and demand changes.

The CAC's proposal would also expand the Chinese government's control over how platforms curate and amplify information online. Platforms that deploy algorithms that can influence public opinion or mobilize citizens would be required to obtain pre-deployment approval from the CAC.
Additionally, when a platform identifies illegal and "undesirable" content, it must immediately remove it, halt algorithmic amplification of the content, and report the content to the CAC. If a platform recommends illegal or undesirable content to users, it can be held liable.

If passed, the CAC's proposal could have serious consequences for freedom of expression online in China. Over the past decade or so, the Chinese government has radically augmented its control over the online ecosystem in an attempt to establish its own, isolated version of the internet. Under the leadership of President Xi Jinping, Chinese authorities have expanded the use of the famed "Great Firewall" to promote surveillance and censorship and to restrict access to content and websites that they deem antithetical to the state and its values. The CAC's proposal is therefore part and parcel of the government's efforts to assert more control over online speech and thought in the country, this time through recommender systems.

The proposal could also radically impact global information flows. Many nations around the world have adopted China-inspired internet governance models as they veer toward more authoritarian modes of governance. The CAC's proposal could inspire similarly concerning and irresponsible models of algorithmic governance in other countries.

The Chinese government's proposed regulation for recommender systems is the most extensive set of rules created to govern recommendation algorithms thus far. The draft contains some notable provisions that could increase transparency around algorithmic recommender systems and promote user controls and choice. However, if the draft is passed in its current form, it could also have an outsized influence on how online information is moderated and curated in the country, raising significant freedom of expression concerns.

Spandana Singh is a Policy Analyst at New America's Open Technology Institute. She is also a member of the World Economic Forum's Expert Network and a non-resident fellow at Esya Center in India, conducting policy research and advocacy around government surveillance, data protection, and platform accountability issues.
"
15,199
2,021
"Are you ready for China’s Personal Information Protection Law?  | VentureBeat"
"https://venturebeat.com/2021/11/13/are-you-ready-for-chinas-personal-information-protection-law"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Are you ready for China’s Personal Information Protection Law? Share on Facebook Share on X Share on LinkedIn In August 2021, China passed a major piece of legislation called the Personal Information Protection Law (PIPL). It went into effect on November 1 and applies to all companies looking to conduct business in China, whether they are established in the country or not. The PIPL has been a long time in the making. China has been talking about this sort of omnibus law on personal information since at least 2015. The current law stems from a non-compulsory 2017 standard called “Personal Information Security Specification,” which was enforced in 2018 and revised in 2020. The difference with the PIPL is that it is a bit more inclusive of all personal data and, more importantly, it is now going to be compulsory. On the surface, PIPL echoes many of the same regulations brought forward by the EU’s General Data Protection Regulation ( GDPR ), which went into effect in May 2018. Essentially, it will enable individuals to decide how their personal data is used. This power ranges from opting in for marketing purposes to approving the use and processing of more sensitive data, such as biometrics, financial information, and location services. The PIPL includes the same principles and operating structure as GDPR, including controllers, processors, the legal basis for processing, security measures, organizational measures, notification of breaches and more. The difference is that this is happening through a Chinese lens — and this means there will be no independent organizations for oversight. In the EU’s GDPR is the following clause: “Each Member State shall provide for one or more independent public authorities to be responsible for monitoring the application of this Regulation, in order to protect the fundamental rights and freedoms of natural persons in relation to processing and to facilitate the free flow of personal data within the Union.” A clause like this does not exist in China’s PIPL, defining the sharpest difference between the two. If the PIPL provides for supervisory authorities, these are not independent from the state. In that regard, the PIPL is consistent with previous laws, such as the China Cybersecurity Law. Similar to GDPR, organizations that process over a certain threshold of personal data will be required to hire a data protection officer who supervises data protection and will be subject to more stringent obligations around certain activities, including cross-border transfers of personal information, among others. 
While China has yet to announce the threshold that will trigger the mandatory hiring of a data protection officer, the draft Measures on Security Assessment of Cross-border Data published by the Cyberspace Administration of China (CAC) on October 29 give some insight. Among other things, these measures would mandate that any cross-border operator that cumulatively processes the personal information of more than 100,000 China residents, or the sensitive personal information of more than 10,000 China residents, submit a security self-assessment of such cross-border data transfers to Chinese authorities for approval. In other words, the threshold is quite low, so cross-border transfers of personal information will be under high scrutiny.

While many companies have essentially managed to be GDPR-compliant to date, China is much less likely to tolerate businesses that skirt the rules or put in the bare minimum, and the consequences could be quite dire. In addition to astronomically high fines, non-compliant organizations may find their business license suspended or their company shut down entirely.

The impact in China, as well as worldwide, is going to be huge. Chinese authorities will be stern in enforcing this law, because we are seeing the strengthening of enforcement of every law relating to cybersecurity, data security, and data processing. The Chinese government wants to be seen as very protective of personal data. Companies that are already compliant with the GDPR will feel less impact than others, but this will still have a significant effect on our increasingly global society. If any part of your business touches China at all, you must comply with this law. The government can cut off any access to its population. Additionally, any violation of the PIPL may lead to an administrative fine of up to RMB 50 million or 5% of the processor's turnover in the previous year.

One specific provision in the law calls out foreign governments for special consideration: "If [foreign countries] adopt measures against China in the area of personal information, China may adopt retaliatory measures." Such provisions may well be a direct response to Trump's trade war with China. This is a catch-all provision that gives other countries little insight into, or control over, what China considers discriminatory. It could ultimately affect the flow of information, which is key in international business.

As a final note, here are a few Dos and Don'ts for successful compliance with the PIPL:

Do conduct a compliance self-assessment. This is going to be key in China. You absolutely need to start examining your own situation, so you know where you stand and where you have gaps in terms of non-compliance.
Do know the risks associated with every decision you make regarding the PIPL.
Do continue to conduct regular compliance audits.
Do not just ignore it.
Do not think you're going to fly under the radar.
Do not attempt to use a VPN to get around compliance.

The PIPL has been a long time in the making and certainly won't be the last digital regulation that we see from China. It is extremely important for all organizations to get aligned with the rules of such new regulations.

Isabelle Hajjar is Cybersecurity & Privacy head of compliance for digital risks and security firm TekID. Mathieu Gorge is author of the ForbesBooks title The Cyber-Elephant in the Boardroom, as well as CEO and founder of cybersecurity company VigiTrust.
15,200
2,021
"App Annie: Smartphone users spent 3.8 trillion hours on mobile in 2021 | VentureBeat"
"https://venturebeat.com/2022/01/12/app-annie-smartphone-users-spent-3-8-trillion-hours-on-mobile-in-2021"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages App Annie: Smartphone users spent 3.8 trillion hours on mobile in 2021 Share on Facebook Share on X Share on LinkedIn App Annie's stats on 2021 mobile app usage. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Smartphone users spent 3.8 trillion hours on mobile in 2021, according to a State of Mobile 2022 report by mobile data and analytics firm App Annie. The hours spent was up from a year earlier — and it was up 30% over the past two years — as consumers continued to embrace a mobile lifestyle in the ongoing pandemic, said Lexi Sydow, head of market insights at App Annie, in an interview with GamesBeat. Gaming numbers Mobile gaming grew to $116 billion an increase of 15%, fueled by the growth of hypercasual games. And interest in the metaverse catapulted leading avatar apps forward with 160% year-over-year growth. In the top 10 mobile markets, consumers spent 4.8 hours a day on mobile. The growth came from deeper use per person as well as more devices in the market, Sydow said. In the U.S., the number was 4.2 hours per day spent on mobile. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Above: Hypercasual games saw growth in 2021. Social and communications drove the growth in total time spent, followed by photo and video. Games was about 10% per market on total time spent. But gaming is the biggest in terms of spending, as games accounted for 68% of consumer spending in mobile apps, Sydow said. And every year, time spent on games has gone up. In gaming, App Annie studied the top subgenres for the past decade. In 2011, there were titles like Fruit Ninja, Angry Birds, and Words With Friends. So that was slicing actin, physics puzzles, and word puzzles. Now, very few of those genres survived in the top ranks. The one exception was Subway Surfers, an endless runner that is still popular today. Instead, in 2021, many of the top titles were hypercasual games, which are played in a minute or less and are ad-based. “This really shows an evolution of consumer preferences,” Sydow said. One of the hottest recent apps is Wordle, a web-based game created by a lone developer. It’s growing about 850% per week, from a low base of around 40,000. The game was profiled in the New York Times. App Annie doesn’t have its 2022 forecast ready for games yet. TikTok’s growth Above: Hours spent in apps grew a lot in 2021. 
"In China, we've seen a big trend emerge of time shifting from traditional video streaming platforms to TikTok and some short-form video players in the space," Sydow said. "We've seen indications of that as well emerging in other markets."

In the U.S., people who use Netflix doubled their use of TikTok during the past year. Outside of China, TikTok grew by 90% in terms of total hours spent with the app. Consumers spent $170 billion on apps, up 19% from last year, and downloads continued growing at 5% year over year to reach 230 billion. Led by TikTok (an increase of 90% globally outside of China), seven of every 10 minutes were spent on social, photo, and/or video apps. Publishers released 2 million new apps and games, bringing the cumulative total to 21 million.

"China was slower because they were the first mover in the space for TikTok, so the growth is about 45% year over year," Sydow said.

Facebook still dominates the space in terms of total time spent. It is No. 1 in total time spent with an app, followed by WhatsApp, Instagram, and then TikTok and Facebook Messenger.

"The really compelling point is around the average time per user, because outside of China globally, Facebook and TikTok average 19.6 hours per month for the average person, but the growth of TikTok is phenomenal," Sydow said.

TikTok has grown around 465% over a four-year period, which is just huge on a global basis. In average time per user outside of China, TikTok hit 25.6 hours per person per month, compared with 15 for Facebook, among current users.

"TikTok's growth is unparalleled in the top cohorts of social apps," Sydow said. "We call it the TikTok tidal wave."

IDFA change

Above: Gaming growth in mobile in 2021.

Apple made changes to how users opt in to the Identifier for Advertisers (IDFA), and the result was that more consumers chose privacy over being targeted by ads. Despite that, advertising spend topped $295 billion, up 23% year over year, and is estimated to top $350 billion in 2022. And apps earning more than $100 million in consumer spend grew by 20%.

"At this point in time, we're not seeing a huge impact on our gaming metrics," Sydow said. "We still see downloads, consumer spend, and usage through the App Store. So we acknowledge and definitely there were fears of the IDFA change's impact. But I think on a macro level, we haven't seen it soften the demand to place the ads."

Worldwide consumer spend on dating apps surged past $4.2 billion (a 55% increase from 2019). Time in shopping apps reached 100 billion hours, up 18% year over year, led by fast fashion, social shopping, and big-box players. Food and drink apps hit a new milestone at 194 billion sessions in 2021, up 50% year over year.

Metaverse apps

As for metaverse apps, Sydow said there is an emerging trend around avatar apps, spanning social apps, user-generated content, and creativity. Litmatch has seen 405% growth in the past year, and Geppetto has also grown significantly. In such apps, you can dress up a virtual character. Project Makeover was popular over the holidays, and Roblox had a big year in consumer spending among games.

"Like the social media apps, I would say it's going to be benefiting again from the metaverse and the rise of interest in it," said Sydow.

Pandemic growth

Above: Top rankings for games in 2021.

During the outbreak of the pandemic in 2020, App Annie saw several years of growth for mobile materialize in a single year, compared to previous forecasts.
There was more average time per user and more usage per day, and everything from grocery apps to shopping apps took off. Rapid delivery apps are expected to continue to grow as people get hooked on the habit.

In 2021, she said, "We basically accelerated what we were building on for the habits that were catalyzed during 2020. And I think this is where the habits are forming, where now once you've used that app, you've done food delivery, or the new Walmart experience, it becomes a habit."

Finance apps also saw a massive increase in 2020, and they continued to grow in 2021. Cryptocurrency apps were another major area of growth. A further pandemic effect: meditation apps did well, with Calm ranking in the top five most-searched health and fitness keywords.

The Great Resignation, in which more people quit their jobs during the pandemic, was reflected in downloads, as business apps like WhatsApp did well, as did DoorDash delivery driving.
15,201
2,020
"Amazon's new AI technique lets users virtually try on outfits | VentureBeat"
"https://venturebeat.com/2020/06/05/amazons-new-ai-technique-lets-users-virtually-try-on-outfits"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Amazon’s new AI technique lets users virtually try on outfits Share on Facebook Share on X Share on LinkedIn Models wearing clothing from other images, composited by Amazon-proposed AI. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. In a series of papers scheduled to be presented at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Amazon researchers propose complementary AI algorithms that could form the foundation of an assistant that helps customers shop for clothes. One lets people fine-tune search queries by describing variations on a product image, while another suggests products that go with items a customer has already selected. Meanwhile, a third synthesizes an image of a model wearing clothes from different product pages to demonstrate how items work together as an outfit. Amazon already leverages AI to power Style by Alexa , a feature of the Amazon Shopping app that suggests, compares, and rates apparel using algorithms and human curation. With style recommendations and programs like Prime Wardrobe, which allows users to try on clothes and return what they don’t want to buy, the retailer is vying for a larger slice of sales in a declining apparel market while surfacing products that customers might not normally choose. It’s a win for businesses on its face — excepting cases where the recommended accessories are Amazon’s own , of course. Virtual try-on network Researchers at Lab126, the Amazon hardware lab which spawned products like Fire TV, Kindle Fire, and Echo, developed an image-based virtual try-on system called Outfit-VITON designed to help visualize how clothing items in reference photos might look on an image of a person. It can be trained on a single picture using a generative adversarial network (GAN), Amazon says, a type of model with a component called a discriminator that learns to distinguish generated items from real images. “Online apparel shopping offers the convenience of shopping from the comfort of one’s home, a large selection of items to choose from, and access to the very latest products. However, online shopping does not enable physical try-on, thereby limiting customer understanding of how a garment will actually look on them,” the researchers wrote. 
"This critical limitation encouraged the development of virtual fitting rooms, where images of a customer wearing selected garments are generated synthetically to help compare and choose the most desired look."

Outfit-VITON comprises several parts: a shape generation model whose inputs are a query image, which serves as the template for the final image, and any number of reference images, which depict clothes that will be transferred onto the person in the query image.

In preprocessing, established techniques segment the input images and compute the query person's body model, representing their pose and shape. The segments selected for inclusion in the final image pass to the shape generation model, which combines them with the body model and updates the query image's shape representation. This shape representation moves to a second model, the appearance generation model, which encodes information about texture and color, producing a representation that's combined with the shape representation to create a photo of the person wearing the garments. Outfit-VITON's third model fine-tunes the variables of the appearance generation model to preserve features like logos or distinctive patterns without compromising the silhouette, resulting in outputs that Amazon claims are "more natural" than those of previous systems.

"Our approach generates a geometrically correct segmentation map that alters the shape of the selected reference garments to conform to the target person," the researchers explained. "The algorithm accurately synthesizes fine garment features such as textures, logos, and embroidery using an online optimization scheme that iteratively fine-tunes the synthesized image."

Visiolinguistic product discovery

One of the other papers tackles the challenge of using text to refine a search for images that match a customer-provided query. The Amazon engineers' approach fuses textual descriptions and image features into representations at different levels of granularity, so that a customer can say something as abstract as "Something more formal" or as precise as "Change the neck style," and the system preserves some image features while following the customer's instructions to change others.

The system consists of models trained on triples of inputs: a source image, a textual revision, and a target image that matches the revision. The inputs pass through three different sub-models in parallel, and at distinct points in the pipeline, the representation of the source image is fused with the representation of the text before it's correlated with the representation of the target image. Because the lower levels of the model tend to represent lower-level input features (e.g., textures and colors) and the higher levels represent higher-level features (sleeve length or tightness of fit), hierarchical matching helps train the system to handle textual modifications of different resolutions, according to Amazon.

Each fusion of linguistic and visual representations is performed by a separate two-component model. One component uses a joint attention mechanism to identify visual features that should be the same in the source and target images, while the other identifies features that should change. In tests, the researchers say the system found valid matches to textual modifications 58% more frequently than its best-performing predecessor.

"Image search is a fundamental task in computer vision.
In this work, we investigate the task of image search with text feedback, which entitles users to interact with the system by selecting a reference image and providing additional text to refine or modify the retrieval results," the coauthors wrote. "Unlike the prior works that mostly focus on one type of text feedback, we consider the more general form of text, which can be either attribute-like description, or natural language expression."

Complementary-item retrieval

The last paper investigates a technique for large-scale fashion data retrieval, where a system predicts an outfit item's compatibility with other clothing, wardrobe, and accessory items. It takes as inputs any number of garment images together with a numerical representation called a vector indicating the category of each, along with a category vector of the customer's sought-after item, allowing a customer to select things like shirts and jackets and receive recommendations for shoes.

"Customers frequently shop for clothing items that fit well with what has been selected or purchased before," the researchers wrote. "Being able to recommend compatible items at the right moment would improve their shopping experience … Our system is designed for large-scale retrieval and outperforms the state-of-the-art on compatibility prediction, fill-in-the-blank, and outfit complementary item retrieval."

Above: An example of outfit complementary item retrieval. First row: partial outfit with a missing item (shoe). Second row: retrieved results using the authors' method.

Images pass through a model that produces a vector representation of each, and each representation passes through a set of masks that de-emphasize some representation features and amplify others. (The masks are learned during training, and the resulting representations encode product information like color and style that's relevant only to a subset of complementary items, such as shoes, handbags, and hats.) Another model takes as input the category for each image and the category of the target item and outputs values for prioritizing the masks; the resulting masked representations are called subspace representations.

The whole system is trained using an evaluation criterion that accounts for the entire outfit. Each training sample includes an outfit as well as items that go well with that outfit and a group of items that don't, such that post-training, the system produces vector representations of every item in a catalog. Finding the best complement for a particular outfit then becomes a matter of looking up the corresponding vectors.

In tests that use two standard measures of garment complementarity, the system outperformed its three top predecessors with 56.19% fill-in-the-blank accuracy (and 87% compatibility area under the curve) while enabling more efficient item retrieval, and it achieved state-of-the-art results on data sets crawled from multiple online shopping websites (including Amazon and Like.com).
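To make the retrieval mechanics described above more concrete, here is a minimal numpy sketch of the general idea: item embeddings re-weighted by learned masks, with category-pair priorities and a dot-product lookup over a catalog. This is not the paper's implementation; the shapes, the random stand-in parameters, the averaging over the partial outfit, and the scoring rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM, NUM_CATEGORIES, NUM_MASKS = 64, 8, 4

# Stand-ins for learned parameters: soft masks over embedding dimensions and a
# table mapping (item category, target category) pairs to mask-priority weights.
masks = rng.random((NUM_MASKS, EMBED_DIM))
mask_priorities = rng.random((NUM_CATEGORIES, NUM_CATEGORIES, NUM_MASKS))

def embed(image_vector: np.ndarray) -> np.ndarray:
    """Stand-in for the image embedding network (here, just L2 normalization)."""
    return image_vector / (np.linalg.norm(image_vector) + 1e-8)

def subspace_embedding(image_vector, item_category, target_category):
    """Combine masked views of an item embedding, weighted by category-pair priorities."""
    e = embed(image_vector)
    weights = mask_priorities[item_category, target_category]  # (NUM_MASKS,)
    masked_views = masks * e                                    # (NUM_MASKS, EMBED_DIM)
    return (weights[:, None] * masked_views).sum(axis=0)

def best_complement(outfit, target_category, catalog):
    """Score each catalog item of the target category against the partial outfit."""
    outfit_vec = np.mean(
        [subspace_embedding(img, cat, target_category) for img, cat in outfit], axis=0
    )
    scores = {
        item_id: float(outfit_vec @ subspace_embedding(img, target_category, target_category))
        for item_id, img in catalog.items()
    }
    return max(scores, key=scores.get)

# Toy usage: a two-item outfit (shirt=category 0, jacket=category 1) looking for shoes (category 2).
outfit = [(rng.random(EMBED_DIM), 0), (rng.random(EMBED_DIM), 1)]
catalog = {f"shoe_{i}": rng.random(EMBED_DIM) for i in range(5)}
print(best_complement(outfit, target_category=2, catalog=catalog))
```

In the real system these parameters are learned end-to-end with an outfit-aware objective; the sketch only shows how masked subspace embeddings turn complement retrieval into a vector lookup.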
15,202
2,021
"Report: 50% of execs report improving customer data management is top CX priority | VentureBeat"
"https://venturebeat.com/2021/10/24/report-50-of-execs-report-improving-customer-data-management-is-top-cx-priority"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: 50% of execs report improving customer data management is top CX priority Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A new Forrester survey commissioned by Capgemini, Salesforce, and MuleSoft reveals that 40% of businesses experience difficulty creating an integrated, 360-degree view of the customer from CRM/customer experience (CX) technology data. In April 2021, Capgemini , Salesforce, and MuleSoft commissioned Forrester to explore the central needs of modern customer experience and common challenges brands looking to improve CX faced as they emerged from pandemic-related restrictions and attempted to resume business either at or near full capacity. Among key findings, the top reported challenge in the last year has been uncertainty due to the pandemic. More specifically, companies are struggling to provide a consistent and holistic omnichannel experience for their customers while blending remote and in-person work. As many as 50% of respondents intend to improve management of customer data as their top CX priority over the next 12 months because they recognize that creating the immersive, personalized, and compelling experiences customers now expect comes from integrated customer data systems, improved uses of customer data to generate insights, and the combination of legacy technology with modern, cloud-based solutions. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! However, while gaining a single view of the customer is crucial, beginning the journey is a struggle for many businesses. They are often challenged with garnering executive buy-in and with aligning internal leadership teams to support the adoption of a more holistic platform and strategy. In order to get CX right, businesses need CRM/CX technologies that identify the same customer across various touchpoints and combine integrated information from both internal and external systems and external information to gain strong customer insights. Additionally, businesses must work closely with IT to develop a roadmap that includes plans for onboarding and integration. “By designing a CX strategy with complete customer data in mind and bringing IT back into their approach to CRM/CX technologies, companies will be a step closer to gaining a single view of the customer which leads to better all-around CX,” says Jay Rumwell, VP and group partner executive at Capgemini. 
The global online survey sampled 426 directors and above in marketing, IT, and line-of-business (LOB) roles who have responsibility for technology purchase decisions and direct ownership of technology project management in North America, EMEA, and APAC. Read the full report by Forrester.
15,203
2,022
"How emerging tech will influence freedom, industry, and money in the metaverse | VentureBeat"
"https://venturebeat.com/2022/01/22/how-emerging-tech-will-influence-freedom-industry-and-money-in-the-metaverse"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community How emerging tech will influence freedom, industry, and money in the metaverse Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Brad Yasar, founder and CEO of EQIFI. Innovation related to the metaverse has brought with it some expected criticism and skepticism. Like any fast-growing, emerging technology, the parameters for its operation have yet to be fully established. This means, essentially, that those who hope to glean financial returns from interacting with the metaverse do not know what investment looks like. Is it VR headsets , digital land, or a pair of Gucci sneakers wearable only with AR? Some might argue that the metaverse is a dystopian fantasy conjured up by gaming fanatics and tech titans. Facebook’s transformation to a metaverse-centric social media company only heightens this dominant apprehension. With Facebook’s Meta rebrand costing the company an estimated $60 million, it seems Mark Zuckerberg may be onto something. Given that Instagram boasts one billion monthly users , it would be wise to assume that the metaverse may impact our lives significantly in the near future, much like social media does. Much like the early days of social media, the metaverse’s impact is limited by its rate of progression. Soon, however, this progression will bring about a transformative era of industry, influenced by a variety of decentralized tools like DeFi, cryptocurrencies, NFTs , and Web3. Once the power of these technologies is fully realized, life as we know it will have changed forever. What is a metaverse? Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! The word “metaverse” was coined in Snow Crash , a 1992 novel by Neal Stephenson. In his depiction, users could immerse themselves in a digital world through the use of headphones and specially designed glasses. This digital world created a space for users to engage with one another, exchange goods, and essentially live a double life through VR. Predictions about the metaverse of the 21st-century detail absolute similarities with Stephenson’s imagined reality. The main difference is that the many infrastructural shortcomings associated with the metaverse in the fictional world have been addressed by using blockchain technology as a means to engage and interact in this new virtual world. 
Since Snow Crash's release over 30 years ago, the largest technology providers in the world, like Facebook, have come to dominate the technology industry. Google's recent acquisition of Canadian company North to adopt a more modern approach to AR hardware and software could point toward the company's plans for metaverse involvement. Similarly, Apple, the most valuable company in the world with a valuation of over $2.5 trillion, is producing a currently unconfirmed and unnamed headset designed to act as an entryway to the metaverse's digital realm. Organizations such as these are not accustomed to pouring funds into projects without a future.

The advent of NFTs delivers a secure method of transferring digital assets from one party to another. Web3 delivers decentralized interaction and connectivity between separate entities, underpinning the decentralization of the metaverse. Cryptocurrencies and stablecoins provide the financial infrastructure befitting a decentralized marketplace. DeFi can bring fully realized financial decentralization to the process of transferring funds and assets in the metaverse. This would round out the network's infrastructure, facilitating an expansive digital universe unhindered by centralized middlemen.

The future is already here: metaverse and industry

It is clear that the variety of businesses, individuals, and entities that could potentially operate in the metaverse is vast. The widespread use and acceptance of decentralization through the growth of crypto, NFTs, and DeFi point to a fully realized future operating outside the parameters of today's established markets. The metaverse, therefore, is not a sci-fi fantasy conjured up in a dystopian novel, but a more tangible and natural progression of the current structure of the internet. The founding principles of the metaverse have already been introduced in many ways. Now its development centers on blockchain technology and DeFi to propel it from the conceptual stage toward the implementation phase. This development will allow us to see the true extent to which the metaverse will impact our lives.

The gaming industry is one sector that stands to benefit greatly from developments arising in the metaverse. Gaming skins, which are in-game avatar outfits, are expected to trade at a level of $40 billion every year. Eighty-one percent of players aware of these skins want to trade them for real-world money, according to a report from DMarket. Currently, there is no method of transferring skins across gaming universes or trading them for currency. In the metaverse, however, where every separate gaming universe is connected through a decentralized economy, this would be possible. The use of metaverse-based banks would also enable transactions like these.

Money in the metaverse

Like the gaming industry, many sectors and industries will benefit from metaverse-related funding and asset transfers. Much like the bankless barter system that preceded our current financial structure, the metaverse stands to reach maximum potential alongside fully operational and functioning digital banks. This is now possible through the advent and expansion of decentralized finance (DeFi). As the current banking infrastructure moves further and further from cash and brick-and-mortar establishments, DeFi will be the financial model that facilitates financing across the metaverse.
To operate effectively in the metaverse and offer a standard practice for the transfer of digital assets, banks will need to be decentralized. As continued innovations are made and more industries shift their operations to the metaverse, DeFi-enabled banks become an increasingly likely development. Centralized banking systems simply cannot operate in the metaverse, meaning the expansion and increased sophistication of industries like gaming will fuel the push toward DeFi-enabled banking, which will underpin the financial structure of the metaverse. This will open a multitude of benefits for industries, technologists, and digital enthusiasts as innovation is led through the metaverse.

In gaming, for example, play-to-earn becomes a viable and attractive prospect for users and gaming companies alike. The metaverse provides a concentrated arena where altcoins can be exchanged for playtime. NFTs can be used to exchange in-game assets, facilitating a whole new era of gaming and operating efficiently with DeFi-enabled banks. This again illustrates how blockchain-based emerging technologies will be used to facilitate user interactions across different industries.

Dystopia or utopia?

It is not just gaming and entertainment that stand to transform and expand with the onset of the metaverse. Synchrony, a fully functioning economy, and the interoperability of digital assets, information, and consumers mean industries like supply chain management, property sales, and even office workflows stand to benefit from developments related to the metaverse. As a concept and technological innovation fueled by the decentralization of blockchain technology, the future is in the hands of these industries, without the stringent parameters of centralized control.

Given the issues that have arisen from unregulated innovations like social media, focused and coherent regulation is likely to influence the future of the metaverse. The OASIS Consortium, for example, pulls together leaders from industries like gaming, dating apps, and immersive tech platforms to address safety and privacy in Web3. Developments like these are favorable, given that the metaverse's regulatory parameters are being developed by those invested in its growth and in an outcome that is positive for the end user.

It is clear that the correlating growth of cryptocurrency, NFTs, DeFi, VR, and AR will eventually converge to create the metaverse. Will this look exactly like the depiction in Snow Crash, a dystopian online universe where reality is no longer the central connector for civilization? Or could the metaverse displace the dominant financial structures of today's economy, pulling power from the intermediaries that caused multiple financial crises? Nobody really knows. One thing, however, is certain: The metaverse is coming, and it will change how we look at money, entertainment, and society forever.

Brad Yasar is the founder and CEO of EQIFI.
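To make the cross-game asset idea above a little more concrete, here is a deliberately tiny, hypothetical sketch of a shared ownership registry that two different games could both consult, in the spirit of an NFT ledger. It is purely illustrative: the class, method names, and in-memory dictionary are assumptions, not how any production blockchain or game actually works.

```python
# Toy, in-memory stand-in for a shared asset ledger (illustrative only).

class SkinRegistry:
    """Tracks which player owns which skin, independently of any single game."""

    def __init__(self):
        self._owners = {}  # skin_id -> player_id

    def mint(self, skin_id: str, player_id: str) -> None:
        if skin_id in self._owners:
            raise ValueError(f"{skin_id} already exists")
        self._owners[skin_id] = player_id

    def transfer(self, skin_id: str, seller: str, buyer: str) -> None:
        if self._owners.get(skin_id) != seller:
            raise PermissionError("seller does not own this skin")
        self._owners[skin_id] = buyer

    def owner_of(self, skin_id: str) -> str:
        return self._owners[skin_id]

# Two hypothetical games consult the same registry, so a purchased skin
# is usable in both without either game controlling the ledger.
registry = SkinRegistry()
registry.mint("dragon_armor_001", "alice")
registry.transfer("dragon_armor_001", seller="alice", buyer="bob")
print(registry.owner_of("dragon_armor_001"))  # bob
```

The point of the sketch is only the design idea: ownership lives outside any one game's database, which is what would let skins move across gaming universes and be traded for currency.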
15,204
2,021
"4 things you should do before meeting with venture capitalists | VentureBeat"
"https://venturebeat.com/2021/11/27/4-things-you-should-do-before-meeting-with-venture-capitalists"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 4 things you should do before meeting with venture capitalists Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Getting a meeting with a venture capitalist is notoriously difficult, but there is such a thing as being too fast to get connected. If you aren’t properly prepared, you can burn your one shot to make a great first impression. From the moment you make it public that you are seeking investment, you turn over an hourglass of about six months. You should be able to get all the funding you want in that time, or investors will begin to worry about why you’ve taken so long. Meeting venture capitalists and raising venture capital isn’t something you should do on a whim, you should be as prepared as possible to maximize your chances of success. Throughout the six months, you will want to focus on talking to investors, not doing the things you could have done before. Here are the four things you should do before approaching any venture capitalists. Know that you are ready The world of pitching can be brutal, and there’s no space for hesitancy. You must believe 100% you are ready, or the potential investors will sniff your uncertainty. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! For first-time founders, the difficult question is: How do you know you are ready? It’s hard to dispel doubts when you feel like an imposter, and believe me, we’ve all been there. The crucial thing is being clear on what your startup’s story is. What problem does it aim to solve? How does it do this? What is so unique about you that other companies can’t simply replicate your idea? When you’re ready to start approaching and meeting with venture capitalists, you’ll know your story so well that you can articulate it into a simple solution sentence. Practice it until you know it so well that it feels like it is part of your DNA. Here’s the template: “We’re doing X (solving a problem) for Y (a specific audience) by Z (what is your solution?)” You should also have a clear plan of exactly how you will use any money you invest. They are taking a calculated risk and need to know what the next steps are so their money is not squandered. Have a precise number you want, in which areas the money will be used, and most important, what KPIs (Key Performance Indicators) can be achieved with the funding and for how long. If you can do these two things, you’re well on your way to being ready to contact VCs. 
Do your homework

You're competing against some of the world's brightest minds for funding, and VCs are flooded with high-quality requests, so you need to get the basics right. Misspelling someone's name or misgendering them shows a lack of care and eye for detail, and it could mean your email is instantly moved to the junk folder. A VC once told me that they see so many great companies that they look for the little flaws as a signal to say no. Take a little extra time to make sure you get the small details right.

Before approaching any investor, you should make a hit list of whom you want to talk to in a spreadsheet and collect all their relevant information there. You want to answer the following questions to ensure they are even the right people to contact:

What vertical do they invest in? Don't approach an enterprise SaaS investor if you've created a dating app, for example.
What stage do they invest in? Some venture capitalists will only invest when a company is in the later rounds of funding or close to IPO. If you aren't there, then you're wasting everyone's time.
What geographical region do they invest in? Don't waste your time crafting a pitch to someone who never invests outside of Silicon Valley if you're based in London. Yes, COVID-19 has shuffled the cards on this, so you can try, but focus first on investors in your geographic region or who are looking to invest in your region.
Who's in their portfolio? You want to be sure that you aren't competing directly with anyone in their existing portfolio. It puts them in a conflict of interest, and if they can't see any synergies, they will pass on you.

In addition, be sure to check out their website and any interviews they have available online. These can give you an insight into their style and personality, so you approach them in the right manner.

Build your network before meeting with venture capitalists

A common mistake of first-time founders is trying to go directly to a venture capitalist without first engaging in the wider community. If they've already heard good things about you from someone in their network before they've talked to you, then you have a huge head start. You get points for how you land on their desk.

Venture capitalists are human and, like everyone else, the opinions of the people they trust affect their decision-making. You should be actively connecting with trusted advisors in their space, other investors, and most importantly, other founders. One of the best ways to meet a venture capitalist is through someone they've already funded. The founder who introduces you will have been through the exact same process and will have a strong understanding of the investor's personality. Alternatively, if you're not the right fit for an investor but they like you, they can champion you to other investors and get you in the door.

The important thing is to create organic relationships where people do not feel as if they are being used. You should bond with people over your shared passions and add value wherever you can. Hopefully, you'll find people with the "pay it forward" spirit and then get into the karma game when you're on the other side.

Make a teaser deck

Some founders try to have their entire pitch deck ready before they even think about meeting venture capitalists, but in reality, things change so fast that your deck will be out of date quickly anyway. Instead, focus on building a teaser deck of just five to eight slides.
Send this with your pitch email to give a potential investor a better idea of what your company does without needing to contact you. This works because you're making their life easier. Rather than needing to look you up or arrange a meeting, they have the information they want available straight away. The brevity is crucial, so you don't make too high a demand on their time. It should include a great intro blurb that you hope will excite any investor who opens it.

As well as making the investor's decision easier, it also saves you from meeting with an investor who isn't the right fit. This stops you from chasing dead ends, and you know that if someone has read it, they are a hot opportunity, because they already know enough to make an informed decision.

Move thoughtfully and build relationships

When it comes to innovation, "move fast and break things" is a great mantra, but the world of investment isn't quite the same. You might only get one chance to meet the venture capitalist who could transform your business, and you don't want to blow it. Prepare in advance, and you'll reduce the possibility of mistakes and increase your chances of getting that all-important funding.

Donna Griffit is a storyteller and pitch alchemist who has helped hundreds of global startups and VCs raise over a billion dollars. She can be found online at www.donnagriffit.com.
15,205
2,021
"How to use market size metrics to secure funding and scale your business  | VentureBeat"
"https://venturebeat.com/2021/12/01/necessary-market-size-metrics-to-secure-funding-and-scale-your-business"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How to use market size metrics to secure funding and scale your business Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Marjorie Radlo-Zandi When founders seek pre-seed, seed, series A, or series B funding , angel investors and venture capitalists want to see a breakthrough product or service that scales and can acquire a significant market share within a sizable market. What do investors expect going into a funding round? Investors need you to understand the data as you use market size metrics to project the market size and future growth of your company. To satisfy investor expectations about your estimated market size, include a realistic set of percentages and dollar values in every slide deck you present to prospective funders. While no investor expects accuracy down to the penny, take extra care to present realistic market size metrics based on independent outside studies when possible. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! TAM, SAM, and SOM: the acronyms and real data points of market sizing When you present your market size data, investors will look for the acronyms TAM, SAM, and SOM. Using them will prove you fully understand what each means and why they’re critical to your product or service. Even when you’re scaling your business, TAM, SAM, and SOM numbers are critical for P&L projections and goals for the company. Use TAM, SAM, and SOM when you’re selecting which projects and geographical market areas to fund from operating capital, and to benchmark against goals you’ve set for the company. TAM (total addressable market) Also referred to as the total available market, TAM is the total revenue possible if a product or service were to achieve 100% market share. TAM answers the question, who theoretically in the world could buy your product or service. It describes the total revenues a company could theoretically make if it had an all-encompassing monopoly or total market share with its product or service. When you calculate TAM, discard any factors that could prevent the company from achieving this total market state. Ignore real-world constraints such as competition, production, and logistical capacity limits, language barriers and geographical distances, and other uncontrollable factors such as climate change. 
Using non-alcoholic beverage companies as an example, the TAM for this market takes in the total worldwide non-alcoholic beverage market, looks at all revenues from beverage purchases, imagines sales in all countries in the world, and assumes no competition except tap water. According to the research site Statista, the global non-alcoholic drinks market was $1.2 trillion in 2021; the TAM for the non-carbonated soft drinks segment, which includes the gut health/herbal tea category, was $334.6 billion in 2021.

SAM (serviceable addressable market or served available market)

SAM is the segment of the TAM within geographical reach that you can target with your products or services. Many beverage-sector companies start out targeting only the U.S. market and have distribution capabilities only for the U.S. Waku is a beverage in the gut health/herbal tea category, which also includes kombucha. According to Statista, gut health/herbal tea is considered a non-carbonated soft drink, a category with a 2021 U.S. market size of $83 billion. For Waku, the geographic SAM would be the U.S. market in the gut health and herbal tea beverage category, within the larger category of non-carbonated soft drinks. Note that angel investors generally invest when the SAM is $100 million or more, whereas venture capitalists look for an even higher minimum SAM.

SOM (serviceable obtainable market or share of market)

Think of SOM as the portion of SAM you can realistically capture. Waku would need to research and estimate its SOM: a realistic percentage and dollar value of the gut health and herbal tea market, within the non-alcoholic soft drink category, that it can capture.

Spending time understanding and communicating the TAM, SAM, and SOM is critical to attracting investors to the company, scaling the business, and achieving the goals you set. These market sizing data points show where operational funds would yield the best results. Using the data, you can consider an array of new products and geographical areas to expand into. A note of caution: If you ignore the data, you'll be flying blind instead of strategically deploying resources where they matter most in your business. Use the data, and you'll set the company on a path to attaining optimal ROI.

Make market size metrics and research your friend

There are a number of publicly available market research sources that will help you understand both existing and future markets. Data in these research reports is essential in setting goals, benchmarking progress, driving business, and seeing future opportunities. Even free research reports are full of valuable market data. Look around, and you'll find abstracts rich with information that cost nothing. For a fee, it's a good idea to tap into market research companies such as Frost & Sullivan, Nielsen, Statista, and Gartner. For the non-alcoholic beverage example, Statista is a good choice because of its specialized research in the beverage category. You can also retain a market research organization that specializes in the beverage category to do custom research.

You can use this data to plan your product roadmap, scale, and establish distribution. It's important to note that with TAM, SAM, and SOM data you can make clear market projections going out multiple years. If you are unfamiliar with creating projections that cover longer timeframes, you may want to consider hiring professional assistance.
A custom market researcher can provide critical insights and give you the numbers you need to make the right choices. You may be wondering how market data works if you're in a highly specialized industry, or if your technology is new and you're not clear on how to pinpoint relevant market trends. In that scenario, it pays to hire a market research consultant specializing in your market; an expert will be more than worth their weight in gold.

When you envisioned your product or service, your focus was on SAM, the segment of your total addressable market within a defined geographic area. As a business seeking funding and targeted operating capital to grow and scale, set your path forward with a deep understanding of your TAM and your SOM, the percentage of the SAM segment you can realistically capture. By understanding TAM, SAM, and SOM key market data and market size metrics, you'll eliminate guesswork from financial projections, show prospective investors how they stand to gain from investing in your company, and put yourself in the best possible position to achieve your goals.

Marjorie Radlo-Zandi is an entrepreneur, board member, advisor to startups, and angel investor who shows early-stage businesses how to build and successfully scale their businesses.
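To make the TAM/SAM/SOM arithmetic above concrete, here is a minimal sketch using hypothetical figures loosely inspired by the beverage example. The specific share percentages and the simple top-down multiplication are illustrative assumptions, not a recommended valuation method.

```python
# Hypothetical top-down market sizing sketch (figures are illustrative only).

def market_sizing(tam_usd: float, sam_share: float, som_share: float) -> dict:
    """TAM -> SAM (reachable slice of TAM) -> SOM (realistically capturable slice of SAM)."""
    sam = tam_usd * sam_share
    som = sam * som_share
    return {"TAM": tam_usd, "SAM": sam, "SOM": som}

# Example: a $334.6B global segment, of which ~25% is assumed reachable (e.g., U.S.-only
# distribution), and 1% of that assumed realistically capturable over the planning horizon.
sizes = market_sizing(tam_usd=334.6e9, sam_share=0.25, som_share=0.01)
for name, value in sizes.items():
    print(f"{name}: ${value / 1e9:.2f}B")
```

With these assumed shares, the SAM comes out near the $83 billion U.S. figure cited above; in practice you would replace the assumed shares with researched numbers rather than round guesses.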
15,206
2,022
"As security issues dominate, use the right plans and metrics to thrive | VentureBeat"
"https://venturebeat.com/2022/01/11/as-security-issues-dominate-focus-on-plans-and-metrics-to-get-ahead-in-2022"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community As security issues dominate, use the right plans and metrics to thrive Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Joe Partlow, CTO of ReliaQuest The end of the year has traditionally meant crunch time for organizations to finish their preparations for the upcoming year ahead. New budgets are allocated, and it’s up to the department leads to communicate metrics, results, and challenges from the past year in order to justify the additional spending for next year. In 2021, cybersecurity was under the spotlight like never before , with cybercrime increasing 600% due to the pandemic. Because of this, organizations are forced to address cybersecurity with direct orders from the top: CEOs and board members. However, among all the metrics that department leaders analyze, one of the most difficult aspects to track is security progress and effectiveness. In fact, measuring this progress remains the primary obstacle for organizations looking to implement an IT security risk management program, so it’s essential that cyber leaders understand how to communicate this to upper management effectively. As companies begin to implement plans for 2022, it is important for security leads to first meet with their direct reports to discuss which metrics to track, so the foundation for measurement is clearly established. Once that is settled, both parties will need to align on ways to continuously revisit and adjust these metrics to ensure the plan doesn’t become obsolete. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Creating a baseline for the year ahead When it comes to reporting metrics across an organization, it’s critical for all department leads to have a conversation with their direct reports at least three to four months prior to the reporting stage. This is a crucial step to ensure the department lead is well-prepared and can determine what results will resonate best with the board. From a sales lens, this conversation is fairly straightforward. How many sales leads are you getting per month? How many of those convert into successful sales? How good are you at talking on the phone to prospective clients? From a cybersecurity lens, however, tracking effectiveness and displaying ROI to the C-suite and board is more complicated. There aren’t any monthly quotas to meet, and many team leaders struggle with ways to display performance. 
Deciding which metrics to track depends on several factors, such as the size of your organization, how many customers you have, or even where your company headquarters is located. That said, there are several aspects of an organization's security posture that should be tracked at businesses of any size. Aligning on metrics for security One of the most important skills a security professional can develop is telling a complicated story to a non-technical colleague — and since 63% of security managers believe board members don't understand the value of new security technologies, telling this story can be a challenge. The easiest way to have this conversation is to lead with metrics. While these will vary depending on the organization, the following are metrics all security team leaders should be aware of, along with tactics for communicating that progress to the board. Level of preparedness: This metric should be constantly monitored since it shows how prepared a company is for an impending breach. It's also one of the hardest to communicate to the board because there isn't a hard and fast number that quantifies how "ready" an organization is. However, encouraging employees to keep corporate-network devices updated and patched is one actionable step and metric you can communicate and track to keep the organization secure. Tool efficacy: This one matters because, as a security leader, you are responsible for providing insight into which tools and services the security team should invest in. Many services will give you an average third-party vendor rating snapshot, which can be checked continuously and presented to the board. These ratings are an effective way to show progress to a non-technical employee and justify the budget needed for specific security infrastructure. Breach attempts or security incidents: While it's a hard one to discuss, this is a necessary metric to communicate. You can show how many times attackers tried to attack the corporate network, and how many of those attempts were detected and blocked. Highlighting a decrease in the number of times these events occur year-over-year will be a key benchmark for board members to measure in order to determine the success of their security programs and where changes may be necessary. Mean time to detect, resolve, and contain attacks: These three should be tracked separately, but analyzing them together can provide new insights about where certain parts of an incident response plan might be lacking. These measurements provide significant value to board members when you're trying to convince them to invest more resources into security tools that will make the company's response to a potential cyberattack as quick and efficient as possible. Trending and mapping risks to the business: Demonstrating that the security program is addressing the most important risks to the business is critical to getting buy-in and support from the board. Mapping the critical business risks back to the security controls and technologies you are implementing is the best way to show ROI, along with trending the results. All good plans should be consistently revisited and adjusted, and that's especially true for cybersecurity. The threat landscape will keep evolving, with cybercriminals constantly leveraging new attack methods. This is not something security leaders and organizations should be thinking about only during the planning and reporting seasons, but all year long.
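As a rough illustration of how those detection, containment, and resolution times could be rolled up for a quarterly board report, the sketch below computes the three averages from a hypothetical incident log. The field names, timestamps, and sample data are assumptions made for the example, not a reference to any particular SIEM or ticketing product.

```python
# Minimal sketch: computing mean time to detect, contain, and resolve from a
# hypothetical incident log. Field names and sample data are illustrative
# assumptions, not tied to any specific SIEM or ticketing tool.
from datetime import datetime
from statistics import mean

incidents = [
    {
        "occurred":  datetime(2021, 11, 2, 8, 15),
        "detected":  datetime(2021, 11, 2, 9, 40),
        "contained": datetime(2021, 11, 2, 11, 5),
        "resolved":  datetime(2021, 11, 3, 16, 30),
    },
    {
        "occurred":  datetime(2021, 12, 14, 22, 10),
        "detected":  datetime(2021, 12, 15, 1, 45),
        "contained": datetime(2021, 12, 15, 3, 20),
        "resolved":  datetime(2021, 12, 16, 9, 0),
    },
]

def mean_hours(start_key: str, end_key: str) -> float:
    """Average elapsed hours between two incident milestones."""
    deltas = [(i[end_key] - i[start_key]).total_seconds() / 3600 for i in incidents]
    return round(mean(deltas), 1)

report = {
    "mean_time_to_detect_hours":  mean_hours("occurred", "detected"),
    "mean_time_to_contain_hours": mean_hours("detected", "contained"),
    "mean_time_to_resolve_hours": mean_hours("detected", "resolved"),
}
print(report)
```

Reporting the three averages side by side each quarter makes it easy to trend them over time and to spot which stage of the response plan is lagging.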
Without refreshed response plans and solid security metrics, sophisticated attackers will outpace your organization. Security leaders can avoid some of the most common missteps and oversights if they take the time to determine how best to measure progress and, in turn, communicate their needs effectively to the C-suite and board. Joe Partlow is CTO of ReliaQuest. "
15,207
2,021
"This AI system learned to understand videos by watching YouTube | VentureBeat"
"https://venturebeat.com/2021/06/11/this-ai-system-learned-to-understand-videos-by-watching-youtube"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages This AI system learned to understand videos by watching YouTube Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Humans understand events in the world contextually, performing what’s called multimodal reasoning across time to make inferences about the past, present, and future. Given text and an image that seems innocuous when considered separately — e.g., “Look how many people love you” and a picture of a barren desert — people recognize that these elements take on potentially hurtful connotations when they’re paired or juxtaposed, for example. Even the best AI systems struggle in this area. But there’s been progress, most recently from a team at the Allen Institute for Artificial Intelligence and the University of Washington’s Paul G. Allen School of Computer Science & Engineering. In a preprint paper published this month, the researchers detail Multimodal Neural Script Knowledge Models (Merlot) , a system that learns to match images in videos with words and even follow events globally over time by watching millions of YouTube videos with transcribed speech. It does all this in an unsupervised manner, meaning the videos haven’t been labeled or categorized — forcing the system to learn from the videos’ inherent structures. Learning from videos Our capacity for commonsense reasoning is shaped by how we experience causes and effects. Teaching machines this type of “script knowledge” is a significant challenge, in part because of the amount of data it requires. For example, even a single photo of people dining at a restaurant can imply a wealth of information, like the fact that the people had to agree where to go, meet up, and enter the restaurant before sitting down. Merlot attempts to internalize these concepts by watching YouTube videos. Lots of YouTube videos. Drawing on a dataset of 6 million videos, the researchers trained the model to match individual frames with a contextualized representation of the video transcripts, divided into segments. The dataset contained instructional videos, lifestyle vlogs of everyday events, and YouTube’s auto-suggested videos for popular topics like “science” and “home improvement,” each selected explicitly to encourage the model to learn about a broad range of objects, actions, and scenes. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 
The goal was to teach Merlot to contextualize the frame-level representations over time and over spoken words so it could reorder scrambled video frames and make sense of “noisy” transcripts — including those with erroneously lowercase text, missing punctuation, and filler words like “umm,” “hmm,” and “yeah.” The researchers largely accomplished this. They reported that in a series of qualitative and quantitative tests, Merlot had a strong “out-of-the-box” understanding of everyday events and situations, enabling it to take a scrambled sequence of events from a video and order the frames to match the captions in a coherent narrative, like people riding a carousel. Future work Merlot is only the latest work on video understanding in the AI research community. In 2019, researchers at Georgia Institute of Technology and the University of Alberta created a system that could automatically generate commentary for “let’s play” videos of video games. More recently, researchers at Microsoft published a preprint paper describing a system that could determine whether statements about video clips were true by learning from visual and textual clues. And Facebook has trained a computer vision system that can automatically learn audio, textual, and visual representations from publicly available Facebook videos. Above: Merlot can understand the sequence of events in videos, as demonstrated here. The Allen Institute and University of Washington researchers note that, like previous work, Merlot has limitations, some owing to the data selected to train the model. For example, Merlot could exhibit undesirable biases because it was only trained on English data and largely local news segments, which can spend a lot of time covering crime stories in a sensationalized way. It’s “very likely” that training models like Merlot on mostly news content could cause them to learn racist patterns as well as sexist patterns, the researchers concede, given that the most popular YouTubers in most countries are men. Studies have demonstrated a correlation between watching local news and having more explicit, racialized beliefs about crime. For these reasons, the team advises against deploying Merlot in a production environment. But they say the model is still a promising step toward future work in multimodal understanding. “We hope that Merlot can inspire future work for learning vision+language representations in a more humanlike fashion compared to learning from literal captions and their corresponding images,” the coauthors wrote. “The model achieves strong performance on tasks requiring event-level reasoning over videos and static images.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,208
2,021
"Reliable data and where to find them | VentureBeat"
"https://venturebeat.com/2021/12/13/reliable-data-and-where-to-find-them"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Reliable data and where to find them Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Susan Wu, senior director, marketing research at PubMatic. Data is a cornerstone of modern business , with the ability to uncover enlightening, even breakthrough discoveries with respect to business decisions. But one dataset can tell many stories, and sometimes those stories simply aren’t aligned with reality. The recent prediction data, released ahead of the 2021 NJ Governor’s election, provides one example. The data forecasted a sizable lead for incumbent Phil Murphy, but in the end, he clinched the victory by an extremely thin margin. This is not the first, nor likely the last, time that a dataset uncovers falsehoods, which begs the question: Is this data reliable? While the answer isn’t always clear-cut, data can prove effective and informative when it is appropriately managed. Data sources in today’s business environment are virtually limitless and constantly evolving, creating unprecedented opportunities to successfully leverage data, yet also countless pitfalls when inappropriately analyzed and applied. Avoiding such failure requires accurately defining datasets, identifying data limitations, and establishing reliable data. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Defining the dataset Quant’s data science, or information that can be measured or quantified, clearly plays a critical role in business decision-making, but it must not be viewed as the absolute pathway to success due to the many unquantifiable intangibles that inevitably arise in analyzing and applying data. In other words, relying entirely on quant data to reach decisions may lead to disappointing results. No cookie-cutter method of analyzing data has yet to be discovered. However, by framing problems clearly and accurately, the chances of solving data-specific issues increase dramatically. Our team, for example, generates a quarterly industry report that looks at ad spend by industry categories. We sought to understand which ad categories were most impacted by events — the global health crisis (which is still a major ongoing event), along with the U.S. presidential election, housing boom, and most recently, the economic recovery — and how the market would recover, so we could anticipate or at least manage expectations on potential future impacts. 
While a regression would prove overkill for such research, category classification and segmentation techniques were helpful to understanding seasonality and discretionary spending among categories. The pandemic naturally created anomalies, which had to be considered in the data. At first, looking at year-over-year changes during certain 2020 months only showcased that ad spending was declining. But by looking at quarter-over-quarter, we were able to extract the leading category indicators driving different phases of recovery, which more accurately represented trend lines. Data limitations Data hygiene is king , although it invariably comes with data limitations. Consistent, quality, unbiased data is the source of impactful insight into trends, while compromises in these areas tend to create a bias in information. To minimize this concern, constant and vigilant awareness of data limitations (e.g., understanding how and where the data was mined) and seeking ways to keep data in check is vital. Trend analyses are often used to anticipate future events based on historical behaviors. In the case of our quarterly global digital advertising spend reports, the pandemic made the analyses fairly challenging due to the volatility in the market for an extended period of time. In order to create insightful analysis at an industry level, we employ a regimented protocol for the raw data: how it regularly gets mined from our systems to produce an error-free dataset for analysis. The data is aggregated, “checked and balanced” from other sources, and then vetted to ensure there’s no unintentional bias in the data pool. Only then can we start analysis, as the result will have much greater accuracy. Reliable data analysis Less is more when analyzing and writing about data. Readers typically do not require every detail, and data reliability benefits significantly from improved and focused writing skills. Intent reigns when writing data-specific content. Insights must aim at articulating only the necessary aspects of the story. Data reliability increases exponentially alongside a strong written analysis, as does the likelihood of applying it successfully to business applications. A second, yet equally critical, element of data reliability is the continued exploration and learning from other research and data professionals. Innovative approaches and new data resources continually surface at a frequency never before seen. Keeping up with current trends in a constantly evolving field is a task within itself, yet failure to do so may render all data processes irrelevant and, ultimately, lead a business the way of the dinosaur. Data is ubiquitous. On one hand, data is absolutely essential for making informed business decisions in today’s global business environment. On the other, it poses the enormous, constant challenge of accurately interpreting a dataset-specific to any given objective. In the end, data is only as valuable as the quality of the analysis. The more refined and meticulous that process, the more invaluable role data can play in everyday decision-making. Susan Wu is a senior director of marketing research at PubMatic. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. 
"
15,209
2,022
"CES 2022: AI is driving innovation in 'smart' tech | VentureBeat"
"https://venturebeat.com/2022/01/05/ces-2022-ai-is-driving-innovation-in-smart-tech"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages CES 2022: AI is driving innovation in ‘smart’ tech Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Despite all the stories about big companies bailing out of CES 2022 amidst the latest surge in COVID-19 cases, the consumer electronics show in Las Vegas is still the place to be for robots, autonomous vehicles, smart gadgets, and their inventors — an opportunity to take stock of what’s required to build practical machine intelligence into a consumer product. A sampling of the innovations VentureBeat heard about in advance briefings: Efforts to improve data integration within vehicles and consolidate computing capabilities ( Sonatus ) and establish a common operating system or framework for automotive computing ( Apex.ai ). An autonomous driving retrofit kit for existing vehicles, with software that eschews bleeding edge AI in favor of deterministic programs and practical constraints on where the vehicle will operate ( Perrone Robotics ). Food and retail delivery robots for use within airports and for curbside delivery ( Ottonomy ). Smart driver-facing cameras for cars and trucks that sound an alert when a driver is falling asleep or not watching the road, either for industrial fleets ( SmartWitness ) or as a consumer safety device ( Xperi ) or an enhancement dashcams ( NextBase ), where the ability to record road rage incidents or aggressive police road stops is an additional benefit. A software framework for running deep neural network AI models on small, power-constrained devices ( Deeplite ). A partnership between Fluent.ai , a specialist in voice AI, and the audio tech experts at Knowles Corp. , on voice command applications for compact, low-power devices. Computer vision packed into compact devices for the visually impaired that can read — and extract meaning from — a full page of text at a glance, as well as a hearing aid that uses visual cues like reading the lips of the person the listener is talking with at a party to know which sounds to amplify ( OrCam ). OrCam and Sonatus are among the companies no longer planning to travel to Las Vegas or announce products at CES, and it’s possible some of the other vendors VentureBeat interviewed in advance of the event will also be no-shows. Big names like Microsoft, Google, Intel, Amazon, and T-Mobile backed out in recent weeks. Augmented reality, virtual reality, and the metaverse will be topics of discussion that will have to proceed without Meta (the company formerly known as Facebook). 
Automotive tech will be a big theme of the event, but General Motors, BMW, and Mercedes-Benz decided not to make the drive (GM’s all-digital presence is still supposed to include a video keynote from CEO Mary Barra on Wednesday). On the other hand, some like Perrone Robotics had already shipped vehicles and a test track set up, indicating their commitment Still, the Consumer Technology Association, which sponsors the event, determined that the show must go on. Despite the big company drama, exhibiting and networking at CES remains an opportunity for “thousands of smaller companies, entrepreneurs, and innovators who have made investments in building their exhibits and are counting on CES for their business, inspiration, and future,” CTA CEO Gary Shapiro wrote in an op-ed for the Las Vegas Review-Journal. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Driving AI progress Although CES exhibitors pitched VentureBeat on everything, including sexual wellness products, VB sought out briefings related to the uses of data and AI that readers can learn from. In particular, consumer device makers tend to want to take advantage of the cloud for software and data updates without being dependent on the cloud — with the smarts of the smart device resident on the device itself. So they’re worth studying as pioneers in edge computing. Much as enterprise tech may have to learn from the consumer technology world, the opposite is also true. For example, Jeffrey Chou, CEO, and founder at Sonatus, said that one way for computerized systems in automobiles to improve is by learning from the model of the enterprise datacenter. In other words, siloed software running on lots of little computers (electronics control units or ECUs in automotive jargon) needs to be simplified and tied together with middleware, which Sonatus provides. That unification has to happen while preserving real-time performance for vehicle safety systems and addressing new concerns like vehicle cybersecurity. “There’s no short-term fix. The long-term fix is to do software right,” Chou said. Apex.ai has a somewhat similar story about improving the software foundations for autonomous driving and other smart vehicle tech, which, in its case, involved a series of enhancements and optimizations for ROS, the open source Robotics Operating System (considered as more of a programming framework). “There isn’t a company in automotive that doesn’t use ROS for prototyping,” CEO Jan Becker said, adding that his company’s products help with turning successful prototypes into production products. Auto processor consolidation is paving the way for more sophisticated software, according to Becker. “The trend that we see now — that Tesla introduced that a couple of years ago and everybody else is introducing that in the next three years — is having those more powerful central more central computers, for infotainment for driver systems, potentially for a gateway potentially then also for vehicle safety functions like ESP and ABS anti-lock braking,” he said. At the same time, Becker noted it’s been years since self-driving car enthusiasts began predicting that robot taxis would be roaming the streets any day now. “The truth is, the problem is really, really hard. What our industry has begun to understand better in the last couple of years is which applications are commercially reasonable,” he said. 
For example, long before fully autonomous driving becomes available and affordable in passenger cars that need to be able to go anywhere, it can be practical for commercial vehicles navigating well-known and profitable routes. Perrone Robotics is applying that approach to autonomous commercial vehicles that can navigate freight yards or circulate through urban or campus bus routes. Although it has partnerships with electric vehicle manufacturers like GreenPower Motor Company, Perrone also sells a retrofit kit that works with the pedals, transmission, and steering wheel of conventional vehicles to render them autonomous for low-speed operation over a known route. “There’s going to be a very long path to autonomy,” CEO Paul Perrone said. “My focus is on, here’s what you can do now.” In fact, he’s a bit of a contrarian: rather than chasing bleeding edge AI applications, he leans toward “deterministic software,” the logic of which is easier to certify as safe for the operation of a vehicle. “You can’t just train it with some probabilistic learning system that will probably get it to its destination,” he said. Meanwhile, Ottobot is capitalizing on automotive innovations, such as lidar rangefinders, for use in its delivery robots, which began navigating the concourses at Cleveland International Airport in December 2020. Ottobot also recently announced a partnership with restaurant tech company Presto for curbside and parking lot delivery of food orders with less labor required. While taking advantage of autonomous vehicle tech, Ottobot has innovated in other directions to allow its bots to go where many other delivery bots can’t because, like cars, they rely on GPS navigation. To work within an airport, for example, Ottobot creates a software simulation of the floor plan. “We create a digital twin and then navigate within that,” CEO Ritukar Vijay said. The arrangement of sensors also needs to be different to navigate through crowds and see glass barriers. Scaling down While automobiles and robots are capturing increasing attention at CES , the show is best known for showcasing smaller gadgets. When device manufacturers talk about embedding AI, typically that doesn’t mean expecting big AI models to run on the device. A gadget may or not embed some modest machine learning capabilities, but typically the training of the model occurs in the cloud while what gets installed on the device is a much more compact inferencing model for interpreting and acting on sensor data. Even with that simplification, optimizing software to run within the size, power, and processing constraints of a given device can be a steep challenge. For example, OrCam’s assistive technologies for the blind and visually impaired are in form factors the size of magic or a clip-on camera for a pair of glasses. So while the vice president of R&D, Oren Tadmor, respects the AI processors from companies like Nvidia, “they’re not computers that we can ever dream of fitting into our devices,” he said. Instead, the company winds up working with specialized chipsets for vision processing. At the same time, Tamdor says, OrCam has been able to take advantage of big advances in the state of the art for deep learning as it applies to computer vision, which has made problems like face recognition much easier to solve. OrCam is an Israeli company whose cofounders Amnon Shashua and Ziv Aviram also founded Mobileye, a leader in computer vision for collision avoidance and self-driving car technology. 
“For computer vision, we can do anything, or almost anything, that a person can,” Tamdor said. “And it’s just a matter of finding, what are the features that our users can use?” Software versus hardware optimization Hardware-specific optimizations may sometimes be necessary, but that isn’t stopping software tool makers from trying to promote a more standardized approach to device programmability. “I think one of the exciting things here is the interplay between these two types of optimizations,” said Davis Sawyer, cofounder, and chief product officer at Deeplite. “Where the two meet up, that’s where we see 400 to 500% increases over one or the other on their own.” At CES, Deeplite announced the Deeplite Runtime software development kit for creating efficient deep learning models based on Pytorch, particularly for computer vision applications. Where the company’s previous Deeplite Neutrino product worked with GPUs and other types of processors, the new Deeplite Runtime is specifically for compiling applications to run on ARM processors, which are among the most popular on smart devices. “Given the prevalence of things like ARM CPUs , the familiarity with developers, and also the low power profile for battery-powered devices, that’s where I think there [are] a lot of opportunit[ies] created,” Sawyer said. Fluent.ai, a device software player focused on voice command systems, aims to be “as hardware agnostic as possible,” CEO Probal Lala said. However, some hardware partners prove to be easier to work with than others. At CES, Fluent.ai is announcing a partnership with audio tech specialist Knowles, and they’ll be jointly demoing voice-controlled earbuds. For Knowles, the attraction is that Fluent.ai’s software operates efficiently, without being dependent on cloud services or the power and network capacity required to access them. “They offer a large command set, the largest I’ve ever seen that’s completely offline,” said Raj Senguttuvan, director of strategic marketing for audio and sensing solutions at Knowles. That opens up a wide range of entertainment and business application opportunities, he said. Fluent’s key optimization is that it shortcuts the common voice application pattern of translating voice to text and then doing further processing on the text. Instead, the software does its pattern matching by working with the audio data directly. For smart tech innovation, just add imagination The increasing variety of base technologies, including AI capabilities, ought to get you thinking about business opportunities. “I’m a big believer that the technology doesn’t mean anything to the end-user without a little imagination as to how it is going to improve their lives,” Richard Browning, chief sales and marketing officer at NextBase, a maker of car dash cams. For NextBase, that means re-imagining how the dashcam can move beyond being just a mobile security camera you can use to share crash footage with your insurance company. Just the challenge of producing good video under conditions that can range from glaring daylight to rainy daylight is steep enough and requires some AI image processing power, Browning says. The NextBase IQ product being announced at the show and readied to ship in September takes that capability further to also provide driver assistance (recognizing when other drivers are behaving badly) and spatial awareness (anticipating accidents so they can be recorded more completely). 
The addition of an inside-facing camera allows the system to detect and warn drowsy or distracted drivers, but it also allows for capturing video evidence that wouldn't be captured by a front-facing camera, such as road rage or "aggressive road stop" incidents. With a voice command, the device can be toggled into "witness" mode to record exactly how you behave when a cop walks up to the vehicle and asks for your license and insurance. When in witness mode, whether triggered by voice command or sensors detecting that an accident has occurred, the video is transmitted to a cloud account for later review. Previous versions of NextBase's products required the driver to manually download video data to their phone. With these and other features, the NextBase IQ has almost outgrown the "dashcam" category as it was previously defined — except the company can't figure out what else to call it, other than a "smart dashcam," Browning says. "People understand what 'smart' is these days — they've got [a] smart home, smart security, smart health — it's a product that is connected and intelligent." That will be a large part of what CES 2022 is about. "
15,210
2,022
"10 data-driven strategies to spark conversions in 2022 | VentureBeat"
"https://venturebeat.com/2021/12/23/10-data-driven-strategies-to-spark-conversions-in-2022"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community 10 data-driven strategies to spark conversions in 2022 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Atul Jindal, web design and marketing specialist. You have put in a lot of effort in driving traffic to your website. But what’s the point of all these web visitors if they don’t buy what your website sells or do what it wants them to do? Traffic acquisition is not the end. It is the means to an end. And what is the end? Customer acquisition. To convert your web traffic into leads and leads into customers, you need to optimize your sales funnel. In other words, you need conversion rate optimization (CRO). In this day and age, when your customers have more options than ever before, you need to fuel your CRO efforts with data-driven insights. Otherwise, it will not cut it. In this article, we will discuss ten powerful data-driven CRO strategies you can use to convert your web visitors and social followers into leads and leads into paying customers. Importance of CRO CRO is critical for your online stability. First and foremost, CRO directly affects your bottom line. When done right, it can increase your sales and decrease customer acquisition costs. Once that happens, you might see a tangible increase in your revenue. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Secondly, CRO reveals in-depth data. That means, once you go through the process, you end up with invaluable customer insights, which you can use to make data-based decisions and guide your marketing efforts, resulting in data-driven conversions. Finally, it helps SEO. While CRO has nothing to do with driving traffic to a website, it has implicit SEO benefits. CRO improves customer experience, which is a significant ranking factor. And with the CRO-driven increased revenue, you can invest more in SEO and drive better results. Now that you have a few pointers about the importance of CRO let’s discuss some time-tested and proven CRO strategies. Data-driven strategies for CRO Data is the new currency. And if you learn to use it right, well, you can get rich. And of course, when doing something as critical as CRO, we cannot neglect data as the most valuable resource, can we? So, here are top data-backed CRO strategies for your business: 1. Research your audience Audience analysis reveals data that can transform your CRO process and shape it to yield truly profitable results. 
It can help you personalize your customer touchpoints and be more relevant. Finding your audience’s interests and demographics can also help you run hyper-targeted ads and minimize your ad spend while maximizing ad revenue. Track your customers’ behavior to find out what interests them, which social media platforms they hang out on the most, the blogs they follow, and what grabs their attention and use that information to drive data-driven conversions. Here’s an example of how a small business leveraged audience data to increase their conversions: A Nashville-based lawn care business drove an ads campaign with a copy that read, “Local lawn pros in Nashville are a click away.” This ad had a 1% CTR and 10% conversion rate. They were in an effort to optimize their CTR and conversions, tapped into their audience’s census data. And what they found revolutionized their ad results. Their target audience was price-sensitive people. They used this information to optimize their ad copy and changed it to “Cheapest lawn mowing in Nashville. Lawn Mowing from $20.” As a result of their data-driven CRO approach, they saw a 30% increase in on-page conversions. Do you see how audience data can work wonders on your conversions? 2. Implement data analysis to find the most promising ways to post on social media Social media is a data goldmine. It can reveal so much about your customers that your metrics can skyrocket if you use all of this information correctly. Social media analytics can help you find: What visuals appeal to your target audience the most What content type gets the greatest engagement When is the best time to post on social media What CTAs generate the greatest leads And a whole lot more. Once you have successfully acquired this data, you can develop a visual theme for your brand. For example, data has shown that your customers are more inclined towards darker colors. You can include dark colors in your visual content. If they engage more with amusing content, get amusing. If they are active during the night, post at night, and so on. You can use data from social media to run personalized video ads as well and have another chance of optimizing your conversions as customers that arrive through these ads are 184% more likely to convert. Remember, all of these data-driven steps will help you connect better with your customers and be more relevant. And healthy connection and relevance combine to form one of the founding pillars of better data-driven conversion. 3. Use attention-grabbing headlines Conversion begins with a click. And clicks come after you have successfully grabbed your user’s attention. A headline is often the first thing your users come across, and hence an excellent tool to use for grabbing their attention. Therefore, using attention-grabbing headlines (paired with other factors) can lead to better conversions. This is not your pass to creating controversial and low-value titles. Grab attention while delivering value and maintaining class. Again, tap into website analytics to find out which headlines have worked the best for you. If you are entirely new to the website world, know that headlines with numbers have shown to have 30% higher conversions than those without numbers. Additionally, short and concise headlines, which have a negative superlative (like x number of things you have never seen before or x killer Instagram profiles you need to follow), have a higher tendency to earn more clicks. 4. 
Perform ad split testing (A/B testing) A/B testing or split testing reveals incredibly insightful data that can work wonders on your bottom line. It can reveal the weaknesses and strengths of your ad, so you can know what to change and what to do more for increased conversions. Use the data you have acquired from audience research to create different ads. Then target other users from the same demographics with these ads. Finally, analyze the ad metrics to uncover their performance data and see which one converts better. Continue changing the ad elements, such as ad copy, headline, image, CTA copy, CTA position, etc., and keep on analyzing and optimizing. For example, if an ad with a red CTA button converts better than the one with a green CTA button, turn all your CTAs red to improve conversions. Sony, world-leading consumer electronics manufacturer, increased its CTA by 6% and add-to-cart ratio by 21.3% through split testing their banner ads. 5. Use data from Google Analytics to optimize website elements Your CRO efforts boil down to how well-optimized your website is. You could have a great headline, a fantastic ad, and a wealth of customer information, but if your web elements aren’t conversion-optimized, you may not get that many sales. Perform A/B testing on various elements of your website to see which version works better. Analyzing the website rankings list reveals that color influences 85% of buying decisions, and adjusting them can result in a 24% increase in conversions. Google A/B tested more than 50 shades of blue on their CTA to see which one converted the best. You don’t have to be as obsessive as Google. But still, test the conversion metrics of your website by varying the colors of its elements, and analyzing data to find out which one generates the most conversions. Apart from changing colors, you can adjust many other elements to optimize your website’s conversion. For example, you can reduce the number of fields required in the signup form. Neil Patel did just that and experienced a 26% increase in conversions. You can try doing this if data has shown that people bounce from your website after seeing the signup form. Another thing you can do is to change the position of your CTA or the layout of your web page. Remember to track the effect of these changes by analyzing data so that you can yield actual CRO results. 6. A/B test your landing page Nothing influences conversion more than a landing page. Each element of your landing page can affect your conversion rate, and hence your bottom line. Therefore, you have to be very careful when optimizing it. First, use customer data to create a relevant and attractive landing page copy. Next, use heatmaps to find out where their attention is focused the most. Use this information to write and design a landing page that converts. Peep Laja and his team increased the data-driven conversion rate of a truck driver’s community website after several months of varying different elements of the website and testing results. They used attention heatmaps to find out that the left side of the page got the most attention, incorporated better design elements, used attention-grabbing images, and wrote a more relevant copy. As a result, the conversion rate of the website’s landing page increased by 79.35%. 7. Implement rich snippets Rich results or rich snippets are Google’s offering that enables websites to attract qualified leads. And do you know what qualified leads do? They convert. 
Therefore, activating rich snippets should be in your CRO toolbox. Use structured data to create schema markup and inform Google about what you want it to display in rich snippets. There are different types of rich snippets. If you want to leave no money on the table, you can try various snippets to see which one converts better. Then you can continue to use the well-performing one. A quick caveat though, including structured data in your HTML and running rich results tests do not guarantee rich snippets. 8. Use CTAs to optimize well-performing blogs Blogs are major lead magnets. They can attract 67% more leads and generate 6x more conversions. But that all depends on the CTA. A blog ends with a CTA. And how the CTA performs determines whether you will get a lead or a conversion out of the blog or not. Banner-blindness is real, which reduces the value of end-of-the-blog banner CTAs. Therefore, if you want your blog leads to convert, you should try using anchor text CTAs. Anchor text CTAs are texts incorporated in the blog body that links to a relevant landing page. It helps because, one: it is not a banner; and two: because it is eye-catching, even for people who are just skimming through the blog. Moreover, anchor text CTAs are said to boost conversions by up to 121%. 9. Add lead flows Lead flows are popup boxes that are triggered after users perform certain actions on your website. For example, a lead flow offering a limited-time discount pops up after a user is about to leave the web page. These pop-ups attract users’ attention and attempt to funnel them through to conversion. But that’s not the only reason why lead flows have the potential to enhance your data-driven conversion rate. Lead flows remove friction in your conversion funnel. They present the offer, collect information, and deliver value in one box, which is significantly better than a customer reading through the blog, then clicking on a link, filling up an email, and getting a download link. Based on past user activity, you can use any of the three types of lead flows, including slide-in boxes, drop-down boxes, or popup boxes, whichever has been shown to work best with your audience. 10. Use ecommerce data to optimize website conversions An ecommerce website thrives on increased conversions. You can tap into the Google analytics of your website to spot areas that are creating friction in your sales funnel and optimize them to enhance your conversion rates. For example, if most of your web traffic is leaving the website after viewing the products page, it’s time to optimize that page. Or, if you are experiencing high cart abandonment rates, you can use popups triggered by exit intent to offer discounts or surprise gifts. 61% of customers believe surprise gifts are excellent ways to keep them engaged. To remain competitive, conversion rate optimization is key Regardless of the industry you are in, if you want to stay competitive in the digital world, you have to perform conversion rate optimization. But, while optimizing your website for better conversions, you have to make sure to take data-driven steps so you can eliminate guesswork and be more confident about the results. Start by researching your audience and gathering information from as many channels as you can. Then optimize your customer touchpoints according to this research to be more relevant to the consumers and drive them towards conversion. Remember, CRO is not a one-time process. 
So, make a routine of A/B testing your website and marketing assets to uncover data that can drive your CRO efforts and keep your conversions high. Atul Jindal is a web design and marketing specialist. "
15,211
2,022
"As third-party cookies fade, the cold start problem becomes an opportunity | VentureBeat"
"https://venturebeat.com/2022/01/05/as-third-party-cookies-fade-the-cold-start-problem-becomes-an-opportunity"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community As third-party cookies fade, the cold start problem becomes an opportunity Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Alexandre Robicquet, CEO and cofounder of Crossing Minds We’re all familiar with the immediate popup on the websites we visit: will we accept all cookies, including third-party cookies? The answer is typically a quick but reluctant yes. Without accepting third-party cookies, we wouldn’t be able to interact meaningfully with those sites. But with the rise of new consumer data privacy laws , this popup has come to represent more than just an agreement. It now epitomizes the massive, pressing challenge that’s plaguing online businesses: the “cold start.” The cold start — often referred to as the cold start problem — is the phenomenon in which businesses don’t have the right technology to deliver truly personalized online experiences for new or anonymous users. This is due to a lack of existing information on those customers’ past purchases or preferences. When a customer enters a website for the first time, any “ personalization ” that a customer sees isn’t actually specific to them at all. But the stakes are high: in an ecommerce landscape where consumers have come to expect experiences tailored just for them, a lack of meaningful personalization can have dire consequences when it comes to conversions and revenue. In fact, 91% of today’s consumers say they’re more likely to shop with brands that provide offers and recommendations that are relevant to them — but on average, 68% of visitors on a site are new users. Over the past two decades, businesses have been trained to believe that the third-party cookie — a vehicle through which to gain access to customers’ personal information — is the best way to get around the long-standing cold start problem. Unfortunately, this strategy has also opened up a new can of worms, resulting in customers who either feel spied on due to hyper-personalized recommendations or who become frustrated by poor recommendations based only on their age or gender. In both instances, allowing third-party cookies to pull personal information is a sacrifice consumers have been making for years, without fully understanding the repercussions. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! But things are changing. 
The power of the third-party cookie is in rapid decline, thanks to constantly tightening consumer privacy laws such as the California Consumer Privacy Act (CCPA) and the General Data Protection Regulation (GDPR). Facebook made headlines for the lax privacy protections that led to the Cambridge Analytica scandal. And Google has announced plans to phase out its own use of third-party cookies in Chrome by the end of 2023. Consumers are growing tired of the third-party cookie, too; a recent survey showed that 86% of consumers feel a growing concern about their data privacy, with 40% expressing a lack of trust that companies will use their data ethically. While first-party cookies — which are stored directly by the website a consumer visits — can help build a solid customer experience without exposing users' data, it's third-party cookies that represent a clear threat to privacy. Clearly, keeping private information private is no longer just a service to customers — it's a business imperative. Still, after getting their tried-and-true, cookie-centric strategy yanked out from under their feet, companies are scrambling to pivot. A change in mindset is necessary. Businesses must adjust their thinking on the cold start and no longer view it as a problem, but rather embrace it as an opportunity. Crucially, major changes to existing platform architecture aren't needed to accomplish this. By embedding technologies like AI into a platform, businesses can start cold with customers and seamlessly personalize from the outset — all without demanding their private information. How does this work? Instead of focusing on who customers are, companies should focus on what they're doing once on a website. With access to live data from on-platform interactions such as short quizzes, content filtering, and product indexing, businesses can understand what customers really want and immediately deliver compelling experiences. Fostering early insights about customers helps paint a more nuanced picture of what they're likely to want in the future, allowing companies to add value through true personalization at each customer touchpoint — website, email, and beyond. Ultimately, this results in better relationships between businesses and their customers.
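To make that concrete, the sketch below scores a small catalog against nothing more than a new visitor's quiz answers and first few on-site clicks, first-party signals collected in the session itself. The catalog, tags, and weighting are invented for the example; this is a toy content-based approach, not Crossing Minds' product or any vendor's system.

```python
# Toy sketch of "cold start" personalization from first-party, in-session
# signals only (a short quiz plus a few clicks). Catalog, tags, and weights
# are invented for illustration.
from collections import Counter

catalog = {
    "trail-runner-x": {"running", "outdoor", "budget"},
    "city-sneaker":   {"casual", "urban", "budget"},
    "alpine-boot":    {"hiking", "outdoor", "premium"},
    "track-spike":    {"running", "performance", "premium"},
}

def recommend(quiz_answers: set, clicked_items: list, top_n: int = 2) -> list:
    """Rank items by overlap with quiz answers and with tags of items already clicked."""
    interest = Counter({tag: 2 for tag in quiz_answers})   # quiz answers weigh more
    for item in clicked_items:
        interest.update(catalog.get(item, set()))          # each click adds its tags
    scores = {
        item: sum(interest[tag] for tag in tags)
        for item, tags in catalog.items()
        if item not in clicked_items
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# A brand-new, anonymous visitor: no history, just what they did on this visit.
print(recommend(quiz_answers={"running", "budget"}, clicked_items=["city-sneaker"]))
```

Everything the ranking uses was volunteered or observed during the visit itself, which is the practical difference between this kind of approach and one that leans on third-party profiles.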
Language choice is also important. We typically only categorize something as a “problem” when there’s no better alternative available — hence the cold start “problem.” But today, there is a better alternative to traditional methods of personalization: embracing the cold start as a secret weapon to improve customer experiences and build long-term loyalty.

We stand at the beginning of an inflection point. The demise of the third-party cookie is poised to transform how companies interact with their customers, and it all begins with the cold start — once a problem, now an opportunity.

Alexandre Robicquet is an experienced AI scholar and CEO and cofounder of Crossing Minds.
"
15,212
2,022
"Data users demand simple actionable insights quickly  | VentureBeat"
"https://venturebeat.com/2022/01/16/data-users-demand-simple-actionable-insights-quickly"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Data users demand simple actionable insights quickly Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Prasanna Dhungel, m anaging partner of GrowByData. Businesses today have lots of data , modern data warehousing , AI (Artificial Intelligence) tools , and nice visualization platforms. Still, users across small and large enterprises globally are frustrated by their inability to quickly get answers to their questions from their data. We now have data everywhere and are drowning in a sea of data. We have transformed from a data-scarce to a data-overwhelmed society, struggling with information overload. But users simply want answers from their data. Marketers, for example, want to know when a competitor’s product takes the spotlight from their brand. Doctors want to give pointed advice to their patients on what to do based on their medical records. There are many examples, but each user’s need is the same: to get answers to their questions. The obvious question is: how did we get here? VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Over the last few decades, we have continually digitized across a growing number of industries. Internet and mobile phones have enabled easy publishing and we now have data across text, images, sound, and videos stored in accessible cloud servers. We have warehouses like Azure and Redshift , and analytics like Big Query. These are exciting times for us data professionals. Unfortunately, I fear we are getting caught up in the tools hype. We are continuing to push more and more numbers, charts, and tables to our users. That is precisely the problem. We are not offering directions on what to do. Per Dell’s study, “Data overload has become a significant barrier to transformation”. Data users don’t ask whether we use the latest AI or for every data-related detail possible. When we share large volumes of data, they push back, stating, “I don’t know what to do with all of this. Can you provide simple, actionable tips and not overwhelm me?” Users want to save themselves from data overload. In retail, brand managers want to understand whether and how shoppers see their products. If a brand isn’t visible on Black Friday, that’s a massive problem. If a brand’s promotions, prices, or shipping isn’t as compelling as those offered by competitors, shoppers won’t buy. 
You can share prices, reviews, visibility, and more with a brand manager. However, the details alone aren’t valuable. You must really tell the brand manager when a competitor takes their spot or offers a better deal.

In healthcare, electronic medical records store patients’ medical conditions, medicine taken, lab results, x-rays, and more. However, the physician must know which patient to remind about scheduling a mammogram, colonoscopy, or other wellness procedure. The doctor must know which patient is likely to get sick and needs intervention. If a home monitoring device detects that a sick patient has fallen, the device must notify 911 to get them to the hospital and notify their doctor. All of this value must be derived from the data.

In accounting, managers want insight into financial ratios. Imagine a business owner not quickly knowing that cash is running low due to a bad accounts receivable collection. A customer service representative would need to inform a customer they will not get service unless payment is received.

Our industry is flooded with articles on data tsunami, overload, and fatigue. Users want to understand what happened, why, and what to do next. Gone are the days when you could tell your users that you will get back with answers tomorrow. Stale data is not helpful. Users demand answers now. The right question is: how do we as data professionals address these user needs? I believe we must do the following:

1. Deeply understand the needs of data users

Creating the right answer that the user finds impactful takes many conversations and product iterations. We must directly ask users what they find valuable. Usage data is a good proxy to understand customer adoption. Understanding whether customers are correlating ROI with the presented insights is key.

2. Engage an interdisciplinary team to create insights

We want to uncover the key performance indicators (KPIs) that allow users to spot opportunities to grow revenue and reduce cost. To remain unbiased from a data-heavy mindset, we need interdisciplinary team members, including colleagues from data science, customer success, content marketing, and design, who can speak to customer needs.

3. Combine macro trends with notable events

We must pair high-level trends with pointed, notable events in interactive visuals. Current tools like Tableau and Google Data Studio visualize the stories in the data. To gain user trust in our data, we must be transparent on how we generated our analysis.

4. Allow data users to access insights on their tool of choice

User preferences have evolved from “I will log into your software to understand what I need to know” to “alert me on my phone to tell me what I must do only when something important happens.” Today, data users ask devices like Alexa and Google Nest to remind and inform them of the news, time, weather, and more. This means that businesses must push information to a data user’s application of choice, such as podcasts, digital assistants, YouTube videos, email, and so on. Wherever our users are, that is where our solutions must be.

Big data, cloud computing, and analytics advances offer us tremendous power. To share these benefits with our users, we must provide impactful, simple, and easy-to-use insights quickly and cost-effectively using a data user’s tool of choice. Only this will provide the value today’s data users are asking for.
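As a concrete illustration of turning raw numbers into the kind of pointed answer described above, here is a minimal sketch that reduces a competitive pricing snapshot to one or two alerts. The data, field names, and thresholds are made up for the example and are not GrowByData's implementation.

```python
# Hypothetical daily snapshot of search rank and price for one product;
# the structure and numbers are invented for the example.
snapshot = {
    "our_brand":    {"rank": 3, "price": 49.99},
    "competitor_a": {"rank": 1, "price": 44.99},
    "competitor_b": {"rank": 5, "price": 52.00},
}

def build_alerts(data, brand="our_brand"):
    """Reduce the raw numbers to short, actionable messages."""
    ours, alerts = data[brand], []
    for name, rival in data.items():
        if name == brand:
            continue
        if rival["rank"] < ours["rank"]:
            alerts.append(f"{name} now outranks you (#{rival['rank']} vs. #{ours['rank']}).")
        if rival["price"] < ours["price"]:
            alerts.append(f"{name} undercuts your price by ${ours['price'] - rival['price']:.2f}.")
    return alerts or ["No action needed today."]

for message in build_alerts(snapshot):
    print(message)
```

The point is the shape of the output: a sentence the brand manager can act on, rather than another table to interpret.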
Prasanna Dhungel is managing partner of GrowByData, a marketing intelligence firm that serves global advertising agencies and brands. He has two decades of experience providing data insights to thousands of users globally in retail, advertising agencies, and healthcare.
"
15,213
2,021
"When AI flags the ruler, not the tumor -- and other arguments for abolishing the black box (VB Live) | VentureBeat"
"https://venturebeat.com/2021/03/25/when-ai-flags-the-ruler-not-the-tumor-and-other-arguments-for-abolishing-the-black-box-vb-live"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Live When AI flags the ruler, not the tumor — and other arguments for abolishing the black box (VB Live) Share on Facebook Share on X Share on LinkedIn AI helps health care experts do their jobs efficiently and effectively, but it needs to be used responsibly, ethically, and equitably. In this VB Live event, get an in-depth perspective on the strengths and limitations of data, AI methodology and more. Hear more from Brian Christian during our VB Live event on March 31. Register here for free. One of the big issues that exists within AI generally, but is particularly acute in health care settings, is the issue of transparency. AI models — for example, deep neural networks — have a reputation for being black boxes. That’s particularly concerning in a medical setting , where caregivers and patients alike need to understand why recommendations are being made. “That’s both because it’s integral to the trust in the doctor-patient relationship, but also as a sanity check, to make sure these models are, in fact, learning things they’re supposed to be learning and functioning the way we would expect,” says Brian Christian, author of The Alignment Problem , Algorithms to Live By and The Most Human Human. He points to the example of the neural network that famously had reached a level of accuracy comparable to human dermatologists at diagnosing malignant skin lesions. However, a closer examination of the model’s saliency methods revealed that the single most influential thing this model was looking for in a picture of someone’s skin was the presence of a ruler. Because medical images of cancerous lesions include a ruler for scale, the model learned to identify the presence of a ruler as a marker of malignancy, because that’s much easier than telling the difference between different kinds of lesions. “It’s precisely this kind of thing which explains remarkable accuracy in a test setting, but is completely useless in the real world, because patients don’t come with rulers helpfully pre-attached when [a tumor] is malignant,” Christian says. “That’s a perfect example, and it’s one of many for why transparency is essential in this setting in particular.” At a conceptual level, one of the biggest issues in all machine learning is that there’s almost always a gap between the thing that you can readily measure and the thing you actually care about. He points to the model developed in the 1990s by a group of researchers in Pittsburgh to estimate the severity of patients with pneumonia to triage inpatient vs outpatient treatment. 
One thing this model learned was that, on average, people with asthma who come in with pneumonia have better health outcomes as a group than non-asthmatics. However, this wasn’t because having asthma is the great health bonus it was flagged as, but because patients with asthma get higher priority care, and also asthma patients are on high alert to go to their doctor as soon as they start to have pulmonary symptoms. “If all you measure is patient mortality, the asthmatics look like they come out ahead,” he says. “But if you measure things like cost, or days in hospital, or comorbidities, you would notice that maybe they have better mortality, but there’s a lot more going on. They’re survivors, but they’re high-risk survivors, and that becomes clear when you start expanding the scope of what your model is predicting.” The Pittsburgh team was using a rule-based model, which enabled them to see this asthma connection and immediately flag it. They were able to share that the model had learned a possibly bogus correlation with the doctors participating in the project. But if it had simply been a giant neural network, they might not have known that this problematic association had been learned. One of the researchers on that project in the 1990s, Rich Caruana from Microsoft, went back 20 years later with a modern set of tools and examined the neural network he helped developed and found a number of equally terrifying associations, such as thinking that being over 100 was good for you, or having high blood pressure was a benefit. All for the same reason — that those people were given higher-priority care. “Looking back, Caruana says thank God we didn’t use this neural net on patients,” Christian says. “That was the fear he had at the time, and it turns out, 20 years later, to have been fully justified. That all speaks to the importance of having transparent models.” Algorithms that aren’t transparent, or that are biased, have resulted in a variety of horror stories, which have led to some saying these systems have no place in health care, but that’s a bridge too far, Christian says. There’s an enormous body of evidence that shows that when done properly, these models are an enormous asset, and often better than individual expert judgments, as well as providing a host of other advantages. “On the other hand,” explains Christian, “some are overly enthusiastic about the embrace of technology, who say, let’s take our hands off the wheel, let the algorithms do it, let our computer overlords tell us what to do and let the system run on autopilot. And I think that is also going too far, because of the many examples we’ve discussed. As I say, we want to thread that needle.” In other words, AI can’t be used blindly. It requires a data-driven process of building provably optimal, transparent models, from data, in an iterative process that pulls together an interdisciplinary team of computer scientists, clinicians, patient advocates, as well as social scientists that are committed to an iterative and inclusive process. That also includes audits once these systems go into production, since certain correlations may break over time, certain assumptions may no longer hold, and we may learn more — the last thing you want to do is just flip the switch and come back 10 years later. “For me, a diverse group of stakeholders with different expertise, representing different interests, coming together at the table to do this in a thoughtful, careful way, is the way forward,” he says. 
“That’s what I feel the most optimistic about in health care.”

Hear more from Brian Christian during our VB Live event, “In Pursuit of Parity: A guide to the responsible use of AI in health care,” on March 31. Register here for free. Presented by Optum.

You’ll learn:
- What it means to use advanced analytics “responsibly”
- Why responsible use is so important in health care as compared to other fields
- The steps that researchers and organizations are taking today to ensure AI is used responsibly
- What the AI-enabled health system of the future looks like and its advantages for consumers, organizations, and clinicians

Speakers:
- Brian Christian, author, The Alignment Problem, Algorithms to Live By, and The Most Human Human
- Sanji Fernando, SVP, AI Products & Platforms, Optum
- Kyle Wiggers, AI Staff Writer, VentureBeat (moderator)
"
15,214
2,022
"Report: How AI and ML optimize the diagnosis process in health care | VentureBeat"
"https://venturebeat.com/2022/01/18/report-how-ai-and-ml-optimize-diagnosis-process-in-health-care"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: How AI and ML optimize the diagnosis process in health care Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A new report by CSA reveals that rapid developments in AI, ML, and data mining have allowed technology and health care innovators to create intelligent systems in order to optimize and improve the diagnosis process, quickly capturing unforeseen patterns within complex and large datasets. According to the Agency for Healthcare Research and Quality, 10% of patient deaths are a direct consequence of misdiagnosis. By using AI and ML, health care service providers can improve the precision of each diagnosis. Medical diagnostics using AI and ML are rapidly expanding, and automation is increasingly helping to detect life-threatening conditions in their earliest stages. For example, ML helps oncologists not only pinpoint the location of a tumor, but can also accurately determine if it’s malignant or benign in milliseconds. Although computer-based predictions aren’t error-free, new research has indicated that its accuracy of classification hovers around 88%. ML can also aid oncological diagnosis and treatment by improving the precision of blood and culture analysis, mapping the diseased cells, flagging areas of interest, and creating tumor staging paradigms. In dermatology, AI is used to improve clinical decision-making and ensure the accuracy of skin disease diagnoses. ML can aid dermatological diagnosis and treatment by using algorithms that separate melanomas from benign skin lesions. Furthermore, by employing tools that track the development and changes in skin moles, algorithms are used to pinpoint biological markers for acne, nail fungus, and seborrheic dermatitis. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! AI can also assist in the diagnosis of ophthalmologic conditions. Some of the latest innovations that these health care centers have adopted are AI-driven vision screening programs, allowing identification of diabetic retinopathy as well as providing physicians with treatment insights and early-stage diagnosis of macular degeneration. More recently, the COVID-19 pandemic provided a unique opportunity to prove that technologies like AI and ML could be leveraged for the benefit of all. AI-based algorithms have been used to optimize health care resources , prioritize hospital resource allocations, and aid in vaccine development and distribution. 
In dermatology, AI is used to improve clinical decision-making and ensure the accuracy of skin disease diagnoses. ML can aid dermatological diagnosis and treatment by using algorithms that separate melanomas from benign skin lesions. Furthermore, by employing tools that track the development and changes in skin moles, algorithms are used to pinpoint biological markers for acne, nail fungus, and seborrheic dermatitis.

AI can also assist in the diagnosis of ophthalmologic conditions. Some of the latest innovations that health care centers have adopted are AI-driven vision screening programs, allowing identification of diabetic retinopathy as well as providing physicians with treatment insights and early-stage diagnosis of macular degeneration.

More recently, the COVID-19 pandemic provided a unique opportunity to prove that technologies like AI and ML could be leveraged for the benefit of all. AI-based algorithms have been used to optimize health care resources, prioritize hospital resource allocations, and aid in vaccine development and distribution. From the initial reports of the pandemic in December 2019, to the early predictions of its spread and impact, to the deployment of AI in the development of vaccines, automation has played a central role in the fight against COVID-19, as well as other critical diseases and conditions.

Read the full report by CSA.
"
15,215
2,022
"McKinsey donates machine learning pipeline tool Kedro to the Linux Foundation | VentureBeat"
"https://venturebeat.com/2022/01/19/mckinsey-donates-machine-learning-pipeline-tool-kedro-to-the-linux-foundation"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages McKinsey donates machine learning pipeline tool Kedro to the Linux Foundation Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Let the OSS Enterprise newsletter guide your open source journey! Sign up. The Linux Foundation, the nonprofit consortium that provides a vendor-neutral hub for open source projects. today announced that McKinsey’s QuantumBlack will donate Kedro, a machine learning pipeline tool, to the open source community. The Linux Foundation will maintain Kedro under Linux Foundation AI & Data (LF AI & Data), an umbrella organization founded in 2018 to bolster innovation in AI by supporting technical projects, developer communities, and companies. “We’re excited to welcome the Kedro project into LF AI & Data. It addresses the many challenges that exist in creating machine learning products today and it is a fantastic complement to our portfolio of hosted technical projects,” Ibrahim Haddad, executive director of LF AI & Data, said. “We look forward to working with the community to grow the project’s footprint and to create new collaboration opportunities with our members, hosted projects and the larger open-source community.” The importance of pipelines A machine learning pipeline is a construct that orchestrates the flow of data into — and out of — a machine learning model. Pipelines encompass raw data, data processing, predictions, and variables that fine-tune the behavior of the model with the goal of codifying the workflow so that it can be shared across an organization. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Many machine learning pipeline creation tools exist, but Kedro is relatively new to the scene. Launched in 2019 by McKinsey, it’s a framework written in Python that borrows concepts from software engineering and brings them to the data science world, laying the groundwork for taking a project from an idea to a finished product. According to Yetunde Dada, product lead on Kedro, Kedro was developed to address the main shortcomings of one-off scripts and “glue-code” by focusing on creating maintainable, efficient data science code. By building in modularity, one of the aims was to inspire the creation of reusable analytics code and enhance team collaboration. In the two-and-a-half years Kedro has been available on GitHub, the community and user base has grown to over 200,000 monthly downloads and more than 100 contributors. 
Telkomsel, Indonesia’s largest wireless network provider, uses Kedro as a standard across its data science organization. “This is the only way [Kedro] can grow at this point — if it is improved by the best people around the world,” Dada said in a statement. “Our cross-disciplinary team of 15 people gets to own increased development and validation of Kedro with this milestone. It is also significant mark of validation for Kedro as a de-facto industry tool, joining a collection of other cutting-edge open-source projects such as Kubernetes donated by Google, GraphQL by Facebook or MLFlow and Delta Lake by Databricks.” Future usage Open source software has become ubiquitous in the enterprise, where it’s now used even in mission-critical settings. While the integrity of the software is in question — particularly in light of recent events — seventy-nine percent of companies expect that their use of open source software for emerging technologies will increase over the next two years, according to a 2021 Red Hat survey. According to Schwarzmann, after it’s open-sourced, Kedro will continue to be the foundation of analytics projects within McKinsey. “The ideas and guardrails that exist in Kedro are a reflection of that experience and are designed to help developers avoid common pitfalls and follow best practices,” product manager Joel Schwarzmann said in a blog post. A spokesperson added via email: “Kedro will be focused on pursuing a stable API, or 1.0 version, formal integrations with developer tools and cloud platforms and continued work on our experiment tracking functionality. We want our users also to have surety that it is easy to upgrade versions of Kedro and benefit from new features. At this moment, Kedro supports elementary integrations with different cloud providers, and we want to work with the cloud providers to create seamless integrations. Experiment tracking, a way for data scientists to keep track of data science experiments, has paved the way for users to find and promote production models. We will be extending this functionality with many more features according to user problems.” Kedro joins another open source pipeline tool released by Microsoft in November: SynapseML. With SynapseML, as with Kedro, developers can build systems for solving challenges across domains including text analytics, translation, and speech processing. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,216
2,021
"AI Weekly: AI model training costs on the rise, highlighting need for new solutions | VentureBeat"
"https://venturebeat.com/2021/10/15/ai-weekly-ai-model-training-costs-on-the-rise-highlighting-need-for-new-solutions"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: AI model training costs on the rise, highlighting need for new solutions Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This week, Microsoft and Nvidia announced that they trained what they claim is one of the largest and most capable AI language models to date: Megatron-Turing Natural Language Generation (MT-NLG). MT-NLG contains 530 billion parameters — the parts of the model learned from historical data — and achieves leading accuracy in a broad set of tasks, including reading comprehension and natural language inferences. But building it didn’t come cheap. Training took place across 560 Nvidia DGX A100 servers, each containing 8 Nvidia A100 80GB GPUs. Experts peg the cost in the millions of dollars. Like other large AI systems, MT-NLG raises questions about the accessibility of cutting-edge research approaches in machine learning. AI training costs dropped 100-fold between 2017 and 2019, but the totals still exceed the compute budgets of most startups, governments, nonprofits, and colleges. The inequity favors corporations and world superpowers with extraordinary access to resources at the expense of smaller players, cementing incumbent advantages. For example, in early October, researchers at Alibaba detailed M6-10T, a language model containing 10 trillion parameters (roughly 57 times the size of OpenAI’s GPT-3 ) trained across 512 Nvidia V100 GPUs for 10 days. The cheapest V100 plan available through Google Cloud Platform costs $2.28 per hour, which would equate to over $300,000 ($2.28 per hour multiplied by 24 hours over 10 days) — further than most research teams can stretch. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Google subsidiary DeepMind is estimated to have spent $35 million training a system to learn the Chinese board game Go. And when the company’s researchers designed a model to play StarCraft II , they purposefully didn’t try multiple ways of architecting a key component because the training cost would have been too high. Similarly, OpenAI didn’t fix a mistake when it implemented GPT-3 because the cost of training made retraining the model infeasible. Paths forward It’s important to keep in mind that training costs can be inflated by factors other than an algorithm’s technical aspects. 
As Yoav Shoham, Stanford University professor emeritus and cofounder of AI startup AI21 Labs, recently told Synced, personal and organizational considerations often contribute to a model’s final price tag. “[A] researcher might be impatient to wait three weeks to do a thorough analysis and their organization may not be able or wish to pay for it,” he said. “So for the same task, one could spend $100,000 or $1 million.” Still, the increasing cost of training — and storing — algorithms like Huawei’s PanGu-Alpha , Naver’s HyperCLOVA , and the Beijing Academy of Artificial Intelligence’s Wu Dao 2.0 is giving rise to a cottage industry of startups aiming to “optimize” models without degrading accuracy. This week, former Intel exec Naveen Rao launched a new company, Mosaic ML, to offer tools, services, and training methods that improve AI system accuracy while lowering costs and saving time. Mosaic ML — which has raised $37 million in venture capital — competes with Codeplay Software, OctoML, Neural Magic, Deci, CoCoPie, and NeuReality in a market that’s expected to grow exponentially in the coming years. In a sliver of good news, the cost of basic machine learning operations has been falling over the past few years. A 2020 OpenAI survey found that since 2012, the amount of compute needed to train a model to the same performance on classifying images in a popular benchmark — ImageNet — has been decreasing by a factor of two every 16 months. Approaches like network pruning prior to training could lead to further gains. Research has shown that parameters pruned after training, a process that decreases the model size, could have been pruned before training without any effect on the network’s ability to learn. Called the “lottery ticket hypothesis,” the idea is that the initial values parameters in a model receive are crucial for determining whether they’re important. Parameters kept after pruning receive “lucky” initial values; the network can train successfully with only those parameters present. Network pruning is far from a solved science, however. New ways of pruning that work before or in early training will have to be developed, as most current methods apply only retroactively. And when parameters are pruned, the resulting structures aren’t always a fit for the training hardware (e.g., GPUs), meaning that pruning 90% of parameters won’t necessarily reduce the cost of training a model by 90%. Whether through pruning, novel AI accelerator hardware, or techniques like meta-learning and neural architecture search, the need for alternatives to unattainably large models is quickly becoming clear. A University of Massachusetts Amherst study showed that using 2019-era approaches, training an image recognition model with a 5% error rate would cost $100 billion and produce as much carbon emissions as New York City does in a month. As IEEE Spectrum’s editorial team wrote in a recent piece, “we must either adapt how we do deep learning or face a future of much slower progress.” For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine. Thanks for reading, Kyle Wiggers AI Staff Writer VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
"
15,217
2,021
"Report: AI investments see largest year-over-year growth in 20 years | VentureBeat"
"https://venturebeat.com/2021/12/06/report-ai-investments-see-largest-year-over-year-growth-in-20-years"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: AI investments see largest year-over-year growth in 20 years Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. According to the latest report by Tortoise Intelligence , worldwide investment into AI companies has increased by 115% since 2020, marking the largest year-on-year growth in AI investment for at least two decades. Total AI investment reached $77.5 billion in 2021, a substantial increase from the previous record set last year of $36 billion. Overall, it’s clear from the report’s findings that COVID-19 has contributed to a surging global interest in AI from both governments and investors. A sudden need for digital collaborative spaces and remote working tools during the pandemic forced business leaders to understand the vital need for digitization, said Michael Chui, partner at the McKinsey Global Institute. This demand included further investments in AI and automation. The U.S. remains a global leader in AI thanks to a world-beating commercial scene, a large talent pool, and stellar research initiatives. In second place is China, which supports the rollout of AI technologies with its robust infrastructure and ambitious government strategy, but falls behind when it comes to talent. The U.K. remains in third place due to its superb pool of home-grown researchers and a strong AI startup scene, but still has areas of weakness in its development and operating environment, including a costly visa regime. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! When it comes to funding per capita, however, Israel — which has knocked South Korea out of its 5th place spot — beats all other countries. Israel had $325,000 in funding invested in AI-focused companies for every million people. It also comes first in the world for the share of GDP going to R&D. Governments appear to have reacted to the AI zeitgeist, too. Since Tortoise Intelligence’s last report, eight additional countries have officially released government strategies on AI: Slovenia, Turkey, Ireland, Egypt, Malaysia, Brazil, Vietnam, and Chile. It’s a clear acknowledgement of the increasing popularity of this technology. The Tortoise Global AI Index ranks nations by AI preparedness. It analyzes metrics including investment into AI, the strength of a nation’s talent pool, the quality of its research, supercomputing capabilities, and the level of ambition and muscle behind its national government strategies. 
Read the full report by Tortoise Intelligence.
"
15,218
2,020
"How to build tech products for a diverse user base | VentureBeat"
"https://venturebeat.com/2020/12/26/how-to-build-tech-products-for-a-diverse-user-base"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How to build tech products for a diverse user base Share on Facebook Share on X Share on LinkedIn Pedestrians crossing intersection Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. We have seen article after article about the fact that tech companies struggle with diversity at all levels of hiring, leading to toxic cultures for minorities (i.e. pre-2017 Uber ). Even the algorithms and AI that underpin these products can have racial or gender biases, with embarrassing outcomes. One topic that has been absent from this conversation is diversity and representation in early product testing. This stage of product development is hugely influential to the direction of a product, as well as who it ultimately serves. For example, if the majority of people who use a brand-new product are high-income white men who work in tech (which early adopters tend to be), then most of the user feedback the product team receives will serve to tailor that product to their needs and may not be generalizable to the needs of a broader audience. It is common wisdom that product-market fit is achieved by building for the small group of people who love your product. If product research and roadmaps are based on feedback from early adopters and those early adopters are not diverse, how can we build tech that serves a broader segment of society? Diversifying the feedback loop We are working through this issue at the startup I work for, Neeva. Since we are quite new, we have created a waitlist for folks who want to test our product for free before we launch publicly. The vast majority of people on our waitlist are men, and a significant number of them work in tech. We set out to do some research on how to attract more diverse sets of people to test an early-stage product and found a profound lack of resources for early stage startups looking to attract well-rounded audiences (and not pay a ton of money in the process, a common worry for pre-revenue companies). There seemed to be little attention paid to this topic, resulting in a lack of data on the demographics of early product adopters and testers. So we have had to forge our own way for the most part. First, we checked for skewed demographics in our signup list by plotting the distribution by key demographic slices. When we sliced our signup data by basic attributes. As you can see above, it was clear that certain demographics were over-represented. 
One contributing factor was that many of our users heard about us from tech publications and forums, which may not reflect the makeup of the overall US population. This has subsequently influenced how we try to attract new audiences post-launch.

We then had to determine how to avoid building only for testers that fit the “early adopter” profile. Once testers were on our platform, we performed “stratified sampling” based on demographics, which is just a fancy way of saying we sampled within each category and then combined those sub-samples to create the overall sample. This ensured each demographic was appropriately represented in our sample. We used this methodology both when selecting users to poll for feedback and when selecting users to participate in research. This ensured that the majority viewpoint did not get over-sampled.

We also built these demographic slices directly into our dashboards (i.e., usage by gender a, gender b, gender c, etc.). The key here is to not apply the slice as just a “filter,” since it would be difficult to compare across filtered results in a systematic way, but build it into the dashboard as a core view.

We also used tools like SurveyMonkey and UserTesting to find diverse sets of people and understand their needs when it came to our product. This feedback helped influence our roadmap and supplemented tester feedback. One thing to remember with self-reported data, diverse or otherwise, is that it is important to remove hurried or inconsistent responses; short screening questions can be used to weed out low-quality responses. Finally, it is important to make sure that the diverse slices are large enough to be statistically significant: otherwise, you have to treat the data as being directional in nature only.

More perspectives lead to better products

All of this work helped us understand that testers across the country, despite their profession, were quite knowledgeable about the applications of our product (ad-free search). They were also very aware of the influence of advertiser dollars on the products they use — which meant there were real problems we could solve for them. Minority groups of testers, although small percentage-wise, have meaningfully influenced our product direction. (And “minority” here can refer to any minority demographic, whether it be race, profession, interest, etc.) An example: By speaking with parents across all genders (~30% of our testers), we learned that family plans, where we can create safer and more private experiences for children and teens, would be a key differentiator in their search experience. Based on minority group feedback, we are also considering allowing people to find small boutique retailers, or those who only sell sustainably sourced products, to avoid having results dominated by the obvious large retailers.

By taking the time to deeply analyze our data and balance our research, we have discovered audiences we didn’t consider part of our target market originally. We are building a product that is useful beyond the bubble of early adopters for all sorts of use cases.
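For teams that want to try the same thing, here is a minimal sketch of stratified sampling with pandas. The column names and group sizes are invented for the example; the idea is simply to sample within each slice and then combine the sub-samples, exactly as described above.

```python
import pandas as pd

# Illustrative tester list; the column names and values are made up.
testers = pd.DataFrame({
    "user_id": range(1, 11),
    "gender":  ["a", "a", "a", "a", "a", "a", "a", "b", "b", "c"],
})

# Sample within each demographic slice, then combine the sub-samples, so the
# majority group cannot crowd everyone else out of feedback rounds.
per_group = 2
sample = (
    testers.groupby("gender", group_keys=False)
           .apply(lambda g: g.sample(n=min(per_group, len(g)), random_state=0))
)
print(sample.sort_values("gender"))
```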
Sandy Banerjee is Head of Marketing at Neeva.
"
15,219
2,021
"AI Weekly: Recognition of bias in AI continues to grow | VentureBeat"
"https://venturebeat.com/2021/12/03/ai-weekly-recognition-of-bias-in-ai-continues-to-grow"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: Recognition of bias in AI continues to grow Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This week, the Partnership on AI (PAI), a nonprofit committed to responsible AI use, released a paper addressing how technology — particularly AI — can accentuate various forms of biases. While most proposals to mitigate algorithmic discrimination require the collection of data on so-called sensitive attributes — which usually include things like race, gender, sexuality, and nationality — the coauthors of the PAI report argue that these efforts can actually cause harm to marginalized people and groups. Rather than trying to overcome historical patterns of discrimination and social inequity with more data and “clever algorithms,” they say, the value assumptions and trade-offs associated with the use of demographic data must be acknowledged. “Harmful biases have been found in algorithmic decision-making systems in contexts such as health care, hiring, criminal justice, and education, prompting increasing social concern regarding the impact these systems are having on the wellbeing and livelihood of individuals and groups across society,” the coauthors of the report write. “Many current algorithmic fairness techniques [propose] access to data on a ‘sensitive attribute’ or ‘protected category’ (such as race, gender, or sexuality) in order to make performance comparisons and standardizations across groups. [But] these demographic-based algorithmic fairness techniques [remove] broader questions of governance and politics from the equation.” The PAI paper’s publication comes as organizations take a broader — and more critical — view of AI technologies, in light of wrongful arrests , racist recidivism , sexist recruitment , and erroneous grades perpetuated by AI. Yesterday, AI ethicist Timnit Gebru, who was controversially ejected from Google over a study examining the impacts of large language models , launched the Distributed Artificial Intelligence Research (DAIR), which aims to ask question about responsible use of AI and recruit researchers from parts of the world rarely represented in the tech industry. Last week, the United Nations’ Educational, Scientific, and Cultural Organization (UNESCO) approved a series of recommendations for AI ethics, including regular impact assessments and enforcement mechanisms to protect human rights. 
Meanwhile, New York University’s AI Now Institute, the Algorithmic Justice League, and Data for Black Lives are studying the impacts and applications of AI algorithms, as are Khipu, Black in AI, Data Science Africa, Masakhane , and Deep Learning Indaba. Legislators, too, are taking a harder look at AI systems — and their potential to harm. The U.K.’s Centre for Data Ethics and Innovation (CDEI) recently recommended that public sector organizations using algorithms be mandated to publish information about how the algorithms are being applied, including the level of human oversight. The European Union has proposed regulations that would ban the use of biometric identification systems in public and prohibit AI in social credit scoring across the bloc’s 27 member states. Even China, which is engaged in several widespread, AI-powered surveillance initiatives, has tightened its oversight of the algorithms that companies use to drive their business. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Pitfalls in mitigating bias PAI’s work cautions that efforts to mitigate bias in AI algorithms will inevitably encounter roadblocks, however, due to the nature of algorithmic decision-making. If optimizing for a goal that’s poorly defined, it’s likely that a system will reproduce historical inequity — possibly under the guise of objectivity. Attempting to ignore societal differences across demographic groups will work to reinforce systems of oppression because demographic data coded in datasets has an enormous impact on the representation of marginalized peoples. But deciding how to classify demographic data is an ongoing challenge, as demographic categories continue to shift and change over time. “Collecting sensitive data consensually requires clear, specific, and limited use as well as strong security and protection following collection. Current consent practices are not meeting this standard,” the PAI report coauthors wrote. “Demographic data collection efforts can reinforce oppressive norms and the delegitimization of disenfranchised groups … Attempts to be neutral or objective often have the effect of reinforcing the status quo.” At a time when relatively few major research papers consider the negative impacts of AI, leading ethicists are calling on practitioners to pinpoint biases early in the development process. For example, a program at Stanford — the Ethics and Society Review (ESR) — requires AI researchers to evaluate their grant proposals for any negative impacts. NeurIPS, one of the largest machine learning conferences in the world, mandates that coauthors who submit papers state the “potential broader impact of their work” on society. And in a whitepaper published by the U.S. National Institute of Standards and Technology (NIST), the coauthors advocate for “cultural effective challenge,” a practice that seeks to create an environment where developers can question steps in engineering to help identify problems. Requiring AI practitioners to defend their techniques can incentivize new ways of thinking and help create change in approaches by organizations and industries, the NIST coauthors posit. “An AI tool is often developed for one purpose, but then it gets used in other very different contexts. Many AI applications also have been insufficiently tested, or not tested at all in the context for which they are intended,” NIST scientist Reva Schwartz, a coauthor of the NIST paper, wrote. 
“All these factors can allow bias to go undetected … [Because] we know that bias is prevalent throughout the AI lifecycle … [not] knowing where [a] model is biased, or presuming that there is no bias, would be dangerous. Determining methods for identifying and managing it is a vital … step.”

For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,
Kyle Wiggers
AI Staff Writer
"
15,220
2,021
"How to fend off cybersecurity burnout | VentureBeat"
"https://venturebeat.com/2021/11/14/how-to-fend-off-cybersecurity-burnout"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How to fend off cybersecurity burnout Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Facing a worsening talent shortage and increasingly active and sophisticated attackers, cybersecurity practitioners are finding themselves stretched thin and overworked. We saw major security incidents occur in 2021 such as the cyberattacks on SolarWinds, Microsoft Exchange, and Kaseya, which exacerbated stress leading to burnout of security professionals. By adopting the following strategy, organizations can empower security teams to operate more effectively, helping to alleviate stress and ensuring resiliency. Elevate the CISO to report directly to the CEO One of the most important cybersecurity lessons of the past decade has been that organizations must view cybersecurity as a cost of doing business rather than a peripheral concern. CEOs are expected to assess risk and make decisions accordingly, but too often cybersecurity risk is not being factored into the equation. Cyberattacks can cost organizations millions of dollars in loss of productivity, IP, or even ransom payments. As every cybersecurity professional knows, it’s not a matter of if a company will be attacked but when. For CISOs without a direct line of communication to the CEO, communicating the seriousness of cybersecurity risks poses a major challenge. It is a difficult message to convey to an executive who might not be so amenable to the conversation in the first place. If a CISO is not able to impart a proper understanding of cybersecurity needs, it means they may not be able to secure the resources needed to run an effective security program. When security teams are strapped for resources, the load on each individual on the team increases. With the CISO reporting directly to the CEO, organizations can eliminate this barrier to communication, ensuring that CEOs are made aware of the full extent of cyber risk they face and allocate resources accordingly. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Improve relationships between security and developer teams It is commonly understood that breaking down silos between security practitioners, IT, and software developers is an essential ingredient in a successful cybersecurity program. Yet this is something organizations continue to struggle with. In fact, 52% of developers think security policies stifle innovation, according to a recent study by Forrester. 
And only 22% of developers “strongly agree” they understand which security policies they are expected to comply with. Overall, relationships are still strained, with 37% saying their organization's teams are not collaborating effectively or taking strides to strengthen relationships between security and development teams.

When developers and security teams are not on the same page, security risk multiplies. Networks can suffer from misconfigurations or inconsistent policy applications, and software can be released with vulnerabilities. These flaws become opportunities for hackers to breach a network and engage in a variety of costly attacks. One way to improve the relationship between security teams and developers is to place security advocates on development teams. These team members should have an understanding of both security and software development and should serve as the bridge in communicating security needs to developers. These individuals should also play a role in helping security teams better understand the challenges that new security policies or initiatives create for their developer teammates. In this way, security becomes collaborative and plans become realistic, rather than security handing down one-way directives to already swamped developers and demanding compliance.

Information-sharing, partnerships, and cooperation

The Biden administration recently ordered the majority of federal agencies to patch hundreds of cybersecurity vulnerabilities that are known to be exploited, where patches are available. This directive is one of the first steps taken by the Cybersecurity and Infrastructure Security Agency (CISA) and its Joint Cyber Defense Collaborative (JCDC), and we'll likely see more of this public- and private-sector collaboration in 2022. Across recent federal hearings, discussions, and consultations, the most consistent thread was that organizations need to improve cooperation and information sharing, not only with the federal government but with each other as well.

The variety of threats an organization could fall victim to is too great for any one security team to guard against. Instead, strategies must be deployed with an eye toward efficiency, which means prioritizing the threats that are most prevalent. This is where threat intelligence becomes vital, and threat intelligence is strongest when organizations communicate with each other about the types of attacks they are seeing “in the wild.” With cyberattacks growing in sophistication and frequency, hardening systems is paramount to improving the security ecosystem and defending cyberspace globally. It's critical that federal agencies have the tools they need to protect themselves and that they have visibility into the threats that put the federal government at risk. Equipped with this information, security teams will be able to focus their defenses on likely intrusion points, meaning their efforts will be more efficient and effective than if they were left to guess what form an attack might take.

The importance of a strategic vision

Cybersecurity burnout is a complex issue with many contributing factors, and I've only scratched the surface here. There are a number of other strategies and tips that focus on different aspects of the overall problem: purchasing the right tools, setting up training programs, and so on. But the power of the steps outlined above is that they require little monetary investment to achieve; instead, they suggest a shift in strategic direction.
Cybersecurity is now an element of corporate responsibility and should be viewed as a function of conducting business rather than an expense. Your brand depends on it. Cybersecurity burnout is a major challenge, and like any major challenge, strategic organizational directives will be key to helping fend it off.

Tom Kellermann is Head of Cybersecurity Strategy at VMware. "
15,221
2,021
"CyCognito nabs $100M to fight cyberattacks with bots | VentureBeat"
"https://venturebeat.com/2021/12/01/cycognito-nabs-100m-to-fight-cyberattacks-with-bots"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages CyCognito nabs $100M to fight cyberattacks with bots Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. CyCognito , a company developing bot technology to probe potential cyberattack vectors, today announced that it raised $100 million in a funding round led by The Westly Group with participation from Accel, LightSpeed Venture Partners, Sorenson Ventures, UpWest, and The Heritage Group. The new financing — which values CyCognito at $800 million — brings the company’s total capital raised to $153 million, which CEO and cofounder Rob Gurzeev says will be put toward further developing CyCognito’s platform and expanding the company’s workforce. Cyberattacks are on the rise as hackers target companies accelerating digital transformations during the pandemic. In the first half of 2020, data breaches exposed 36 billion records. And the amount that companies paid to hackers grew by 300%, in light of high-profile ransomware attacks against the Colonial Pipeline, the Steamship Authority of Massachusetts, JBS (the world’s largest meatpacker), and the Washington, D.C. Metropolitan Police Department. That’s perhaps why 68% of business leaders feel that their cybersecurity risks are increasing, according to a recent Accenture survey — and why the worldwide information security market is forecast to reach $170.4 billion in 2022. Gurzeev and Dima Potekhin founded CyCognito in 2017 to address a few of the cybersecurity challenges facing enterprises today. Potekhin, a three-time entrepreneur, was a founding engineer at AcceloWeb, a server-side website acceleration platform that was acquired by Limelight Networks in 2011. Gurzeev served as head of offensive security at C4, an intelligence-gathering platform, which was later snatched up by Israel-based defense company Elbit Systems. “Our number one priority is to help security teams worldwide maintain a strong security posture,” Gurzeev said in a statement. “That’s why we’ve developed the industry’s most dynamic, intelligent platform for comprehensive external attack surface management. Our proactive approach to discovering and prioritizing risk is unmatched in the industry and provides security teams with the insight and confidence to quickly preempt potential attacks.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Bot-powered platform CyCognito uses a network of bots to scan, map, and fingerprint digital assets. 
Using analysis based on statistical methods, clustering, natural language processing, and the actions of its platform's users, CyCognito learns how to classify assets by their business context and organizational association, producing a graph that captures relationships between organizations, subsidiaries, vendors, partners, cloud platforms, exposed on-premises assets, and third-party systems. CyCognito profiles billions of web apps, keyword and code fragments, logos and icons, and deployed software to identify potential attack vectors. Using risk evaluation methods like authentication and misconfiguration testing, network design analysis, and data exposure spotlighting, the company's attack simulator orchestrates assessments without affecting business operations.

(Image: CyCognito's monitoring dashboard.)

From a dashboard, IT teams can use CyCognito to view attacker-exposed assets. Those same teams can also see which department assets belong to and monitor for new assets, while taking remediation steps recommended by CyCognito.

“When mapping an organization's attack surface, the first step we take is to understand its business structure: subsidiaries, organizational hierarchies, business relationships, and more,” Gurzeev explained. “We use natural language understanding (NLU) here in two ways: (1) semi-structured text comprehension to extract information from Wikipedia, financial reports, investor relations materials, Google Search results, and so on; and (2) domain-specific NLU to understand a variety of information, such as the fact that “Acme Inc.,” “Acme GmbH,” and “Acme S.p.A.” are different companies but almost certainly related, and that “Acme Inc.” and “Acme Incorporated” are the same thing … As we collect and analyze data, we have human analysts verify certain results, both to validate findings and to provide feedback when necessary on how to adjust our machine learning models.”

Expanding market

Palo Alto, California-based CyCognito's customers include Colgate-Palmolive, Tesco, and government groups like the State of California. Over the past 12 months, the company claims to have grown revenue 400% year-over-year and tripled the size of its client base.

“We have dozens of Fortune 500 companies, and a number of Fortune 100 as well,” Gurzeev said. “Demand for our technology has always been high; 2019 was our first full year of selling products, so our business has primarily operated during the pandemic and we practically don't know anything else. Last year, the market saw one of the biggest cloud migrations ever, and companies learned the hard way that disruption can happen at any time. Accelerated adoption of cloud computing and remote and hybrid workforces accelerates cyber risk as well — making attack surface management even more essential.”

Like its competitors, 130-employee CyCognito is benefiting from an influx of venture capital prompted by growing cybersecurity threats. According to PitchBook, from January to August, cybersecurity startups raised $17.7 billion in venture capital — more than a 70% increase from last year's record of $10.2 billion. Despite the fact that some experts view cybersecurity as a losing game, 91% of companies said that they increased their IT security budgets in 2021.
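The name-normalization problem Gurzeev describes, recognizing that “Acme Inc.” and “Acme Incorporated” refer to the same company while “Acme GmbH” is a related but separate legal entity, can be sketched in a few lines. The Python below is a hypothetical, greatly simplified illustration of that idea and is not CyCognito's implementation: it strips common legal suffixes and compares the remaining base names.

import re

# A minimal, hypothetical sketch of the entity-resolution idea described
# above (not CyCognito's actual model): strip common legal suffixes so that
# "Acme Inc." and "Acme Incorporated" resolve to the same company, while
# "Acme GmbH" is treated as a related but distinct legal entity.

SUFFIX_CANON = {
    "inc": "inc", "incorporated": "inc", "corp": "corp", "corporation": "corp",
    "llc": "llc", "gmbh": "gmbh", "spa": "spa", "s.p.a": "spa",
    "ltd": "ltd", "limited": "ltd",
}

def split_name(raw: str):
    """Return (base_name, canonical_suffix_or_None) for a raw company string."""
    cleaned = re.sub(r"[.,]+$", "", raw.lower()).replace(",", " ")
    tokens = cleaned.split()
    suffix = None
    if tokens and tokens[-1].rstrip(".") in SUFFIX_CANON:
        suffix = SUFFIX_CANON[tokens.pop().rstrip(".")]
    return " ".join(tokens), suffix

def compare(a: str, b: str) -> str:
    base_a, suf_a = split_name(a)
    base_b, suf_b = split_name(b)
    if base_a != base_b:
        return "unrelated"
    if suf_a == suf_b or None in (suf_a, suf_b):
        return "same company"
    return "related entities (same brand, different registration)"

print(compare("Acme Inc.", "Acme Incorporated"))  # -> same company
print(compare("Acme Inc.", "Acme GmbH"))          # -> related entities ...
print(compare("Acme Inc.", "Apex Inc."))          # -> unrelated

A production system would layer fuzzy matching, transliteration, and human review on top of this kind of normalization, as the quote above suggests.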
"
15,222
2,021
"Deprecating the cookie: The impact of eliminating third-party identifiers | VentureBeat"
"https://venturebeat.com/2021/11/19/the-impact-of-eliminating-third-party-identifiers"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Deprecating the cookie: The impact of eliminating third-party identifiers Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Josh Perlstein, Chief Executive Officer of Response Media The digital marketing world is becoming even more complex: 62% of customers prefer personalized products and services , but almost the same number (61%) feel like they have lost control over how companies use their information. Today, the value in data to drive efficient marketing and relevant relationships with prospects and customers is more apparent than ever. At the same time, both regulatory bodies and consumers see the need for additional control over data because it’s so prolific. With the growth of data breaches , data misuse , and identity theft (there were 1,387,615 cases of identity theft in 2020; an increase of 53% over 2019), there is a greater sensitivity and fear about what happens if the wrong people get ahold of consumer data or use that information maliciously. The powers-that-be seem to agree with consumer concern over the usage of data. New privacy regulations are introduced and added almost daily. Changes to Apple’s Identifier for Advertisers (IDFA) and App Tracking Transparency (ATT) , the rise of ad-free browsers , and Google’s (almost here, but often delayed) elimination of third-party cookies bring sizable shifts to the market. Regardless of where they sit in the digital media ecosystem, companies realize people want more privacy and control of their data. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Regardless, the future of removing third-party identifiers and the resulting marketplace changes will make it much more challenging to target users effectively. This loss will also make measurement difficult. Higher acquisition costs and lower revenues will likely result. What comes next in the industry is uncertain. Understanding the potential outcomes and implications will help marketers stay agile and provide the best chance of success. Here are the likely scenarios for publishers, technology providers, and brands with the elimination of third-party identifiers. The impact of eliminating third-party identifiers on publishers The deprecation of the third-party cookie in the not-so-distant future is a bad thing for publishers, right? Not so fast. 
Recently, the online publishing trade group Digital Content Next (which represents The New York Times, The Washington Post, The Wall Street Journal, and others) found “significant evidence that the current system with third-party cookies…has not served trusted publishers' interests.” Yet it also claims, “while Google's proposal to deprecate third-party cookies…could have a dire impact on the digital advertising landscape, preserving third-party cookies could have an equally dire impact on the ecosystem.”

Prominent publishers have an advantage in the brave new cookieless world. Larger publishers can leverage first-party data to target their audiences at scale. Smaller publishers are at a disadvantage; the resources (development and sales) needed to build successful first-party data strategies are expensive. Further, direct advertising deals are harder to come by for publishers with smaller audiences. Non-premium publishers will feel the effects the most because they sell the most inventory within exchanges. Traditional publishers must recalibrate their measurement solutions to prepare for cross-channel engagement and consumer privacy. Publishers must learn more about their audiences and deliver better, more personalized experiences to prosper in the new world. First-party data is key to all of this.

At the same time, the rise of retail media networks will cut into traditional publisher income. Amazon, Walmart, and even Dollar General have started to monetize their high-quality first-party data. Closed-loop measurement solutions allow advertisers to leverage data to influence in-store and online conversions. Beyond the walled gardens, when retail data combines with programmatic media, it creates a compelling opportunity to drive awareness and lower-funnel actions.

Publishers stand to gain in the cookieless world if they do the following:

Hone and package their first-party data offerings through careful strategy, collection, and productization. The focus for publishers must be on unique data that can be captured at scale and can add value to advertisers.

Step up efforts to allow visitors to register and authenticate within their ecosystem. Think about how the brand can identify unique people within its digital properties. For instance, publishers have unique “walled gardens,” which can warrant a premium for advertisers looking to reach unique attributes and audiences.

Ad tech players

Since the invention of third-party cookies in the mid-1990s, ad tech vendors have discovered many ways to take advantage of them. Third-party identifiers track the websites people visit and then link that information to massive datasets, including income, demographics, and addresses. Identifiers offer an easier way to reach the right audience and have brought scientific rigor to digital advertising. Today, advertiser reliance on tracking data has created an elaborate industry of hundreds of ad tech firms, each using third-party cookies to facilitate (and perhaps complicate) the process of buying and selling digital ads.

How big might the issue be? In 2019, advertisers in the United States spent nearly $60 billion on programmatic digital display advertising. By 2022, expenditures are expected to increase to nearly $95 billion. One of the reasons the industry has grown so much is that it is measurable: brands and agencies can target specific cohorts using third-party cookies as part of that ecosystem.
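To make the mechanism concrete: the reason a third-party identifier is so powerful is that the same cookie ID shows up on every site where the ad tech firm has a tag, so activity from unrelated publishers can be stitched into one profile and joined to purchased demographic data. The Python below is a deliberately toy illustration of that linkage (the IDs, sites, and attributes are invented); real systems operate at vastly larger scale.

from collections import defaultdict

# Hypothetical illustration of cross-site tracking with a third-party ID.
# Each event is (third_party_cookie_id, publisher_site, page_topic) as an
# ad server embedded on many sites might record it.
events = [
    ("abc123", "news-site.example", "mortgages"),
    ("abc123", "recipe-site.example", "meal kits"),
    ("abc123", "travel-site.example", "flights to Lisbon"),
    ("xyz789", "news-site.example", "pickup trucks"),
]

# Offline or demographic data keyed to the same identifier, as a data broker might supply.
demographics = {"abc123": {"income_band": "75-100k", "zip": "94107"}}

profiles = defaultdict(lambda: {"sites": set(), "interests": [], "attributes": {}})
for cookie_id, site, topic in events:
    profiles[cookie_id]["sites"].add(site)
    profiles[cookie_id]["interests"].append(topic)
for cookie_id, attrs in demographics.items():
    profiles[cookie_id]["attributes"].update(attrs)

# One cookie ID now ties together activity across three unrelated publishers
# plus purchased demographic attributes, which is exactly the linkage that
# disappears when third-party identifiers are deprecated.
print(profiles["abc123"])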
Now, many ad tech business models that rely on collecting and analyzing vast troves of user data are in jeopardy. The most apparent beneficiaries of the death of third-party identifiers are the web's walled gardens, especially Google and Facebook, which have vast pools of data about users and their browsing habits that no one else can access. The future for ad tech will depend on which system of identity wins. Should a browser-based model (like Google's FLoC) prevail, ad tech vendors, especially those that run independent ad exchanges or rely on granular, individual-level user tracking, may become obsolete. If a new identity-based tracking solution emerges, it will likely benefit the ad tech industry more broadly. Regardless, there will be consolidation and clear winners and losers.

The ad tech world stands to lose the most unless:

They evolve to identity-based methodologies that can identify unique people using authentication or another technology not reliant on cookies or device IDs.

They plan for how to deliver unique value to the digital advertising ecosystem by aligning advertiser data with publisher data for cookieless targeting and tracking in the new world.

Brands

The brand marketing ecosystem is only just now coming to grips with a cookieless future, and it is looking as if the industry is not ready. There will be a lot of waste and a lot of inefficiency from a measurement perspective until brands and companies figure out how this new ecosystem works. Yet on the other side of the coin, when identifiers go away, the value of direct engagement with consumers will increase greatly. Seventy-eight percent of brands say they struggle with “data debt,” or not having enough quick data about their customers to launch relevant personalization tactics. Marketers must act now to compete in a first-party data world, and brands heavily investing in first-party data will be more prepared than others.

Building a database with quality data and a comprehensive identity strategy is critical for brands. First-party data is a significant competitive advantage (and differentiator), regardless of the future of identity. First-party data is the fuel for personalization, customer retention, cross-sell, and up-sell opportunities. Beyond these essential applications, brands should incorporate data from all possible data sources (research, purchase, surveys, CRM, and partners). Enhancing and enriching proprietary data creates a solid foundation for future first-party data strategies that are consumer-friendly and built to last in a cookieless world.

Brands have an urgent imperative to act now. Begin planning how to capture, store, and utilize first-party data, and think about maximizing growth in both declared and behavioral data. Here are a few questions for brands to think about:

Why would the target audience want to give their information to the brand?

What value can the brand provide in return?

What data is most valuable to the brand, and how can the brand leverage it?

How would the brand utilize this data to maximize its value?

Brands that invest in first-party data growth today, to be ready for the near-future digital world, will build a very valuable asset and get a significant jump on their competition as more changes take place.

Third-party identifiers: Where we are today

Consumers are genuinely concerned. They have seen what happens when brands aren't responsible with their data, yet they appreciate when data is used to create more personalized products and services.
How does the industry find the magic point between protecting consumers and not being unfair to businesses that leverage data for good? It is simple: Companies should follow a consent-based approach to acquiring and using first-party data. If companies collect the proper level of consent, they are being transparent with data stakeholders and taking the appropriate steps to manage data responsibly. Whether a publisher, an ad tech player, or a brand, everyone in the marketing ecosystem needs to be thinking ahead about what a cookieless future looks like and the best strategies to engage and compete effectively. Those that take these necessary steps will typically be less affected by the elimination of third-party identifiers and better set up for the changes ahead.

By Josh Perlstein, Chief Executive Officer of Response Media "
15,223
2,013
"Starling helps non-English speaking patients get heard in hospitals | VentureBeat"
"https://venturebeat.com/2013/08/26/starling-helps-non-english-speaking-patients-get-heard-in-hospitals"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Starling helps non-English speaking patients get heard in hospitals Share on Facebook Share on X Share on LinkedIn Bill Tan arrived in America when he was 15 years old. He knew just 500 words of English, but this was 500 more than his anyone else in his family. The experience inspired him to found Transcendent Endeavors , a company that uses technology to break down communication barriers. Twenty-six million Americans have limited English proficiency (LEP) , and this number is increasing fast. Transcendent Endeavors has seven projects designed to help people without strong English skills navigate in an English-speaking world. Transcendent designed its Starling product to improve communication in hospitals. It’s currently in place at the NYU Medical Center, and TE recently won a grant to deploy it in more health centers around the country. Starling is a reinvention of the hospital bedside call button. Tan said that the call button was first invented by Florence Nightengale 150 years ago, and the set-up hasn’t really changed. “It was a copper bell tied to a string. Now it’s a plastic button tied to a wire,” Tan said in an interview. “But call bells don’t articulate what the patients need. They are more of an obstacle to communication than a facilitator, and we realized that it is not the response that matters, so much as the request.” Nurses are an overburdened workforce, and hospital interpreters are a limited and precious resource that are frequently needed for more complicated tasks, like conveying a diagnosis or treatment program. Transcendent Endeavors collected data that found that on average, members of a care team answer 78 call light requests in a 12-hour shift, and that 53 percent of health care enter staff believe answering call bells hinders them from engaging in more critical care activities. Responding to these requests takes even longer when there is a language barrier involved. Starling’s system is a bedside unit with icon buttons for a drink of water, request to get out of bed, needing to use the bathroom, wanting more pillows, and such. Patients specify what their need is and nurses know right away what they want. Nurses can send a message back in the patents preferred language through Starling and prioritize/delegate tasks accordingly. “Nurse managers now have a much better way to understand what is going on in the unit,” Tan said. 
“They can see what is happening in real time, how many patients are making requests, how efficiently the staff responds, and all these data points can help them make decisions and optimize the workflow.”

Helping health staffs run more efficiently is valuable in and of itself. However, this also has a significant impact for patients who are unable to verbalize what they want. Tan said that more than 90 million people in the U.S. have low functional health literacy. This means that when they are in a hospital or medical center, it is difficult to give and get information from doctors and nurses without an interpreter. Bridging these language gaps is frustrating and breeds anxiety for both the patient and the caretakers, and in certain situations it compromises the quality of care. Many immigrant families rely on their children who have developed strong English skills to guide them through the process, but this can mean taking kids out of school. This was the case with Tan's family.

“When I arrived here, I was usually the person taking my parents to hospitals and other services,” Tan said. “A lot of immigrant parents really value education, and the last thing they want to do is yank their kids out of school to take them to the hospital. This is so backwards, and I thought that if there are tools out there to help other industries deal with multilingual populations, we needed to do that with health and education as well.”

Transcendent Endeavors was founded in 2001 and has received multiple NIH awards. Canopy, another one of its offerings, is an e-learning platform that promotes foreign-language proficiency among healthcare professionals so they can engage more clearly with their patients and gain greater cultural sensitivity. Its SAGE product facilitates communication specifically surrounding complementary and alternative medicine (CAM) for interactions outside the hospital. MiniMiti Adventures is an animated series for children and their families that promotes healthy eating and physical activity. The Vocatta Messenger is a channel for healthcare professionals to deliver customized messages to recipients. The company is also working on a “communication genome project” to “break communication down into its most elemental components” and “render the communication process into a machine-readable format.” Many of its products are based on this platform.

Tan said that tools and technology along these lines are “critical” now more than ever. Health care reform in the U.S. means millions of people who were previously uninsured or underinsured, many of them immigrants, will soon be brought into the mainstream healthcare system. He said that the obstacles to reaching these populations are often economic and social, and products like these can have a huge impact on productivity as well as care quality. It's Florence Nightingale for a digital, polyglot world. The company is based in New York.
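The request flow Tan describes, where an icon press becomes a structured, prioritized task and the acknowledgement goes back in the patient's preferred language, is simple to sketch. The Python below is a hypothetical illustration only; the request types, priorities, and phrases are invented, and this is not Transcendent Endeavors' software.

from dataclasses import dataclass, field
from queue import PriorityQueue
import itertools

# Hypothetical bedside-request queue: icon presses become prioritized tasks,
# and acknowledgements are returned in the patient's preferred language.
PRIORITY = {"bathroom": 1, "out_of_bed": 2, "water": 3, "pillow": 3}
PHRASES = {
    ("water", "es"): "Le traeremos agua en breve.",
    ("bathroom", "es"): "Alguien viene a ayudarle en un momento.",
}

@dataclass(order=True)
class Request:
    priority: int
    seq: int
    room: str = field(compare=False)
    need: str = field(compare=False)
    language: str = field(compare=False)

requests_q: "PriorityQueue[Request]" = PriorityQueue()
counter = itertools.count()

def press_button(room: str, need: str, language: str) -> None:
    """Record a patient's icon-button press as a structured, prioritized request."""
    requests_q.put(Request(PRIORITY.get(need, 3), next(counter), room, need, language))

def acknowledge(req: Request) -> str:
    """Return an acknowledgement in the patient's preferred language, if available."""
    return PHRASES.get((req.need, req.language),
                       f"Your request ({req.need}) has been received.")

press_button("412B", "water", "es")
press_button("407A", "bathroom", "es")

while not requests_q.empty():
    req = requests_q.get()
    print(req.room, req.need, "->", acknowledge(req))

Because requests are structured rather than a single undifferentiated call light, the same records can feed the real-time unit dashboards Tan mentions.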
"
15,224
2,021
"The low-code ‘tipping point’ is here | VentureBeat"
"https://venturebeat.com/2021/09/22/the-low-code-tipping-point-is-here"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The low-code ‘tipping point’ is here Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Half of business technologists now produce capabilities for users beyond their own department or enterprise. That’s the top finding in a new report from Gartner, which cites “a dramatic growth” in digitalization opportunities and lower barriers to entry, including low-code tools and AI-assisted development, as the core factors enabling this democratization beyond IT professionals. What’s more, Gartner reports that 77% of business technologists — defined as employees who report outside of IT departments and create technology or analytics capabilities — routinely use a combination of automation, integration, application development, or data science and AI tools in their daily work. “This trend has been unfolding for many years, but we’re now seeing a tipping point in which technology management has become a business competency,” Raf Gelders, research vice president at Gartner, told VentureBeat. “Whether all employees will soon be technical employees remains to be seen. Do your best sales reps need to build new digital capabilities? Probably not. Do you want business technologists in sales operations? Probably yes.” The rise of low-code Low-code development tools — such as code-generators and drag-and-drop editors — allow non-technical users to perform capabilities previously only possible with coding knowledge. Ninety-two percent of IT leaders say they’re comfortable with business users leveraging low-code tools, with many viewing the democratization as helpful at a time when they’re busier than ever. With the rise of digital transformation, which has only been accelerated by the pandemic, 88% of IT leaders say workloads have increased in the past 12 months. Many report an increase in demand for new applications and say they’re concerned about the workloads and how this might stifle their ability to innovate. It makes sense that no-code — as well as AI-assisted development — is catching on. AI-assisted software development has been found to save time and reduce errors. In a June report , Gartner found that by 2024, 80% of tech products and services will be built by people who are not technology professionals. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Now these latest findings offer more insight into how business technologists are using the tools. 
According to the report, they're primarily responsible for building analytics capabilities (36%) but are also involved in building digital commerce platforms, AI and robotic process automation, and other technologies.

The benefits and risks

Aside from taking some weight off IT teams, low-code tools could offer several additional benefits. Gelders says moving the creation of new digital capabilities closer to customer, product, and business operations improves speed to value. Specifically, he said the researchers found enterprises that democratize digital delivery successfully are 2.6 times more likely to accelerate digital business outcomes. But there are significant challenges as well. If it's not managed effectively, Gelders says, this democratization can generate serious risks that all CIOs are already familiar with. These include misaligned initiatives; inconsistencies in the customer experience; cost inefficiencies; and potential compliance, privacy, and security issues. It's similar to the concerns around “shadow IT,” in which non-IT departments lead technology purchases. This can disrupt systems and workflows, and 26% of CISOs and C-suite executives recently cited shadow IT as the biggest hurdle posed by hybrid work.

The solution is to work with IT rather than perform low-code development in silos. According to the new Gartner survey, four out of five respondents reported finding value in collaborating with IT teams, rather than trying to circumvent them. They cite increased innovation, security, and speed when doing so.

“A growing share of business leaders and employees are building or deploying technology and analytics capabilities in order to digitize their business capabilities or create market-facing offerings. This kind of work cannot be done solely by corporate IT. But it also cannot be done without corporate IT,” Gelders said. "
15,225
2,021
"Low code/no code increases efficiency in retail and beyond | VentureBeat"
"https://venturebeat.com/2021/10/13/low-code-no-code-increases-efficiency-in-retail-and-beyond"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Low code/no code increases efficiency in retail and beyond Share on Facebook Share on X Share on LinkedIn Software developer writing code. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Low- and no-code platforms allow developers and non-developers alike to create software through visual dashboards instead of traditional programming. Adoption is on the rise, with a recent OutSystems report showing that 41% of organizations were using a low- or no-code tool in 2019/2020, up from 34% in 2018/2019. If the current trend holds, the market for low- and no-code could climb from between $13.3 billion and $17.7 billion in 2021 to between $58.8 billion and $125.4 billion in 2027. But the reasons that enterprises deploy solutions tend to vary from industry to industry. During a panel at VentureBeat’s Low-Code/No-Code Summit , executives from HubSpot, Starbucks, and growth equity firm WestCap shared their organizations’ motivations for embracing low- and no-code. They ranged from simplifying app creation workflows and analyzing real-time data to automating monotonous, time-consuming workloads. “[Low- and no-code has made my life and my team members’ lives easier,] giving my company a competitive advantage by simply being able to move faster using tools with standard models,” WestCap chief data scientist Erika Janowicz, a participant in the panel, said. “[W]e can all get out and try to build solutions ourselves [for] forecasting and predictive insights, [but] the reality is that a lot of these models have already been built, [are] readily available, [and are] able to connect to multiple systems that are generally not that expensive.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Janowicz’s experiences are reflected in the broader market, where companies that have adopted low- and no-code tools are experiencing a proliferation of non-programmer app development. Forty-one percent of businesses have active “citizen development” initiatives, Gartner estimates , while 20% of those who don’t are either evaluating or planning to start citizen development initiatives. HubSpot global VP of customer success Jonathan Corbin says that low- and no-code tools allowed his team to drill down into the history of customer interactions to understand where the pain points lie — and where new ones might arise. “[These platforms allow us to build] solutions and … service [customers] to [deliver] great customer experience. 
That's something we're working toward and excited to be able to do using this technology,” he said during the panel.

Retail applications

Starbucks is a massive operation, with close to 40,000 locations as of the first quarter of 2021. While retail might not be an obvious application of low- and no-code, brick-and-mortar businesses are increasingly embracing the technology to develop automations and apps that align with demands in their markets and workforces. Starbucks chief digital and analytics officer Jonathan Francis says that his company realized efficiency gains from low- and no-code tools as the pandemic put a strain on the IT department. The tools allowed Starbucks to work through a backlog of development tasks that normally would have taken far longer to finish, he said. “We need opportunities to [scale] quickly … You'll never find enough data scientists,” Francis said. “We're all competing for the same resources — we have limited budgets. So you [have to start] think[ing] about local solutions.”

In a blog post, Iterate.ai cofounder and CTO Brian Sathianathan gives another example of how low- and no-code can bolster retail tech initiatives. A $60 billion retailer Iterate.ai works with was advised by a consulting firm that its mobile app project would take an entire year to prototype using traditional application development methods, according to Sathianathan. “The project involved a full ecommerce platform and very time-sensitive curbside pickup solution as COVID-19 restrictions took hold. However, with a low-code strategy, the retailer was able to build the complete solution in just 11 days,” he said.

Identifying goalposts

With upwards of 82% of firms saying that custom app development outside of IT is important, Gartner predicts that 65% of all apps will be created using low-code platforms by 2024. Another study reports that 85% of 500 engineering leads think that low-code will be commonplace within their organizations as soon as 2021. But Corbin believes that low- and no-code is still in its early days. That's why it's important for stakeholders at companies to identify a goal and invest in capabilities that help achieve it, he asserts — rather than adopt the technology for its own sake. “I think the important thing to remember [is that the technology is going to] get better over time. As you're thinking about implementing a solution using [the tools], you need to commit to [a] goal and invest in the technology that helps you to get closer to that endpoint, and then … identify defects,” Corbin said. "
15,226
2,021
"Deepgram raises $25 million to build custom enterprise speech recognition models | VentureBeat"
"https://venturebeat.com/2021/02/03/deepgram-raises-25-million-to-build-custom-enterprise-speech-recognition-models"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Deepgram raises $25 million to build custom enterprise speech recognition models Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Deepgram , a Y Combinator graduate building custom speech recognition models, today announced that it raised $25 million in series B funding led by Tiger Global. CEO and cofounder Scott Stephenson says the proceeds will bolster the development of Deepgram’s platform, which enables enterprises to process meetings, calls, and presentations in real time. The voice and speech recognition tech market is anticipated to be worth $31.82 billion by 2025, driven by new applications in the banking, health care, and automotive industries. In fact, it’s estimated that one in five people in the U.S. interact with a smart speaker on a daily basis and that the share of Google searches conducted by voice in the country recently surpassed 30%. San Francisco-based Deepgram was founded in 2015 by University of Michigan physics graduate Noah Shutty and Stephenson, a Ph.D. student who formerly worked on the University of California Davis’ Large Underground Xenon Detector, a large and sensitive dark matter detector, and who helped to develop the university’s Davis Xenon dual-phase liquid xenon detector program. The company’s platform leverages a backend that eschews hand-engineered pipelines for heuristics, stats-based, and fully end-to-end AI processing, with hybrid models trained on PCs equipped with high-end GPUs. “Over the past year, we have seen the speech recognition market evolve like never before,” Stephenson wrote in a blog post. “When we first announced our series A in March 2020, enterprises were starting to recognize the impact a tailored approach to speech could have on their business. Yet, there was no ‘race to space’ moment driving companies to adopt a new solution, especially when their existing provider was working ‘fine.’ That quickly changed when COVID-19 hit. Companies were at an inflection point and forced to fast track digital transformation initiatives, compressing years of well-thought-out plans into mere months, and quickly transitioning teams to a remote workforce.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Each of Deepgram’s models is trained from the ground up and can ingest files in formats ranging from calls and podcasts to recorded meetings and videos. 
The platform processes the speech, which is stored in what's called a “deep representation index” that groups sounds by phonetics as opposed to words. Customers can search for words by the way they sound; even if they're misspelled, Deepgram can often find them. Stephenson says that Deepgram's models pick up things like microphone noise profiles as well as background noise, audio encodings, transmission protocols, accents, valence (i.e., energy), sentiment, topics of conversation, rates of speech, product names, and languages. Moreover, he claims they can increase speech recognition accuracy by 30% compared with industry baselines while speeding up transcription by 200 times, all while handling thousands of simultaneous audio streams.

Deepgram's real-time streaming capability lets customers analyze and transcribe speech as words are being spoken. Meanwhile, its on-premises deployment option provides a private, deployable instance of Deepgram's product for use cases involving confidential, regulated, or otherwise sensitive audio data. Deepgram currently has more than 60 customers, including Genesys, Memrise, Poly, Sharpen, and Observe.ai. The company grew its headcount from 9 to 95 across offices in the U.S. and the Philippines and processed more than 100 billion spoken words. Deepgram also launched a new training capability, Deepgram AutoML, to further streamline model development. Stephenson says that in 2020, Deepgram's annual recurring revenue grew threefold. He forecasts another threefold gain from 2020 to 2021.

“We have spent the last year investing in key capabilities across data acquisition, labeling, model training, our API and we're ready to scale. Big data and cloud computing has allowed us to collect massive amounts of customer and employee data from emails, forms, websites, apps, chat, and SMS,” Stephenson said. “This structured data is what companies can currently see and use, and it's just the tip of the iceberg. … We train our speech models to learn and adapt under complex, real-world scenarios, taking into account customers' unique vocabularies, accents, product names, and background noise. This new funding will support our efforts to deliver higher accuracy, improved reliability, real-time speeds, and massive scale at an affordable price for our customers.”

Citi Ventures also participated in Deepgram's funding round announced today, along with Wing VC, SAP.io, and Nvidia Inception GPU Ventures. It brings the startup's total raised to date to over $38.9 million. It's worth noting that Deepgram is far from the only player in the burgeoning speech recognition market. Tech giants like Nuance, Cisco, Google, Microsoft, and Amazon offer real-time voice transcription and captioning services, as do startups like Otter. There's also Verbit, which recently raised $31 million for its human-in-the-loop AI transcription tech; Oto Systems, which in December 2019 snagged $5.3 million to improve speech recognition with intonation data; and Voicera, which has raked in over $20 million for AI that draws insights from meeting notes.
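Returning to the “deep representation index” described above: Deepgram has not published its internals, but the general idea of retrieving words by how they sound rather than how they are spelled can be illustrated with a classic phonetic key such as Soundex. The sketch below is a textbook stand-in for the concept, not Deepgram's method.

# A classic Soundex phonetic key, shown purely to illustrate indexing words
# by sound rather than spelling. This is NOT Deepgram's "deep representation
# index"; it is a generic stand-in for the concept.

CODES = {}
for letters, digit in [("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                       ("l", "4"), ("mn", "5"), ("r", "6")]:
    for ch in letters:
        CODES[ch] = digit

def soundex(word: str) -> str:
    word = "".join(ch for ch in word.lower() if ch.isalpha())
    if not word:
        return ""
    first = word[0].upper()
    digits = []
    prev = CODES.get(word[0], "")
    for ch in word[1:]:
        code = CODES.get(ch, "")
        if code and code != prev:
            digits.append(code)
        if ch not in "hw":          # h and w do not reset the previous code
            prev = code
    return (first + "".join(digits) + "000")[:4]

# Different spellings of the same-sounding word collapse to one index key,
# so a query for the misspelled form still retrieves the stored one.
index = {}
for word in ["Smith", "Smyth", "Jon", "John", "Deepgram"]:
    index.setdefault(soundex(word), []).append(word)

print(soundex("Smith"), soundex("Smyth"))  # S530 S530
print(index)

A real phonetics-based index would operate on learned acoustic representations rather than spelling, but the retrieval idea, grouping entries by a sound-derived key, is the same.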
"
15,227
2,021
"MLCommons releases open source datasets for speech recognition | VentureBeat"
"https://venturebeat.com/2021/12/14/ml-commons-releases-open-source-datasets-for-speech-recognition"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages MLCommons releases open source datasets for speech recognition Share on Facebook Share on X Share on LinkedIn Woman using voice assistant on smartphone in the rain Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Let the OSS Enterprise newsletter guide your open source journey! Sign up here. MLCommons, the nonprofit consortium dedicated to creating open AI development tools and resources, today announced the release of the People’s Speech Dataset and the Multilingual Spoken Words Corpus. The consortium claims that the People’s Speech Dataset is among the world’s most comprehensive English speech datasets licensed for academic and commercial usage, with tens of thousands of hours of recordings, and that the Multilingual Spoken Words Corpus (MSWC) is one of the largest audio speech datasets with keywords in 50 languages. No-cost datasets such as TED-LIUM and LibriSpeech have long been available for developers to train, test, and benchmark speech recognition systems. But some, like Fisher and Switchboard , require licensing or relatively high one-time payments. This puts even well-resourced organizations at a disadvantage compared with tech giants such as Google, Apple, and Amazon, which can gather large amounts of training data through devices like smartphones and smart speakers. For example, four years ago, when researchers at Mozilla began developing the English-language speech recognition system DeepSpeech, the team had to reach out to TV and radio stations and language departments at universities to supplement the public speech data that they were able to find. With the release of the People’s Speech Dataset and the MSWC, the hope is that more developers will be able to build their own speech recognition systems with fewer budgetary and logistical constraints than previously, according to Keith Achorn. Achorn, a machine learning engineer at Intel, is one of the researchers who’s overseen the curation of the People’s Speech Dataset and the MSWC over the past several years. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Modern machine learning models rely on vast quantities of data to train. Both ‘The People’s Speech’ and ‘MSWC’ are among the largest datasets in their respective classes. MSWC is of particular interest for its inclusion of 50 languages,” Achorn told VentureBeat via email. 
“In our research, most of these 50 languages had no keyword-spotting speech datasets publicly available until now, and even those which did had very limited vocabularies.”

Open-sourcing speech tooling

Starting in 2018, a working group formed under the auspices of MLCommons to identify the 50 most-used languages in the world, chart them into a single dataset, and figure out a way to make that dataset useful. Members of the team came from Harvard and the University of Michigan as well as Alibaba, Oracle, Google, Baidu, Intel, and others. The researchers who put the dataset together were an international group hailing from the U.S., South America, and China. They met weekly for several years via conference call, each bringing a particular expertise to the project. The project eventually spawned two datasets instead of one — the People's Speech Dataset and the MSWC — which are individually detailed in whitepapers being presented this week at the annual Conference on Neural Information Processing Systems (NeurIPS). The People's Speech Dataset targets speech recognition tasks, while MSWC involves keyword spotting, which deals with the identification of keywords (e.g., “OK, Google,” “Hey, Siri”) in recordings.

People's Speech Dataset versus MSWC

The People's Speech Dataset involves over 30,000 hours of supervised conversational audio released under a Creative Commons license, which can be used to create the kind of voice recognition models powering voice assistants and transcription software. On the other hand, MSWC — which has more than 340,000 keywords with upwards of 23.4 million examples, spanning languages spoken by over 5 billion people — is designed for applications like call centers and smart devices. Previous speech datasets relied on manual efforts to collect and verify thousands of examples for individual keywords and were commonly restricted to a single language. Moreover, these datasets didn't leverage “diverse speech,” meaning that they poorly represented a natural environment — lacking accuracy-boosting variables like background noise, informal speech patterns, and a mixture of recording equipment.

Both the People's Speech Dataset and the MSWC also have permissive licensing terms, including commercial use, which stands in contrast to many speech training libraries. Datasets typically either fail to formalize their licenses, relying on end users to take responsibility, or are restrictive in the sense that they prohibit use in products bound for the open market.

“The working group envisioned several use cases during the development process. However, we are also aware that these spoken word datasets may find further use by models and systems we did not yet envision,” Achorn continued. “As both datasets continue to grow and develop under the direction of MLCommons, we are seeking additional sources of high-quality and diverse speech data. Finding sources which comply with our open licensing terms makes this more challenging, especially for non-English languages. On a more technical level, our pipeline uses forced alignment to match speech audio with transcript text. Although methods were devised to compensate for mixed transcript quality, improving accuracy comes at a cost to the quantity of data.”

Open source trend

The People's Speech Dataset complements the Mozilla Foundation's Common Voice, another of the largest speech datasets in the world, with more than 9,000 hours of voice data in 60 different languages.
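As a concrete footnote on what keyword spotting means in practice: a model assigns each short audio frame a score for whether the target keyword is being spoken, and a detector smooths those scores and fires when they cross a threshold. The Python below fakes the per-frame scores and shows only that detection step; it is an illustration under those assumptions, not an MSWC reference implementation or baseline.

# Minimal keyword-spotting detection step, with made-up per-frame scores
# standing in for a real acoustic model's output (illustrative only).

def detect_keyword(frame_scores, threshold=0.8, window=3):
    """Fire a detection when the moving average of keyword scores
    over `window` frames exceeds `threshold`; return frame indices."""
    hits = []
    for i in range(len(frame_scores) - window + 1):
        avg = sum(frame_scores[i:i + window]) / window
        if avg >= threshold:
            hits.append(i)
    return hits

# Per-frame probability that the keyword is present, as a real model
# might emit for successive ~30 ms frames of audio.
scores = [0.05, 0.10, 0.20, 0.85, 0.92, 0.95, 0.40, 0.10, 0.05]

hits = detect_keyword(scores, threshold=0.8, window=3)
if hits:
    print(f"keyword detected starting near frame {hits[0]}")
else:
    print("no keyword detected")

What datasets like MSWC provide is the labeled, per-language keyword audio needed to train the scoring model that this kind of detector sits on top of.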
In a sign of growing interest in the field, Nvidia recently announced that it would invest $1.5 million in Common Voice to engage more communities and volunteers and support the hiring of new staff. Voice technology has surged in adoption among enterprises in particular, with 68% of companies reporting they have a voice technology strategy in place, according to Speechmatics — an 18% increase from 2019. Among the companies that don't, 60% plan to adopt one in the next five years. Building datasets for speech recognition remains a labor-intensive pursuit, but one promising approach coming into wider use is unsupervised learning, which could cut down on the need for bespoke training libraries. Traditional speech recognition systems require examples of speech labeled to indicate what's being said, but unsupervised systems can learn without labels by picking up on subtle relationships within the training data. Researchers at Guinea-based tech accelerator GNCode and Stanford have experimented with using radio archives to create unsupervised systems for “low-resource” languages, particularly Maninka, Pular, and Susu in the Niger-Congo family. A team at MLCommons called 1000 Words in 1000 Languages is creating a pipeline that can take any recorded speech and automatically generate clips to train compact speech recognition models. Separately, Facebook has developed a system, dubbed wav2vec-U, that can learn to recognize speech from unlabeled data. "
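For readers who want to see what consuming such a pretrained speech model looks like in practice, the short sketch below transcribes an audio file with a wav2vec 2.0 checkpoint through the Hugging Face pipeline API. Note the caveats: "speech.wav" is a placeholder path (16 kHz mono audio is assumed), and the checkpoint shown is the supervised, fine-tuned wav2vec 2.0 model used purely for illustration; wav2vec-U itself is the variant trained without labels.

```python
# Sketch: transcribe an audio file with a pretrained wav2vec 2.0 checkpoint.
# "speech.wav" is a placeholder; the checkpoint is the fine-tuned model,
# shown only to illustrate how such models are consumed downstream.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")
result = asr("speech.wav")
print(result["text"])
```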
15,228
2,021
"XaaS and subscription models build customer loyalty, increase engagement, boost revenue | VentureBeat"
"https://venturebeat.com/2021/06/10/how-xaas-and-subscription-models-can-build-customer-loyalty-and-engagement-and-increase-revenue-for-software-companies"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages XaaS and subscription models build customer loyalty, increase engagement, boost revenue Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Dhaval Moogimane and Amy Fletcher, West Monroe Partners Technology companies are continuing their shift to As-a-Service (XaaS) models -– and for good reason: they’re lucrative, popular with investors, and allow software companies to deliver better service with more scale. This trend was clear in a West Monroe Partners survey distributed last year, in which 40 percent of private equity respondents said that between 50% and 70% of their tech portfolios sold and delivered products and services as a subscription. But there is a lot that companies should consider as they maximize their XaaS or subscription models. They especially need to think about customer centricity and scalability. In other words, it is way more than just a change in the billing model. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The market is growing If you need more convincing that the XaaS model is here to stay, consider this: Over the next half-decade, SaaS subscription services, a category within XaaS, are projected to see a compound annual growth rate of 12 percent, according to research from Gartner. Salesforce, a leading SaaS provider, reported revenue of $5.82 billion in the fourth quarter of 2020, up 20% year-over-year. The cloud communications company RingCentral reported a 32% increase in total revenue for the first quarter of 2021 to $352 million, in addition to a 34% annual increase in subscription revenue. Meanwhile, valuations for software companies employing a XaaS model have skyrocketed. Last year, Insight Partners paid $5 billion for the cloud management platform Veeam Software; Clayton, Dubilier & Rice bought Epicor Software from fellow private equity firm KKR for $4.7 billion ; and the Canadian buyout manager Onex paid New Mountain Capital $2.65 billion for the employee benefit platform OneDigital. Focused customer centricity The benefits of employing XaaS models are not limited to the valuation. To retain subscribers over time, companies using these models must continuously engage with their customers. If done well, this activity can make those customers “stickier,” empowering them as your advocates and increasing retention rates. Companies have tended to operate in a stovepipe, with customers invariably “handed-off” from one department to another. 
However, companies need to understand the importance of recognizing moments that matter and learn how to drive value at those critical stages of the journey. Roles and responsibilities must be clear so that each customer engagement builds on the prior one. Customer acquisition cost is a key metric that most tech companies closely manage. Typically, the cost to retain or grow a customer is a fraction of the cost to acquire a new one. That said, companies need to be cautious not to underinvest in retention and expansion — and a smart investment includes deep analytics to understand customers, digital workflows to guide value-oriented engagement, and frictionless service and support. Product design also has an integral role to play in this process. Traditional B2B software product investments tend to focus on building the next best feature. These efforts need to be balanced with investments in data-driven customer engagement to maximize usage and adoption. Companies that can orient themselves to place customers and users first will thrive in the subscription model. Of course, it is easier said than done. It's not for everyone There is constant debate on the merits of the pay-per-use model versus the subscription model. The predictability of the subscription model has its allure. Management teams and investors certainly like it, and even customers find it easier to budget. However, depending on a company's products and customers, as well as its maturity and competitive dynamics, a pay-per-use model may make more sense and also might disrupt the market. For instance, a startup company marketing an entirely new genre of application could have difficulty enticing customers to subscribe. With a totally new kind of software, how are consumers going to know the product is worth the recurring expense? In that case, a pay-per-use model makes sense as the startup introduces itself to the marketplace and customers begin to learn the value of its products. Then, as customers come to understand the vendor's unique value proposition, the company can explore longer-term subscription commitments. While the investment community likes the revenue predictability of the subscription business, it likely wouldn't dismiss a smart pay-per-use company with a high customer retention rate. In fact, investors might be attracted to such a company if they saw a clear opportunity for growth with a pivot to a subscription model. A major transition Companies transitioning from pay-per-use models to subscriptions should not underestimate the changes needed to be successful. Beyond the basics of designing thoughtful pricing strategies, a careful consideration of the customer and user journey is needed. On the pricing front, companies need to consider their customers' usage patterns and supplement that with a deep understanding of their cost structure to design appealing subscription packages. Let's say Netflix had a pay-per-use option charging $1 per movie. If a user averages six movies a month, they will likely not be inclined to move to $9.99 per month for unlimited access. However, they might be tempted with a $7.99 per month option. Defining the pricing breakpoints for a subscription requires a solid understanding of the customer value drivers and competitive dynamics. In addition, it necessitates a fundamental understanding of the cost structure of delivering the service. Netflix would need to determine what the costs are to deliver the service for $7.99 per month. 
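The breakpoint reasoning above is just arithmetic, and a toy calculation makes the point. The sketch below uses the hypothetical numbers from the passage ($1 per movie, six movies a month, flat prices of $9.99 and $7.99) to show how much extra a flat subscription costs at a given usage level; a real model would add cost-to-serve, churn, and expected usage growth.

```python
# Toy illustration of the pricing-breakpoint reasoning, using the article's
# hypothetical Netflix numbers. Not a real pricing model.
def monthly_gap(movies_per_month: int, per_movie: float, flat: float) -> float:
    """How much more the flat subscription costs than pay-per-use at this usage."""
    return flat - movies_per_month * per_movie

for flat in (9.99, 7.99):
    gap = monthly_gap(6, 1.00, flat)
    print(f"At 6 movies/month, a ${flat} flat plan costs ${gap:.2f} more than pay-per-use")
```

The smaller the gap, the easier it is for unlimited access and convenience to tip the customer into subscribing, which is why the $7.99 breakpoint is more tempting than $9.99.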
Outside of pricing, the customer engagement model also needs to be carefully designed. Companies must know when to offer the subscription to the customer that is buying on a per-use basis. Additionally, there needs to be a focus on ensuring continuous value to these customers. Netflix's recommendation engine and email prompts are examples of engagement to ensure consistent renewals. It's not just about the money — it's about the customer Moving to subscription and XaaS models is far more than pricing changes and valuation, in part because a successful subscription business is centered around customer value. Deep understanding of the customer and their value drivers is key, and designing an engagement model and workflow to guide the customer to achieving those value drivers is even more important. This is a significant change for some organizations. Implemented thoughtfully with the customer always top of mind, these models can make a company smarter, faster, and more responsive to the needs of its users — transforming it into a more successful business. "
15,229
2,013
"Tibco is 'the fastest growing enterprise software company,' CEO says -- despite sales missteps | VentureBeat"
"https://venturebeat.com/2013/03/28/tibco"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Tibco is ‘the fastest growing enterprise software company,’ CEO says — despite sales missteps Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Imagine this: You’re a founder of a Silicon Valley enterprise software company. It’s only 15 years old, but it’s already worth $3.5 billion. You’ve never lost money, and you’ve left your initial competition in the dust. The companies you’re now competing against are behemoths like IBM, Oracle, and SAP — all 35 or older. What’s more, you’ve used your relative nimbleness — your company only has 3,400 employees — to run circles around those bigger incumbents on the technology front. You’ve invested in more modern, efficient products. Seamless integration with public cloud? You offer that. ‘Big data’ analytics? You saw that coming years ago (yeah, you even wrote about book about it ). In-memory, all software processing, at millisecond speed? Check! And to top it off, you’re planning more fireworks soon. What’s not to like? VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Above: Forrester chart, April 2012 Well, last week, I went down to the offices of Palo Alto, Calif.-based Tibco to interview its founder and chief executive, Vivek Ranadive, who finds himself in that enviable position. However, it wasn’t a day of celebration. To the contrary, it came on the day before Tibco was to release its weak first quarter earnings: Tibco’s results showed a mere 5 percent growth in the first quarter, year on year, deeply disappointing analysts who expected much more. Moreover, on a conference call, Tibco lowered its forecast for the next quarter. Over two days, Tibco’s stock tanked about 20 percent — to just under $20, from close to $24. What happened? Sales leadership problems in North America and the U.K. contributed greatly to the weakness of Tibco’s results in those regions, Ranadive explained in the conference call to analysts. Growth in Tibco’s North America sales, which began slowing in the fourth quarter last year, continued to show weakness — even though Ranadive had made executive changes last quarter. “Well, it’s at all levels,” explained Ranadive during the analyst call , when asked how he was able to identify the “bad leadership” problem. Demand for Tibco product remained robust, Ranadive said, referring to a strong pipeline of deals in most regions. In Asia Pacific, sales were up 27 percent. So it was easy to identify the problem areas. 
Closure of only a couple more of the larger deals in our pipeline would have helped Tibco make its numbers. So when a few big deals didn’t get done in the U.S and the UK, Tibco went back to look at what happened. “There are all kinds of things I can point to,” he said. “So it’s… talking to a customer who was very happy and then going in and finding out that the customer doesn’t have a clue that we do these 4 other things,” Ranadive said. “Or not having the requisite expertise in a certain office in certain product categories or not doing — or not getting — in some cases, I mean, I’m embarrassed to say people not even being in the office.” A wider enterprise story: It’s sales, stupid! Tibco’s miss may embody a larger story that is playing out across Silicon Valley and beyond: As cloud and big data technology mature, mid-sized companies like Tibco and smaller ones must suddenly get more sophisticated about the way they sell, not just about what they sell, to the thousands of companies implementing this new technology. Turns out, it takes more than technology prowess and vision to succeed. Hell, even the big guys are having problems (Oracle referred to sales execution issues of its own, when it badly missed its earnings numbers last week ). Above: Tibco Spotfire on iPad There’s no doubt: Tibco’s technology is sleek, exemplified by fast-selling products like Spotfire, an in-memory tracker of key corporate data and transactions (analysts say Spotfire revenues are growing at 30 percent or more). “I’m bullish on Tibco,” says Mike Gualtieri of Forrester, who tracks the enterprise software sector, referring to Tibco technology. However, enterprise giants like Oracle, IBM, and SAP, which once sold only expensive “on premise” software (software not delivered over the web but downloaded onsite), are catching up. They’re starting to offer less expensive and more nimble cloud products as well as data analytic products — even if they’re often not as agile and cost-effective as Tibco’s. Armed with massive sales and marketing budgets, these larger players tell their existing customers that their offerings are “good enough.” A switch to Tibco’s solutions would be unnecessary and costly, they argue. So while Tibco tries hard to penetrate the market with those customers willing to experiment, the slightest misstep on the execution side with larger accounts can make the difference between hitting or missing earnings expectations in any given quarter. Exuding confidence Not that Ranadive is obsessed with the company’s stock price. “I’m not concerned with market cap right now,” he tells me. Ranadive, 57, is a slight man who likes to speak in confident, declarative statements. They leave little room for interpretation and can impress with their conviction — though they’re often clearly meant to provoke — a hallmark of a cocky entrepreneur. “We’re the fastest growing enterprise software company,” he boasted early in our talk, noting that for the last decade the company has grown 20 percent each year. And unlike Salesforce, which makes barely any profit, Tibco has spent $1 billion in profits buying up its own shares over the past six years, he said. Ranadive also likes to talk about the need to deliver value and invest to do that in the future. He’s proud that Tibco’s technology runs crucial businesses such as banks, exchanges, phone networks, and airlines — all of which would grind to a halt if Tibco were to go down, he said. By contrast, social networks, no matter how big, aren’t nearly so essential. 
“If Facebook goes down, what stops?” he asked me. “What would you not do? Maybe you wouldn’t know what some random friend ate for dinner.” And because Tibco has positioned itself for the future, the company’s growth is going to accelerate over the next few years, he said. The ‘visionary’ Granted, Ranadive has reason for his confidence. He trailblazed the enterprise software market from an early age, and he’s credited with digitizing Wall Street in the 1980s. That came in 1986, when he made a breakthrough innovation based on the computer “bus,” the system that governs data communication between the CPU, memory, and other I/O devices. Ranadive devised a software-based version of the bus that enabled messaging to happen on a computer network in real time. His first customers were big banks like Fidelity and Goldman Sachs, which used his technology to push market data such as stock quotes and news to their traders. By 1997, Ranadive had founded Tibco, naming it after his software bus innovation (The Information Bus, or “TIB”). But Ranadive kept innovating with the rise of the web in the 1990s. He built his messaging technology around the HTTP web server send-receive protocol. Tibco thus became the earliest software company offering customers real-time data communication in the cloud. From there, Tibco has continued to surge ahead, releasing a range of other software products nicely oriented for the future of real-time data analytics. The background stands in contrast with the IBMs and Oracles, which have older core architectures, and which were forced to acquire newer technology and tack it on — often in ways that are clunkier and more expensive. “They have all of the pieces,” said Forrester’s Gualtieri, of Tibco. “The thing I really like about them is that they’re so nimble. From a technology standpoint, and the way they serve customers, they can run circle around the IBMs and the Oracles.” Gualtieri says it helps to have a guy like Ranadive at the helm, who has written books about The Two-Second Advantage , or the need to for companies to make split-second decisions. “He’s a visionary,” says Gualtieri. ‘I know everything about you’ Take, for example, how the Golden State Warriors of the National Basketball Association is using Tibco’s software. Tibco created something called Fan Zone, which Ranadive calls a sort of “psychological router.” It detects what Warriors fans are doing in real time and then helps Golden State make offers to fans based on their behaviors, interests, or even moods. “I know everything about you,” Ranadive, who is a part owner of the Warriors, explained. “You’re eating pizza, I know it. You tweet, I see it. You’re going to be unhappy? I know it before you know it.” Tibco won’t comment about what specific cases are being used, because the Warriors want to keep that private. But the technology will let Golden State specify certain events it wants to respond to in real-time. If you tweet “my hot dog is cold,” “my seats are too far away,” or “parking costs too much here,” the Oakland Coliseum Arena would like to know that and respond accordingly. But it really gets more interesting with the application of geolocation data. If the Arena detects you’ve been standing in line at a hot dog stand for 15 minutes, the vendor can greet you with an offer, saying “Sorry, this one is on us!” Tibco is pushing the same sort of real-time analytics and messaging technology aggressively elsewhere, such as banking, hospitals, and retail. 
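The Fan Zone behavior described above is, at its core, rule-driven event processing: events stream in, rules match on their content, and a response is pushed out immediately. The sketch below is a generic, illustrative version of that pattern in plain Python; it is not Tibco's actual Fan Zone API, and the event kinds, field names, and responses are invented for the example.

```python
# Generic, illustrative sketch of rule-driven real-time event processing,
# not Tibco's actual API. Events are matched against rules and a response
# (an offer, an apology) is emitted as soon as a rule fires.
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Event:
    kind: str        # e.g. "tweet", "dwell_time"
    fan_id: str
    payload: dict

Rule = tuple[Callable[[Event], bool], Callable[[Event], str]]

RULES: list[Rule] = [
    (lambda e: e.kind == "tweet" and "cold" in e.payload.get("text", "").lower(),
     lambda e: f"Offer fan {e.fan_id} a replacement hot dog"),
    (lambda e: e.kind == "dwell_time" and e.payload.get("minutes", 0) >= 15,
     lambda e: f"Comp fan {e.fan_id}'s order: 'Sorry, this one is on us!'"),
]

def react(events: Iterable[Event]) -> None:
    for event in events:
        for matches, respond in RULES:
            if matches(event):
                print(respond(event))   # in practice: push to an offer service

react([Event("tweet", "fan42", {"text": "my hot dog is cold"}),
       Event("dwell_time", "fan7", {"minutes": 17})])
```

Production systems layer scale, geolocation, and persistence on top, but the detect-match-respond loop is the same.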
Macy’s is using Tibco to make real-time offers to customers based on their interests, Ranadive said. (The chart at right illustrates Tibco’s event processing approach.) Tibco is making other acquisitions to help build this out, especially for customers to track mobile customers. Just this week, Tibco acquired a French location analytics company Maporama , which Tibco will rolled into its Spotfire offering. Spotfire, along with Tibco’s messaging product, Tibbr, and some other cloud products are all growing quickly. In five years, half of Tibco’s revenue will come from products “I don’t have right now,” Ranadive said. When loyalty is a personal weakness For now, though, it’s the big sales to larger enterprise customers where Tibco needs to apply itself. These require more focus on the relatively mundane area of sales and marketing, where careful execution counts for more than the actual prowess of your technology. And its here where analysts grilled Ranadive for missteps during the quarter, where Tibco lost some deals to the bigger players, in part because of what Ranadive called “a void in leadership” in U.S. sales. Ranadive shouldered the blame for this during my interview. He responded that he’d wanted to make a leadership change in North American sales for some time, but he hadn’t moved fast enough. When I asked him what he thought his great personal weakness is, Ranadive said, “Sometimes I don’t go with my gut … that’s cost me from time to time.” He likes to give his executives relative autonomy, he said, and likens his leadership style to that of a jazz band conductor, with tolerance for a lot of improvisation — and he wants to let people come around to his point of view. However, he’s loyal, he said, and this can often let people stay in a job “when the company has grown bigger than them.” That’s not to say that Ranadive is touchy-feely. He bristles when I raised Google’s practice of allowing employees to spend 20 percent of their time to dream up projects. He says there’s a myth about innovation, that somehow it stems from having a cushy job, spare time to dream up projects, and having freedoms like bringing a dog to work. He disagrees. He says he wants his employees on site (he’s in full agreement with Yahoo CEO Marissa Mayer’s policy of forcing employees into the office ). He likes to drive his employees hard, he said. His version of innovation, he says, is to “shut the doors, lock them in a room, turn off the light, cut off the water, and if green shoots come out from under the door, you have innovation.” Sales and marketing help badly needed That’s the sort of focus that Ranadive needs to translate to his sales and marketing efforts. Ranadive has made changes (in October, he revamped both his sales and marketing teams ), but analysts like Forrester’s Gualtieri worry it hasn’t been fast enough. Today’s enterprise sales teams need to communicate clearly to IT executives who are overwhelmed by the plethora of technology options in front of them. Most customers have a mix of existing on-premise technology investments and a growing number of public cloud experiments. Integrating these, and calculating what makes sense to spend on improvement, can be as much art as science. So Tibco is forced to go the mat to make many of its big new sales. Take the case of a “travel and hospitality” client during the first quarter. IBM initially stole the deal even after Tibco had cultivated the customer for weeks, Ranadive explained on last week’s analyst call. 
But Tibco fought back, arguing that its product is superior and more agile. Tibco won the deal. The problem is, Tibco hasn't been in fighting mode nearly enough. Ranadive says he's more determined than ever to fix that. “If we come against them,” he says of meeting Oracle in a sales battle, “we will crush them.” Just as urgent is the need to fix marketing, according to Forrester's Gualtieri. Few people know, for example, that Tibco moved very early into big data analytics with its acquisition of a precursor technology to the hot open source statistical computing language R. Instead, smaller companies have defined themselves and taken the lead as big data pioneers. “When you think of R, you think of Revolution Analytics,” Gualtieri said, referring to a private Silicon Valley company founded in 2007. Without more marketing clout, Tibco risks coming across too much like a me-too enterprise company, living in the shadow of IBM and Oracle. (This is one of the reasons an HP acquisition of Tibco would make sense. HP needs compelling cloud and big data technology, and would have brought marketing muscle to Tibco. A deal was rumored to be in the works last year, but fell through.) “They have to do some more work,” said Gualtieri of Tibco. “They've succeeded in saying they're ‘full-stack,’” he continued. “But now they have to make a perception transition, and they haven't done it as fast as they need to.” Top photo credit: Tibco "
15,230
2,021
"Citrix acquires project management platform Wrike for $2.25 billion | VentureBeat"
"https://venturebeat.com/2021/01/19/citrix-acquires-project-management-platform-wrike-for-2-25-billion"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Citrix acquires project management platform Wrike for $2.25 billion Share on Facebook Share on X Share on LinkedIn Wrike's homepage Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Citrix has confirmed reports from earlier this week that it is buying enterprise-focused work management platform Wrike in an all-cash deal worth $2.25 billion. Founded in 2006, San Jose-based Wrike is a project management platform sold to businesses through a software-as-a-service (SaaS) model, allowing teams to monitor and track projects, workflows, deadlines, and more. The company claims some notable customers, including Google, Dell, Snowflake, Okta, and Airbnb, and had raised around $26 million in external funding, but it was acquired by Vista Equity Partners in 2018 for a reported $800 million. Citrix, a 30-year-old publicly traded software company based in Santa Clara, California, provides myriad tools spanning cloud computing, servers, networking, and more. Among these is Citrix Workspace , a virtualization platform that allows enterprises to deploy apps and desktops remotely. It enables them to oversee and secure all the devices that connect to its network from wherever they are, including controlling access to apps and files, and this appears to be the world Wrike will inhabit. Future of work The deal makes a great deal of sense, as both platforms are essentially built with the distributed workforce in mind. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Citrix will now be able to offer cloud-based collaborative work management tools to its thousands of customers, which include Deloitte, Hewlett Packard, and SAP. This is pertinent at a time when remote work has become the norm for millions of people. And Wrike gains instant access to thousands more customers, with the combined company serving more than 400,000 clients, according to Citrix. Other players operating in Wrike’s space include Asana , which went public just a few months ago with soaring shares giving it a current market cap of more than $6 billion. Citrix said the two companies will operate independently until the deal closes, which it expects to happen in the first half of 2021. Wrike founder and CEO Andrew Filev will continue to head up the Wrike platform post-acquisition and will report directly to Citrix CFO Arlen Shenkman. 
"
15,231
2,021
"Stacker lets anyone transform spreadsheets and databases into web apps | VentureBeat"
"https://venturebeat.com/2021/01/26/stacker-lets-anyone-transform-spreadsheets-and-databases-into-web-apps"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Stacker lets anyone transform spreadsheets and databases into web apps Share on Facebook Share on X Share on LinkedIn Stacker Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Low-code and no-code development tools were one of the big trends to emerge last year, with Google snapping up enterprise-focused AppSheet and Amazon’s AWS launching Honeycode. This trend seems to be accelerating into 2021, with no-code development platform Webflow raising $140 million at a $2.1 billion valuation earlier this month and newcomers such as automated web testing platform Reflect bolstering their coffers. Of course, the low-code movement is far from a new phenomenon. Microsoft’s omnipresent Excel is a good example, as it allows financial analysts and other non-coders to manipulate data and generate insights using the software’s built-in functions and cell structure. WordPress, which powers around a third of all websites , adheres to a similar philosophy for web developers. The fact that almost every company is now a software company on some level means this trend will continue to grow. According to some reports , the global low-code development market is expected to generate $187 billion in revenue annually by 2030, up from $10.3 billion in 2019, while Gartner predicts that “citizen developers” within large enterprises will outnumber professional developers by at least 4 times by 2023. It’s against this backdrop that Stacker is setting out to help businesses transform spreadsheets and databases into fully functional apps and portals in minutes, without typing a single line of code. The company today announced it has raised $1.7 million in a round of funding led by Initialized Capital, with participation from YC, Pioneer Fund, and Makerpad. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Stacking up Founded out of London back in 2017, Stacker was originally a drag-and-drop no-code app builder that could be used to create marketplaces, social networks, and any type of web app. Although users weren’t technically writing any code, the company found that they still needed knowledge of programming and a developer mindset to use the app builder to its full capacity, which is why the company pivoted and emerged from the Y Combinator (YC) summer 2020 program in its current form. Stacker is aiming to get to the heart of the humble spreadsheet’s biggest problem. 
They’re great for modeling and managing data, but spreadsheets are not particularly user friendly when it comes to sharing data with other people. Sharing spreadsheets with team members or customers can open a Pandora’s box of potential issues — users might not understand the data for starters, and there may be huge segments that contain confidential information. Stacker serves as an “app layer” on top of the spreadsheet, providing a friendly interface for the raw data alongside added features such as user logins, access controls, and more. Above: Stacker For now, Stacker officially connects with data in Google Sheets and Airtable, which is a sort of database platform with a spreadsheet interface and is firmly embedded in the low-code movement. “Stacker is like the app that goes on top of that database,” Stacker cofounder and CEO Michael Skelly told VentureBeat. “It’s suitable for giving to your customers, or for people in your organization who don’t understand how the spreadsheet works but still need to interact with it.” There are a bunch of similar tools on the market already, such as Stacker’s fellow YC alum Retool , which raised $50 million in a Sequoia-led round of funding a few months ago; Quickbase , which private equity firm Vista Equity acquired a $1 billion majority stake in back in 2019; and publicly traded Appian , whose shares soared more than sixfold during the pandemic, giving the company a value of $13 billion. Throw into the mix Google’s AppSheet, Amazon’s Honeycode, and a swathe of smaller players such as Bubble , and it’s clear the low-code space is busy. But Stacker believes most of the existing no-code app builders are either aimed at entrepreneurs trying to create a big company or engineers and IT departments looking to build apps faster or cheaper. Skelly said he hopes to set his platform apart by letting people without any coding knowledge create web apps for both internal and customer-facing scenarios. This could be a customized CRM, a self-serve customer account portal for tracking orders or viewing project updates, or a field worker app aimed at the mobile workforce. Above: Michael Skelly, cofounder and CEO of Stacker Among Stacker’s clients is Segment, a customer data infrastructure unicorn and another former YC alum. “Segment has created a fully branded portal for their customers to access deals from their startup program,” Skelly said. “People have [also] created learning platforms, CRMs, and even two-sided marketplaces using Stacker. ProjectN95 built out a marketplace for health care providers and PPE manufacturers to connect during COVID, powered by Stacker.” Stacker is also rolling out support for additional enterprise tools and platforms as part of an early alpha program, including messaging platform Intercom, Stripe, and SQL databases. “Over the next year, we’ll be expanding that to cover the most popular SaaS tools and data sources,” Skelly said. “The thing we’re really excited about is letting you bring in all the data from across your organization into one place so that your team and customers can interact with it.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
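To ground the "app layer over a spreadsheet" idea described above, here is an illustrative sketch of reading records from an Airtable base over its REST API and exposing only selected, non-confidential fields, the way a customer-facing portal would. The base ID, table name, token variable, and field names are placeholders, and this is not Stacker's implementation.

```python
# Illustrative sketch: read Airtable records and expose only selected fields,
# the kind of filtered view a portal layer presents. All names are placeholders.
import os
import requests

AIRTABLE_TOKEN = os.environ["AIRTABLE_TOKEN"]      # personal access token
BASE_ID, TABLE = "appXXXXXXXXXXXXXX", "Orders"     # hypothetical base and table

def fetch_visible_records(visible_fields=("Customer", "Status")):
    resp = requests.get(
        f"https://api.airtable.com/v0/{BASE_ID}/{TABLE}",
        headers={"Authorization": f"Bearer {AIRTABLE_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    # Keep only the fields a customer-facing view should show.
    return [
        {k: rec["fields"].get(k) for k in visible_fields}
        for rec in resp.json()["records"]
    ]

if __name__ == "__main__":
    for row in fetch_visible_records():
        print(row)
```

Platforms in this space add logins, roles, and editing on top, but the underlying read path against the spreadsheet-style data source looks broadly like this.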
"
15,232
2,021
"Amazon launches SageMaker Canvas for no-code AI model development | VentureBeat"
"https://venturebeat.com/2021/11/30/amazon-launches-sagemaker-canvas-for-no-code-ai-model-development"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Amazon launches SageMaker Canvas for no-code AI model development Share on Facebook Share on X Share on LinkedIn The Amazon logo is seen at the Young Entrepreneurs fair in Paris Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. During a keynote address today at its re:Invent 2021 conference, Amazon announced SageMaker Canvas, which enables users to create machine learning models without having to write any code. Using SageMaker Canvas, Amazon Web Services (AWS) customers can run a machine learning workflow with a point-and-click user interface to generate predictions and publish the results. Low- and no-code platforms allow developers and non-developers alike to create software through visual dashboards instead of traditional programming. Adoption is on the rise, with a recent OutSystems report showing that 41% of organizations were using a low- or no-code tool in 2019/2020, up from 34% in 2018/2019. “Now, business users and analysts can use Canvas to generate highly accurate predictions using an intuitive, easy-to-use interface,” AWS CEO Adam Selipsky said onstage. “Canvas uses terminology and visualizations already familiar to [users] and complements the data analysis tools that [people are] already using.” AI without code With Canvas, Selipsky says that customers can browse and access petabytes of data from both cloud and on-premises data sources, such as Amazon S3, Redshift databases, as well as local files. Canvas uses automated machine learning technology to create models, and once the models are created, users can explain and interpret the models and share the models with each other to collaborate and enrich insights. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “With Canvas, we’re making it even easier to prepare and gather data for machine learning to train models faster and expand machine learning to an even broader audience,” Selipsky added. “It’s really going to enable a whole new group of users to leverage their data and to use machine learning to create new business insights.” Canvas follows on the heels of SageMaker improvements released earlier in the year, including Data Wrangler, Feature Store, and Pipelines. Data Wrangler recommends transformations based on data in a target dataset and applies these transformations to features. Feature Store acts as a storage component for features and can access features in either batches or subsets. 
As for Pipelines, it allows users to define, share, and reuse each step of an end-to-end machine learning workflow with preconfigured customizable workflow templates while logging each step in SageMaker Experiments. With upwards of 82% of firms saying that custom app development outside of IT is important, Gartner predicts that 65% of all apps will be created using low- and no-code platforms like Canvas by 2024. Another study reports that 85% of 500 engineering leaders think that low- and no-code will be commonplace within their organizations as soon as 2021. If the current trend holds, the market for low- and no-code could climb to between $13.3 billion and $17.7 billion in 2021 and between $58.8 billion and $125.4 billion in 2027. "
15,233
2,021
"Log4j vulnerability opened the door to the ransomware operators | VentureBeat"
"https://venturebeat.com/2021/12/17/log4j-vulnerability-opened-the-door-to-the-ransomware-operators"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Log4j vulnerability opened the door to the ransomware operators Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. For the cybercriminal operators who specialize in ransomware, business was already very good prior to the disclosure of the simple-to-exploit vulnerability in Apache’s widely used Log4j logging software. But numerous indicators suggest that due to the Log4j vulnerability, known as Log4Shell, the opportunities in the ransomware business are about to get even more abundant. To the detriment of everyone else. Defenders, of course, are doing all they can to prevent this from happening. But according to security researchers, signs have emerged suggesting that ransomware attacks are all but inevitable over the coming months thanks to the flaw in Log4j, which was disclosed just over a week ago. Selling access One troubling indicator in recent days is the activity of “initial access brokers” — cybercriminals whose specialty is getting inside a network and then installing a backdoor to enable entry and exit without detection. Later, they sell this access to a ransomware operator who carries out the actual attack — or sometimes to a “ransomware-as-a-service” outfit, according to security researchers. Ransomware-as-a-service operators lease out ransomware variants to other attackers, saving them the effort of creating their own variants. Microsoft reported this week that it has observed activities by suspected access brokers, linked to ransomware affiliates, who have now exploited the vulnerability in Log4j. This suggests that an “increase in human-operated ransomware” will follow against both Windows and Linux systems, Microsoft said. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! At cybersecurity giant Sophos, the company has spotted activity involving attempted installation of Windows backdoors that points to access brokers, said Sean Gallagher, a senior threat researcher at Sophos Labs. “You can assume they’re likely access brokers, or other cybercriminals who may sell access on the side,” Gallagher told VentureBeat. Ransomware gang activity Other concerning developments include a report from cyber firm AdvIntel that a major ransomware gang, Conti, has been found to be exploiting the vulnerability in Log4j to gain access and move laterally on vulnerable VMware vCenter servers. 
In a statement responding to the report, VMware said that “the security of our customers is our top priority” and noted that it has issued a security advisory that is updated regularly, while users can also subscribe to its security announcements mailing list. “Any service connected to the internet and not yet patched for the Log4j vulnerability (CVE-2021-44228) is vulnerable to hackers, and VMware strongly recommends immediate patching for Log4j,” the company said in the statement. It may still be weeks or months before the first successful ransomware attacks result from the Log4Shell vulnerability, Gallagher noted. Ransomware operators will often slowly export a company’s data for a period of time before springing the ransomware that encrypts the company’s files, Gallagher said. This allows the operator to later extort the company in exchange for not releasing their data on the web. “It could be a while before we see the real impact — in terms of what people have gotten access to and what the economic impact is of that access,” Gallagher said. A growing threat The ransomware problem had already gotten much worse this year. For the first three quarters of 2021, SonicWall reported that attempted ransomware attacks surged 148% year-over-year. CrowdStrike reports that the average ransomware payment climbed by 63% in 2021, reaching $1.79 million. Sixty-six percent of companies have experienced a ransomware attack in the previous 12 months, according to CrowdStrike’s recent report, up from 56% in the company’s 2020 report. This year’s spate of high-profile ransomware incidents included attacks against fuel pipeline operator Colonial Pipeline, meat processing firm JBS Foods, and IT management software firm Kaseya — all of which had massive repercussions far beyond their corporate walls. The disclosure of the Log4j vulnerability has been met with a herculean response from security teams. But even still, the likelihood of ransomware attacks that trace back to the flaw is high, according to researchers. “If you are a ransomware affiliate or operator right now, you suddenly have access to all these new systems,” Gallagher said. “You’ve got more work on your hands than you know what to do with right now.” Widespread vulnerability Many applications and services written in Java are potentially vulnerable to Log4Shell, which can enable remote execution of code by unauthenticated users. Researchers at cybersecurity giant Check Point said they’ve observed attempted exploits of the Log4j vulnerability on more than 44% of corporate networks worldwide. Meanwhile, a discovery by cyber firm Blumira suggests there may be an additional attack vector in the Log4j flaw, whereby not just vulnerable servers — but also individuals browsing the web from a machine with unpatched Log4j software on it — might be vulnerable. (“At this point, there is no proof of active exploitation,” Blumira said.) Ransomware delivery attempts have already been made using the vulnerability in Log4j. Bitdefender and Microsoft this week reported attempted attacks, using a new family of ransomware called Khonsari, that exploited the flaw. Microsoft also said that an Iranian group known as Phosphorus, which has previously deployed ransomware, has been seen “acquiring and making modifications of the Log4j exploit.” At the time of this writing, there has been no public disclosure of a successful ransomware breach that exploited the vulnerability in Log4j. 
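Because so many Java applications bundle Log4j indirectly, one common first triage step is simply finding which JAR files on a host contain the JndiLookup class associated with CVE-2021-44228. The sketch below is an illustrative local scan only; the search root is a placeholder, it does not inspect JARs nested inside other JARs, and definitive remediation is upgrading to a fixed Log4j release per Apache's advisory.

```python
# Illustrative triage scan for JARs bundling JndiLookup.class (Log4Shell,
# CVE-2021-44228). Not a substitute for patching; the root path is a placeholder.
import zipfile
from pathlib import Path

def find_suspect_jars(root: str):
    for jar in Path(root).rglob("*.jar"):
        try:
            with zipfile.ZipFile(jar) as zf:
                if any(name.endswith("JndiLookup.class") for name in zf.namelist()):
                    yield jar
        except (zipfile.BadZipFile, OSError):
            continue   # unreadable or corrupt archive; skip it

if __name__ == "__main__":
    for hit in find_suspect_jars("/opt/apps"):    # placeholder search root
        print(f"Contains JndiLookup.class: {hit}")
```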
“We haven’t necessarily seen direct ransomware deployment, but it’s just a matter of time,” said Nick Biasini, head of outreach at Cisco Talos, in an email this week. “This is a high-severity vulnerability that can be found in countless products. The time required for everything to be patched alone will allow various threat groups to leverage this in a variety of attacks, including ransomware.” What about Kronos? Thus far, there is still no indicator on whether last Saturday’s ransomware attack against Kronos Private Cloud had any connection to the Log4j vulnerability or not. The attack continues to be widely felt, with paychecks potentially delayed for workers at many companies that use the software for their payrolls. In an update Friday, the parent company of the business, Ultimate Kronos Group (UKG), said that the question of whether Log4j was a factor is still under investigation — though the company noted that it did quickly begin patching for the vulnerability. “As soon as the Log4j vulnerability was recently publicly reported, we initiated rapid patching processes across UKG and our subsidiaries, as well as active monitoring of our software supply chain for any advisories of third-party software that may be impacted by this vulnerability,” the company said. “We are currently investigating whether or not there is any relationship between the recent Kronos Private Cloud security incident and the Log4j vulnerability.” The company did not have any further comment when reached by VentureBeat on Friday. Hypothetically, even if the attack was enabled by the Log4j vulnerability, it’s “entirely possible” that UKG might never be able to pinpoint that, Gallagher noted. “There are lots of times when you have no way to know what the initial point of access for a ransomware operator was,” he said. “By the time they’re done, you’re poking through the ashes with a rake trying to find what happened. Sometimes you can find pieces that tell you [how it occurred]. And sometimes you don’t. It’s entirely possible that, if it was Log4j, they would not have any idea.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,234
2,022
"Softr, a no-code platform for building business apps on Airtable data, raises $13.5M | VentureBeat"
"https://venturebeat.com/2022/01/18/softr-a-no-code-platform-for-building-business-apps-on-airtable-data-raises-13-5m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Softr, a no-code platform for building business apps on Airtable data, raises $13.5M Share on Facebook Share on X Share on LinkedIn Softr: Building blocks Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The burgeoning no-code/low-code movement is showing little sign of slowing down, with companies across the industrial spectrum looking for ways to increase productivity by allowing anyone to create software through visual dashboards versus traditional code-based programming. One company that’s looking to capitalize on this trend is Softr , a no-code web-app building platform that launched a year ago, and which has gone on to attract more than 35,000 business and creator customers. The German company today announced that it has raised $13.5 million in a series A round of funding, as it sets out building what it calls “the world’s largest ecosystem for building no-code applications.” Going no-code Unlike popular template-based web building platforms such as Wix or Squarespace, which are more geared toward static websites, Softr is all about enabling feature-rich, interactive, web apps replete with CRUD (create, read, update, delete) operations and access controls around authentication, roles and permissions in business and consumer applications. Using Softr, anyone can transform their business data and manual workflows into custom apps, spanning marketplaces, online communities, internal tools, client portals, directories, and more. But to do so, they must be active users of Airtable , the popular business collaboration tool that sits somewhere between a spreadsheet and a database. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Above: Softr can be used to build marketplaces for anything In truth, there are already countless similar-sounding tools out there already, from the heavily VC-backed Webflow through to newer entrants such as Stacker , Retool , Appian , Google’s AppSheet , Amazon’s Honeycode … the list goes on. So what is Softr’s core raison d’être , in a field seemingly overflowing with options? “Softr introduces a fundamentally different approach to building custom apps,” Softr CEO Mariam Hakobyan told VentureBeat. “Its unique approach is the Lego-like building experience and out-of-the-box business logic — instead of creating the apps pixel by pixel, it lets you create it like Lego.” Indeed, Softr constitutes building blocks , with each block representing a “logical piece” of the application. 
These blocks span frontend functionality, business logic, and backend tooling, including authentication, payments, charts, lists, calendars, and more. The Kanban block, as you might expect, allows users to display Airtable data in a Kanban board layout.

Above: Softr: A Kanban block

With this approach, Hakobyan said that Softr has cracked "the golden middle," making the web app development process easy without compromising on functionality. "Any business user can build customer portals, internal tools and dashboards within 30 mins, with zero learning curve, powered by their data," she said. "This approach also makes all the applications created with Softr automatically responsive and work great on mobile and tablet, which is a hassle in every other no-code platform and has to be built from scratch."

But what if a company doesn't use Airtable? For now that would be a hurdle, but Softr is planning to become a "data source agnostic" platform, with support for additional sources such as Google Sheets, relational databases, REST APIs, and more.

Show me the money

Founded out of Berlin in 2019, Softr had previously raised a small $2.2 million seed round of funding. For its series A round, the company has ushered in a slew of institutional and angel investors, including lead backer FirstMark Capital, AtlanticLabs, and Wunderlist cofounder Christian Reber.

The company said that in addition to expanding its data source support, it plans to extend Softr beyond its own platform by creating a template marketplace that enables anyone to create apps in the form of a template and sell them. Softr also plans to launch a component marketplace where anyone can find and reuse third-party components built by the community.

On the surface, Softr and its ilk might seem best suited to smaller businesses with fewer resources, but the truth of the matter is that a broader engineering shortage means that even larger businesses have to use their talent strategically, which means enabling non-technical employees — from marketing through to operations — to create their own applications.

"We are facing a shortage of engineering talent, tech is moving faster than education and skills, and businesses need to digitize their operations and processes to be in par with the competition, especially during COVID-19 times," Hakobyan said. "Engineers' precious time goes into building the core product of the company, and business teams like sales, marketing, operations, customer success are left behind, and have no way to build and use the tools they need to get their work done internally or externally."
"
15,235
2,022
"Low-code app development platform Crowdbotics raises $22M | VentureBeat"
"https://venturebeat.com/2022/01/20/low-code-app-development-platform-crowdbotics-raises-22m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Low-code app development platform Crowdbotics raises $22M Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Let the OSS Enterprise newsletter guide your open source journey! Sign up here. As companies expanded their tech investments during the pandemic, developers became saddled with larger workloads. Fifty-five percent of people in IT services say that the amount of work they’re expected to perform increased in 2021, according to a Statista survey. Developers, among other employees who transitioned to remote work as a result of the pandemic, claim they work more hours during the week than they did before — which is perhaps why 83% say they’re suffering from burnout. Low-code development tools have been heralded as a potential solution, particularly as the worldwide shortage of developers continues to grow. Low-code tools enable non-developers from different business units like HR, finance, and procurement to build custom apps without having to write code, leveraging visual drag-and-drop interfaces and preconfigured templates. According to one source, low-code tools have the potential to reduce the development time by 90%. And using a low-code platform can save enterprises on average $1.7 million annually in operational costs, The New Stack found. One of the vendors benefiting from the low-code development boom is Crowdbotics , which today announced that it raised $22 million in funding (a combination of seed and series A) from Victor Echevarria at Jackson Square Ventures with participation from Homebrew, Bee Partners, UC Berkeley’s House Fund, Harrison Metal, and PacWest. Based in Berkeley, California, Crowdbotics’ platform is designed to help customers to create low-code apps even in highly regulated environments like health care, finance, education, and defense. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Low-code app development Crowdbotics was founded in 2016 by Anand Kulkarni, who previously started LeadGenius, a platform that crawls the web for sales lead opportunities. He got the idea for Crowdbotics after graduating from UC Berkeley with a Ph.D. in machine learning and operations research, as well as funding from the U.S. National Science Foundation. Crowdbotics allows over 1,400 customers to launch React Native and Django apps without having to learn programming languages. 
With Crowdbotics, users can iterate on app screens and data models in a visual editor and deploy to hosting platforms like Heroku. Under the hood, the platform produces real code synced to a GitHub repository, allowing developers to audit and customize it after the fact.

"Product teams spend most of their time bogged down in the routine parts of software creation, not the interesting stuff. But the usual solution, low-code tools, aren't extensible or developer-friendly enough to support serious businesses — they're black boxes that are never as good as real code," Kulkarni told VentureBeat via email. "The team started Crowdbotics to let domain experts from outside tech turn ideas into code. The company's product … lets creators, innovators, and product teams build full-code applications using reusable building blocks and on-demand engineering pulled from all over the web."

Above: Crowdbotics' app development dashboard.

Crowdbotics also offers managed app development services, letting companies hire project managers and developers to estimate, scope, build, test, and launch apps. Customers can hire additional development resources from Crowdbotics' dashboard as needed, or use tools that intelligently select the software that best fits a project, set up daily monitoring and backups of apps, and enable automatic error tracking and security updates.

As of May 2021, Crowdbotics claims that over 20,000 apps have launched on its platform, including "mission-critical" health care apps, venture-backed software products earning millions in revenue, financial trading engines, learning management platforms, and government tools.

"The Crowdbotics platform is at its core enablement of ideas to code at much cheaper price point," Kulkarni continued. "Crowdbotics plays nice with IT by pairing the simplicity of low-code tooling with the transparency and customization of real, open source code. Executives can still enable their non-technical employees to build with low-code features like … storyboard tools, functional feature modules, and visual data mapping. However, developers will also find robust support for their existing workflows, including the ability to edit the project's code directly, configure microservices, and sync changes to a linked GitHub repo. If they want to accelerate development, organizations can complement their internal team's output via our platform's integrated hiring cloud."

Expanding market

Formstack found in a recent study that 20% of workers have adopted no-code tools — 66% of them within the past year and 41% in the past six months. Gartner forecasts that low- and no-code app platforms will account for 65% of all app development by 2024. Meanwhile, Forrester expects the market for low-code development platforms to increase to $21.2 billion by 2022, up from $3.8 billion in 2017.

But low-code tools aren't perfect. Organizations say that a lack of experience with low-code platforms remains one of the biggest hurdles to adoption. A study by OutSystems, meanwhile, found that 37% of organizations are concerned about "lock-in" with a low-code vendor and that 32% don't believe they could build the types of apps they need with low-code tools.
Crowdbotics — whose subscribers include teams at Uber, McKinsey, Meta (formerly Facebook), and the U.S. Air Force — claims its platform solves the lock-in problem by allowing companies to access their app's source code, which they can repurpose as they wish. "You can export your code, host it on your own, and develop on your own. This is easy to do with Crowdbotics," the company writes on its website.

Crowdbotics competes with companies including Webflow, which recently raised $140 million for its set of web app development tools and services. In March, no-code development startup Airtable raised $270 million at a $5.77 billion post-money valuation. Unqork, another startup creating a no-code enterprise app development suite, last year secured $207 million in a funding round that catapulted the company's valuation to more than $2 billion. According to SpreadsheetWeb, low- and no-code vendors raised $2.3 billion during the first three quarters of 2021, more than doubling the total investment they received in 2020.

Crowdbotics, which has 50 employees, has raised $28 million in total. "We ended 2021 comfortably north of a $10 million run rate, which is 300% growth from a year ago. In 2022 we'll triple revenues again," Kulkarni added. "Crowdbotics raised the [latest capital] to extend its core … product, which will add more prefab features and architectures, improved design tooling … and a more powerful product management-as-a-service suite. We also intend to introduce niche feature sets for target industries, including health care, finance, education, tech, media, and Web 3.0."
"
15,236
2,021
"Quantum computing takes important step with first public company | VentureBeat"
"https://venturebeat.com/2021/03/12/quantum-computing-takes-important-step-with-first-public-company"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Quantum computing takes important step with first public company Share on Facebook Share on X Share on LinkedIn This week saw the first step towards a public quantum computing hardware company with IonQ’s SPAC merger. IonQ should be applauded for its technical achievements that underpin this agreement. This moment recognizes the massive potential of quantum technology, despite the substantial R&D challenges that remain. The quantum computing industry still needs to address fundamental technology challenges in hardware and software for quantum computing to achieve its potential. Software developers must understand and build for the capacity of near-term hardware, limiting noise and errors, with slimmed-down algorithms constructed to the specifications of each hardware device. Hardware companies like IonQ must develop hardware that scales to greater qubit numbers and circuit depth that can run non-trivial applications. Both sides continue to advance, and it’s this combination that is needed to deliver meaningful commercial, industrial, and scientific applications. We’ve seen numerous significant advancements in quantum hardware and software that get us closer to realizing quantum computing’s promise. It is now possible to see a path towards quantum advantage where quantum computers can solve significant problems beyond classical computers’ capabilities. We can see a future state where quantum computing can advance science and industry in new ways, including simulating new materials for batteries, superconductors and more. These findings lead to better energy storage and clean energy faster in ways that are far more capital efficient than current approaches. Despite its tremendous promise, it’s important to remember that realizing quantum computing is not an overnight trip. It is a decades-long journey. Yet we could — and should — see promising applications on near-term quantum hardware within a few years. There are three critical caveats about quantum computing that we must address: 1. Quantum computing will not replace classical computing. Quantum computing is often mischaracterised as souped-up supercomputing. Potential investors need to understand that this is not a replacement or upgrade for classical supercomputing, nor will it replace desktop or mobile devices. Quantum computing is also not a drop-in replacement for existing components of a company’s data infrastructure. To make the best use of quantum computers, you need to have isolated the components of a problem that is well-suited to quantum speedups. 2. Quantum computing is not immediate. 
Despite some headlines, we are not a year away from quantum computing upending cybersecurity, finance, and other industries overnight. Quantum hardware is not yet robust enough to limit noise and errors, nor are there software programs and algorithms optimized to run on near-term hardware. Developing quantum hardware is capital intensive and requires patient investors who are willing to see it through for the long term.

3. Quantum computing isn't for every use case. Quantum computing is uniquely suited to problems that relate to quantum mechanics. This is especially useful for numerous frontier science applications that currently require billions of dollars to develop and test in the laboratory. In the longer term, quantum computing has the potential to speed up certain classes of optimization problems, and beyond. But it is not a magic solution for every big data, machine learning, finance, or other high-profile computing challenge of the year.

As a co-founder of a quantum software startup and a leader at one of the most prestigious universities developing the theory and technology of quantum computers, I see the need for a robust ecosystem for quantum computing. Careful investment is critical to realize the potential and promise of quantum computing over its decades-long journey. The industry welcomes patient investment and encourages new talent to enter the field to help deliver on that promise. Investors need to be aware of the long-term research and development required to bring quantum computers out of labs and into the world as a sustainable, high-growth frontier computing industry.

Although we have not yet reached the point of quantum advantage, leading companies in fields as varied as materials, pharmaceuticals, and finance are setting up teams to work with quantum computing and evaluate its potential to transform their businesses. Given the first-mover advantage that could accrue to those who can best take advantage of this revolutionary technology, now is the time for executives to make this assessment. Engaging deeply with the scientific and technical experts could pay huge dividends.

Ashley Montanaro is co-founder of UK quantum startup Phasecraft and Professor of Quantum Computation at the University of Bristol.
"
15,237
2,021
"The 2021 machine learning, AI, and data landscape | VentureBeat"
"https://venturebeat.com/2021/10/16/the-2021-machine-learning-ai-and-data-landscape"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The 2021 machine learning, AI, and data landscape Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Just when you thought it couldn’t grow any more explosively, the data/AI landscape just did: the rapid pace of company creation, exciting new product and project launches, a deluge of VC financings , unicorn creation, IPOs, etc. It has also been a year of multiple threads and stories intertwining. One story has been the maturation of the ecosystem, with market leaders reaching large scale and ramping up their ambitions for global market domination, in particular through increasingly broad product offerings. Some of those companies, such as Snowflake, have been thriving in public markets (see our MAD Public Company Index ), and a number of others (Databricks, Dataiku, DataRobot, etc.) have raised very large ( or in the case of Databricks, gigantic ) rounds at multi-billion valuations and are knocking on the IPO door (see our Emerging MAD company Index ). But at the other end of the spectrum, this year has also seen the rapid emergence of a whole new generation of data and ML startups. Whether they were founded a few years or a few months ago, many experienced a growth spurt in the past year or so. Part of it is due to a rabid VC funding environment and part of it, more fundamentally, is due to inflection points in the market. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! In the past year, there’s been less headline-grabbing discussion of futuristic applications of AI (self-driving vehicles, etc.), and a bit less AI hype as a result. Regardless, data and ML/AI-driven application companies have continued to thrive, particularly those focused on enterprise use trend cases. Meanwhile, a lot of the action has been happening behind the scenes on the data and ML infrastructure side, with entirely new categories (data observability, reverse ETL, metrics stores, etc.) appearing or drastically accelerating. To keep track of this evolution, this is our eighth annual landscape and “state of the union” of the data and AI ecosystem — coauthored this year with my FirstMark colleague John Wu. (For anyone interested, here are the prior versions: 2012 , 2014 , 2016 , 2017 , 2018 , 2019: Part I and Part II , and 2020. 
For those who have remarked over the years how insanely busy the chart is, you'll love our new acronym: Machine learning, Artificial intelligence, and Data (MAD) — this is now officially the MAD landscape!

We've learned over the years that those posts are read by a broad group of people, so we have tried to provide a little bit for everyone — a macro view that will hopefully be interesting and approachable to most, and then a slightly more granular overview of trends in data infrastructure and ML/AI for people with a deeper familiarity with the industry.

Quick notes: My colleague John and I are early-stage VCs at FirstMark, and we invest very actively in the data/AI space. Our portfolio companies are noted with an (*) in this post.

Let's dig in.

The macro view: Making sense of the ecosystem's complexity

Let's start with a high-level view of the market. As the number of companies in the space keeps increasing every year, the inevitable questions are: Why is this happening? How long can it keep going? Will the industry go through a wave of consolidation?

Rewind: The megatrend

Readers of prior versions of this landscape will know that we are relentlessly bullish on the data and AI ecosystem. As we said in prior years, the fundamental trend is that every company is becoming not just a software company, but also a data company.

Historically, and still today in many organizations, data has meant transactional data stored in relational databases, and perhaps a few dashboards for basic analysis of what happened to the business in recent months. But companies are now marching towards a world where data and artificial intelligence are embedded in myriad internal processes and external applications, both for analytical and operational purposes. This is the beginning of the era of the intelligent, automated enterprise — where company metrics are available in real time, mortgage applications get automatically processed, AI chatbots provide customer support 24/7, churn is predicted, cyber threats are detected in real time, and supply chains automatically adjust to demand fluctuations.

This fundamental evolution has been powered by dramatic advances in underlying technology — in particular, a symbiotic relationship between data infrastructure on the one hand and machine learning and AI on the other. Both areas have had their own separate history and constituencies, but have increasingly operated in lockstep over the past few years. The first wave of innovation was the "Big Data" era, in the early 2010s, where innovation focused on building technologies to harness the massive amounts of digital data created every day. Then, it turned out that if you applied big data to some decade-old AI algorithms (deep learning), you got amazing results, and that triggered the whole current wave of excitement around AI. In turn, AI became a major driver for the development of data infrastructure: If we can build all those applications with AI, then we're going to need better data infrastructure — and so on and so forth.

Fast-forward to 2021: The terms themselves (big data, AI, etc.) have experienced the ups and downs of the hype cycle, and today you hear a lot of conversations around automation, but fundamentally this is all the same megatrend.

The big unlock

A lot of today's acceleration in the data/AI space can be traced to the rise of cloud data warehouses (and their lakehouse cousins — more on this later) over the past few years.
It is ironic because data warehouses address one of the most basic, pedestrian, but also fundamental needs in data infrastructure: Where do you store it all? Storage and processing are at the bottom of the data/AI "hierarchy of needs" — see Monica Rogati's famous blog post here — meaning, what you need to have in place before you can do any fancier stuff like analytics and AI. You'd figure that 15+ years into the big data revolution, that need had been solved a long time ago, but it hadn't.

In retrospect, the initial success of Hadoop was a bit of a head-fake for the space — Hadoop, the OG big data technology, did try to solve the storage and processing layer. It did play a really important role in terms of conveying the idea that real value could be extracted from massive amounts of data, but its overall technical complexity ultimately limited its applicability to a small set of companies, and it never really achieved the market penetration that even the older data warehouses (e.g., Vertica) had a few decades ago.

Today, cloud data warehouses (Snowflake, Amazon Redshift, and Google BigQuery) and lakehouses (Databricks) provide the ability to store massive amounts of data in a way that's useful, not completely cost-prohibitive, and doesn't require an army of very technical people to maintain. In other words, after all these years, it is now finally possible to store and process big data. That is a big deal and has proven to be a major unlock for the rest of the data/AI space, for several reasons.

First, the rise of data warehouses considerably increases market size not just for its category, but for the entire data and AI ecosystem. Because of their ease of use and consumption-based pricing (where you pay as you go), data warehouses become the gateway to every company becoming a data company. Whether you're a Global 2000 company or an early-stage startup, you can now get started building your core data infrastructure with minimal pain. (Even FirstMark, a venture firm with several billion under management and 20-ish team members, has its own Snowflake instance.)

Second, data warehouses have unlocked an entire ecosystem of tools and companies that revolve around them: ETL, ELT, reverse ETL, warehouse-centric data quality tools, metrics stores, augmented analytics, etc. Many refer to this ecosystem as the "modern data stack" (which we discussed in our 2020 landscape). A number of founders saw the emergence of the modern data stack as an opportunity to launch new startups, and it is no surprise that a lot of the feverish VC funding activity over the last year has focused on modern data stack companies. Startups that were early to the trend (and played a pivotal role in defining the concept) are now reaching scale, including DBT Labs, a provider of transformation tools for analytics engineers (see our Fireside Chat with Tristan Handy, CEO of DBT Labs, and Jeremiah Lowin, CEO of Prefect), and Fivetran, a provider of automated data integration solutions that streams data into data warehouses (see our Fireside Chat with George Fraser, CEO of Fivetran), both of which raised large rounds recently (see the Financings section).

Third, because they solve the fundamental storage layer, data warehouses liberate companies to start focusing on high-value projects that appear higher in the hierarchy of data needs. Now that you have your data stored, it's easier to focus in earnest on other things like real-time processing, augmented analytics, or machine learning.
This in turn increases the market demand for all sorts of other data and AI tools and platforms. A flywheel gets created where more customer demand creates more innovation from data and ML infrastructure companies.

As they have such a direct and indirect impact on the space, data warehouses are an important bellwether for the entire data industry — as they grow, so does the rest of the space. The good news for the data and AI industry is that data warehouses and lakehouses are growing very fast, at scale. Snowflake, for example, showed 103% year-over-year growth in its most recent Q2 results, with an incredible net revenue retention of 169% (which means that existing customers keep using and paying for Snowflake more and more over time). Snowflake is targeting $10 billion in revenue by 2028, and there's a real possibility it could get there sooner. Interestingly, with consumption-based pricing, where revenues start flowing only after the product is fully deployed, the company's current customer traction could be well ahead of its more recent revenue numbers.

This could certainly be just the beginning of how big data warehouses could become. Some observers believe that data warehouses and lakehouses, collectively, could get to 100% market penetration over time (meaning, every relevant company has one), in a way that was never true for prior data technologies like traditional data warehouses such as Vertica (too expensive and cumbersome to deploy) and Hadoop (too experimental and technical). While this doesn't mean that every data warehouse vendor and every data startup, or even market segment, will be successful, directionally this bodes incredibly well for the data/AI industry as a whole.

The titanic shock: Snowflake vs. Databricks

Snowflake has been the poster child of the data space recently. Its IPO in September 2020 was the biggest software IPO ever (we covered it at the time in our Quick S-1 Teardown: Snowflake). At the time of writing, and after some ups and downs, it is a $95 billion market cap public company. However, Databricks is now emerging as a major industry rival. On August 31, the company announced a massive $1.6 billion financing round at a $38 billion valuation, just a few months after a $1 billion round announced in February 2021 (at a measly $28 billion valuation).

Up until recently, Snowflake and Databricks were in fairly different segments of the market (and in fact were close partners for a while). Snowflake, as a cloud data warehouse, is mostly a database to store and process large amounts of structured data — meaning, data that can fit neatly into rows and columns. Historically, it's been used to enable companies to answer questions about past and current performance ("which were our fastest growing regions last quarter?") by plugging in business intelligence (BI) tools. Like other databases, it leverages SQL, a very popular and accessible query language, which makes it usable by millions of potential users around the world.

Databricks came from a different corner of the data world. It started in 2013 to commercialize Spark, an open source framework for processing large volumes of generally unstructured data (any kind of text, audio, video, etc.). Spark users used the framework to build and process what became known as "data lakes," where they would dump just about any kind of data without worrying about structure or organization.
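To make the structured-warehouse versus unstructured-lake distinction concrete, here is a small illustrative sketch in Python. It uses SQLite and in-memory JSON strings purely as stand-ins for a warehouse table and a data lake dump; it is not Snowflake or Databricks code, and the table and event fields are made up.

```python
import sqlite3
import json

# Warehouse-style: structured rows and columns, queried with SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("EMEA", 120.0), ("EMEA", 80.0), ("APAC", 200.0)],
)
for region, total in conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region"
):
    print("warehouse:", region, total)

# Lake-style: raw, loosely structured events dumped as-is, with structure
# imposed only at read time (schema-on-read).
raw_events = [
    '{"type": "click", "page": "/pricing"}',
    '{"type": "support_call", "transcript": "My invoice looks wrong..."}',
]
for line in raw_events:
    event = json.loads(line)
    print("lake:", event.get("type"), "->", list(event.keys()))
```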
A primary use of data lakes was to train ML/AI applications, enabling companies to answer questions about the future ("which customers are the most likely to purchase next quarter?" — i.e., predictive analytics). To help customers with their data lakes, Databricks created Delta, and to help them with ML/AI, it created MLflow. For the whole story on that journey, see my Fireside Chat with Ali Ghodsi, CEO, Databricks.

More recently, however, the two companies have converged towards one another. Databricks started adding data warehousing capabilities to its data lakes, enabling data analysts to run standard SQL queries, as well as adding business intelligence tools like Tableau or Microsoft Power BI. The result is what Databricks calls the lakehouse — a platform meant to combine the best of both data warehouses and data lakes.

As Databricks made its data lakes look more like data warehouses, Snowflake has been making its data warehouses look more like data lakes. It announced support for unstructured data such as audio, video, PDFs, and imaging data in November 2020 and launched it in preview just a few days ago. And where Databricks has been adding BI to its AI capabilities, Snowflake is adding AI to its BI capabilities. It has been building close partnerships with top enterprise AI platforms: Snowflake invested in Dataiku and named it its Data Science Partner of the Year, and it also invested in ML platform rival DataRobot.

Ultimately, both Snowflake and Databricks want to be the center of all things data: one repository to store all data, whether structured or unstructured, and run all analytics, whether historical (business intelligence) or predictive (data science, ML/AI). Of course, there's no lack of other competitors with a similar vision. The cloud hyperscalers in particular have their own data warehouses, as well as a full suite of analytical tools for BI and AI, and many other capabilities, in addition to massive scale. For example, listen to this great episode of the Data Engineering Podcast about GCP's data and analytics capabilities.

Both Snowflake and Databricks have had very interesting relationships with cloud vendors, both as friend and foe. Famously, Snowflake grew on the back of AWS (despite AWS's competitive product, Redshift) for years before expanding to other cloud platforms. Databricks built a strong partnership with Microsoft Azure, and now touts its multi-cloud capabilities to help customers avoid cloud vendor lock-in. For many years, and still to this day to some extent, detractors emphasized that both Snowflake's and Databricks' business models effectively resell underlying compute from the cloud vendors, putting their gross margins at the mercy of whatever pricing decisions the hyperscalers make. Watching the dance between the cloud providers and the data behemoths will be a defining story of the next five years.

Bundling, unbundling, consolidation?

Given the rise of Snowflake and Databricks, some industry observers are asking if this is the beginning of a long-awaited wave of consolidation in the industry: functional consolidation, as large companies bundle an increasing amount of capabilities into their platforms and gradually make smaller startups irrelevant, and/or corporate consolidation, as large companies buy smaller ones or drive them out of business.

Certainly, functional consolidation is happening in the data and AI space, as industry leaders ramp up their ambitions.
This is clearly the case for Snowflake and Databricks, and the cloud hyperscalers, as just discussed. But others have big plans as well. As they grow, companies want to bundle more and more functionality — nobody wants to be a single-product company.

For example, Confluent, a platform for streaming data that just went public in June 2021, wants to go beyond the real-time data use cases it is known for and "unify the processing of data in motion and data at rest" (see our Quick S-1 Teardown: Confluent). As another example, Dataiku* natively covers all the functionality otherwise offered by dozens of specialized data and AI infrastructure startups, from data prep to machine learning, DataOps, MLOps, visualization, AI explainability, etc., all bundled in one platform, with a focus on democratization and collaboration (see our Fireside Chat with Florian Douetteau, CEO, Dataiku).

Arguably, the rise of the "modern data stack" is another example of functional consolidation. At its core, it is a de facto alliance among a group of companies (mostly startups) that, as a group, functionally cover all the different stages of the data journey from extraction to the data warehouse to business intelligence — the overall goal being to offer the market a coherent set of solutions that integrate with one another.

For the users of those technologies, this trend towards bundling and convergence is healthy, and many will welcome it with open arms. As it matures, it is time for the data industry to evolve beyond its big technology divides: transactional vs. analytical, batch vs. real-time, BI vs. AI. These somewhat artificial divides have deep roots, both in the history of the data ecosystem and in technology constraints. Each segment had its own challenges and evolution, resulting in a different tech stack and a different set of vendors. This has led to a lot of complexity for the users of those technologies. Engineers have had to stitch together suites of tools and solutions and maintain complex systems that often end up looking like Rube Goldberg machines.

As they continue to scale, we expect industry leaders to accelerate their bundling efforts and keep pushing messages such as "unified data analytics." This is good news for Global 2000 companies in particular, which have been the prime target customer for the bigger, bundled data and AI platforms. Those companies have both a tremendous amount to gain from deploying modern data infrastructure and ML/AI, and at the same time much more limited access to the top data and ML engineering talent needed to build or assemble data infrastructure in-house (as such talent tends to prefer to work either at Big Tech companies or promising startups, on the whole).

However, as much as Snowflake and Databricks would like to become the single vendor for all things data and AI, we believe that companies will continue to work with multiple vendors, platforms, and tools, in whichever combination best suits their needs. The key reason: The pace of innovation is just too explosive in the space for things to remain static for too long. Founders launch new startups; Big Tech companies create internal data/AI tools and then open-source them; and for every established technology or product, a new one seems to emerge weekly. Even the data warehouse space, possibly the most established segment of the data ecosystem currently, has new entrants like Firebolt, promising vastly superior performance.
While the big bundled platforms have Global 2000 enterprises as their core customer base, there is a whole ecosystem of tech companies, both startups and Big Tech, that are avid consumers of all the new tools and technologies, giving the startups behind them a great initial market. Those companies do have access to the right data and ML engineering talent, and they are willing and able to do the stitching of best-of-breed new tools to deliver the most customized solutions.

Meanwhile, just as the big data warehouse and data lake vendors are pushing their customers towards centralizing all things on top of their platforms, new frameworks such as the data mesh emerge, which advocate for a decentralized approach, where different teams are responsible for their own data product. While there are many nuances, one implication is to evolve away from a world where companies just move all their data to one big central repository. Should it take hold, the data mesh could have a significant impact on architectures and the overall vendor landscape (more on the data mesh later in this post).

Beyond functional consolidation, it is also unclear how much corporate consolidation (M&A) will happen in the near future. We're likely to see a few very large, multi-billion dollar acquisitions as big players are eager to make big bets in this fast-growing market to continue building their bundled platforms. However, the high valuations of tech companies in the current market will probably continue to deter many potential acquirers. For example, everybody's favorite industry rumor has been that Microsoft would want to acquire Databricks. However, because the company could fetch a $100 billion or more valuation in public markets, even Microsoft may not be able to afford it.

There is also a voracious appetite for buying smaller startups throughout the market, particularly as later-stage startups keep raising and have plenty of cash on hand. However, there is also voracious interest from venture capitalists to continue financing those smaller startups. It is rare for promising data and AI startups these days to not be able to raise the next round of financing. As a result, comparatively few M&A deals get done, as many founders and their VCs want to keep turning the next card, as opposed to joining forces with other companies, and have the financial resources to do so.

Let's dive further into financing and exit trends.

Financings, IPOs, M&A: A crazy market

As anyone who follows the startup market knows, it's been crazy out there. Venture capital has been deployed at an unprecedented pace, surging 157% year-on-year globally to $156 billion in Q2 2021, according to CB Insights. Ever higher valuations led to the creation of 136 newly minted unicorns in the first half of 2021 alone, and the IPO window has been wide open, with public financings (IPOs, direct listings, SPACs) up 687% (496 vs. 63) in the January 1 to June 1, 2021 period versus the same period in 2020. In this general context of market momentum, data and ML/AI have been hot investment categories once again this past year.

Public markets

Not so long ago, there were hardly any "pure play" data/AI companies listed in public markets. However, the list is growing quickly after a strong year for IPOs in the data/AI world. We started a public market index to help track the performance of this growing category of public companies — see our MAD Public Company Index (update coming soon).
On the IPO front, particularly noteworthy were UiPath, an RPA and AI automation company, and Confluent, a data infrastructure company focused on real-time streaming data (see our Confluent S-1 teardown for our analysis). Other notable IPOs were C3.ai, an AI platform (see our C3 S-1 teardown), and Couchbase, a NoSQL database. Several vertical AI companies also had noteworthy IPOs: SentinelOne, an autonomous AI endpoint security platform; TuSimple, a self-driving truck developer; Zymergen, a biomanufacturing company; Recursion, an AI-driven drug discovery company; and Darktrace, "a world-leading AI for cyber-security" company.

Meanwhile, existing public data/AI companies have continued to perform strongly. While they're both off their all-time highs, Snowflake is a formidable $95 billion market cap company and, for all the controversy, Palantir is a $55 billion market cap company, at the time of writing. Both Datadog and MongoDB are at their all-time highs. Datadog is now a $45 billion market cap company (an important lesson for investors). MongoDB is a $33 billion company, propelled by the rapid growth of its cloud product, Atlas.

Overall, as a group, data and ML/AI companies have vastly outperformed the broader market. And they continue to command high premiums — of the top 10 companies with the highest market capitalization to revenue multiple, four (including the top two) are data/AI companies.

Above: Source: Jamin Ball, Clouded Judgement, September 24, 2021

Private markets

The frothiness of the venture capital market is a topic for another blog post (just a consequence of macroeconomics and low interest rates, or a reflection of the fact that we have truly entered the deployment phase of the internet?). But suffice to say that, in the context of an overall booming VC market, investors have shown tremendous enthusiasm for data/AI startups.

According to CB Insights, in the first half of 2021, investors poured $38 billion into AI startups, surpassing the full 2020 amount of $36 billion with half a year to go. This was driven by 50+ mega-sized rounds of $100 million or more, also a new high. Forty-two AI companies reached unicorn valuations in the first half of the year, compared to only 11 for the entirety of 2020.

One inescapable feature of the 2020-2021 VC market has been the rise of crossover funds, such as Tiger Global, Coatue, Altimeter, Dragoneer, or D1, and other mega-funds such as SoftBank or Insight. While those funds have been active across the internet and software landscape, data and ML/AI has clearly been a key investing theme. As an example, Tiger Global seems to love data/AI companies. Just in the last 12 months, the New York hedge fund has written big checks into many of the companies appearing on our landscape, including, for example, Deep Vision, Databricks, Dataiku*, DataRobot, Imply, Prefect, Gong, PathAI, Ada*, Vast Data, Scale AI, Redis Labs, 6sense, TigerGraph, UiPath, Cockroach Labs*, Hyperscience*, and a number of others.

This exceptional funding environment has mostly been great news for founders. Many data/AI companies found themselves the object of preemptive rounds and bidding wars, giving founders full power to control their fundraising processes. As VC firms competed to invest, round sizes and valuations escalated dramatically. Series A round sizes used to be in the $8-$12 million range just a few years ago. They are now routinely in the $15-$20 million range.
Series A valuations that used to be in the $25-$45 million (pre-money) range now often reach $80-$120 million — valuations that would have been considered a great series B valuation just a few years ago. On the flip side, the flood of capital has led to an ever-tighter job market, with fierce competition for data, machine learning, and AI talent among many well-funded startups, and corresponding compensation inflation.

Another downside: As VCs aggressively invested in emerging sectors up and down the data stack, often betting on future growth over existing commercial traction, some categories went from nascent to crowded very rapidly — reverse ETL, data quality, data catalogs, data annotation, and MLOps.

Regardless, since our last landscape, an unprecedented number of data/AI companies became unicorns, and those that were already unicorns became even more highly valued, with a couple of decacorns (Databricks, Celonis). Some noteworthy unicorn-type financings, in rough reverse chronological order:

- Fivetran, an ETL company, raised $565 million at a $5.6 billion valuation
- Matillion, a data integration company, raised $150 million at a $1.5 billion valuation
- Neo4j, a graph database provider, raised $325 million at a more than $2 billion valuation
- Databricks, a provider of data lakehouses, raised $1.6 billion at a $38 billion valuation
- Dataiku*, a collaborative enterprise AI platform, raised $400 million at a $4.6 billion valuation
- DBT Labs (fka Fishtown Analytics), a provider of open-source analytics engineering tools, raised a $150 million series C
- DataRobot, an enterprise AI platform, raised $300 million at a $6 billion valuation
- Celonis, a process mining company, raised a $1 billion series D at an $11 billion valuation
- Anduril, an AI-heavy defense technology company, raised a $450 million round at a $4.6 billion valuation
- Gong, an AI platform for sales team analytics and coaching, raised $250 million at a $7.25 billion valuation
- Alation, a data discovery and governance company, raised a $110 million series D at a $1.2 billion valuation
- Ada*, an AI chatbot company, raised a $130 million series C at a $1.2 billion valuation
- Signifyd, an AI-based fraud protection software company, raised $205 million at a $1.34 billion valuation
- Redis Labs, a real-time data platform, raised a $310 million series G at a $2 billion valuation
- Sift, an AI-first fraud prevention company, raised $50 million at a valuation of over $1 billion
- Tractable, an AI-first insurance company, raised $60 million at a $1 billion valuation
- SambaNova Systems, a specialized AI semiconductor and computing platform, raised $676 million at a $5 billion valuation
- Scale AI, a data annotation company, raised $325 million at a $7 billion valuation
- Vectra, a cybersecurity AI company, raised $130 million at a $1.2 billion valuation
- Shift Technology, an AI-first software company built for insurers, raised $220 million
- Dataminr, a real-time AI risk detection platform, raised $475 million
- Feedzai, a fraud detection company, raised a $200 million round at a valuation of over $1 billion
- Cockroach Labs*, a cloud-native SQL database provider, raised $160 million at a $2 billion valuation
- Starburst Data, an SQL-based data query engine, raised a $100 million round at a $1.2 billion valuation
- K Health, an AI-first mobile virtual healthcare provider, raised $132 million at a $1.5 billion valuation
- Graphcore, an AI chipmaker, raised $222 million
- Forter, a fraud detection software company, raised a $125 million round at a $1.3 billion valuation
Acquisitions

As mentioned above, acquisitions in the MAD space have been robust but haven't spiked as much as one would have guessed, given the hot market. The unprecedented amount of cash floating in the ecosystem cuts both ways: More companies have strong balance sheets to potentially acquire others, but many potential targets also have access to cash, whether in private/VC markets or in public markets, and are less likely to want to be acquired.

Of course, there have been several very large acquisitions: Nuance, a public speech and text recognition company (with a particular focus on healthcare), is in the process of getting acquired by Microsoft for almost $20 billion (making it Microsoft's second-largest acquisition ever, after LinkedIn); Blue Yonder, an AI-first supply chain software company for retail, manufacturing, and logistics customers, was acquired by Panasonic for up to $8.5 billion; Segment, a customer data platform, was acquired by Twilio for $3.2 billion; Kustomer, a CRM that enables businesses to effectively manage all customer interactions across channels, was acquired by Facebook for $1 billion; and Turbonomic, an "AI-powered Application Resource Management" company, was acquired by IBM for between $1.5 billion and $2 billion.

There were also a couple of take-private acquisitions of public companies by private equity firms: Cloudera, a formerly high-flying data platform, was acquired by Clayton Dubilier & Rice and KKR, perhaps the official end of the Hadoop era; and Talend, a data integration provider, was taken private by Thoma Bravo.

Some other notable acquisitions of companies that appeared on earlier versions of this MAD landscape: ZoomInfo acquired Chorus.ai and Everstring; DataRobot acquired Algorithmia; Cloudera acquired Cazena; Relativity acquired Text IQ*; Datadog acquired Sqreen and Timber*; SmartEye acquired Affectiva; Facebook acquired Kustomer; ServiceNow acquired Element AI; Vista Equity Partners acquired Gainsight; AVEVA acquired OSIsoft; and American Express acquired Kabbage.

What's new for the 2021 MAD landscape

Given the explosive pace of innovation, company creation, and funding in 2020-21, particularly in data infrastructure and MLOps, we've had to change things around quite a bit in this year's landscape.
One significant structural change: As we couldn't fit it all in one category anymore, we broke "Analytics and Machine Intelligence" into two separate categories, "Analytics" and "Machine Learning & Artificial Intelligence."

We added several new categories.

In "Infrastructure," we added:
- "Reverse ETL" — products that funnel data from the data warehouse back into SaaS applications
- "Data Observability" — a rapidly emerging component of DataOps focused on understanding and troubleshooting the root of data quality issues, with data lineage as a core foundation
- "Privacy & Security" — data privacy is increasingly top of mind, and a number of startups have emerged in the category

In "Analytics," we added:
- "Data Catalogs & Discovery" — one of the busiest categories of the last 12 months; these are products that enable users (both technical and non-technical) to find and manage the datasets they need
- "Augmented Analytics" — BI tools are taking advantage of NLG/NLP advances to automatically generate insights, particularly democratizing data for less technical audiences
- "Metrics Stores" — a new entrant in the data stack that provides a central, standardized place to serve key business metrics
- "Query Engines"

In "Machine Learning and AI," we broke down several MLOps categories into more granular subcategories:
- "Model Building"
- "Feature Stores"
- "Deployment and Production"

In "Open Source," we added:
- "Format"
- "Orchestration"
- "Data Quality & Observability"

Another significant evolution: In the past, we tended to overwhelmingly feature on the landscape the more established companies — growth-stage startups (series C or later) as well as public companies. However, given the emergence of the new generation of data/AI companies mentioned earlier, this year we've featured a lot more early startups (series A, sometimes seed) than ever before.

Without further ado, here's the landscape. The chart is available in full size and high resolution, and the full list is available in spreadsheet format; despite how busy the landscape is, we cannot possibly fit every interesting company on the chart itself, so the spreadsheet lists not only all the companies in the landscape but also hundreds more.

Key trends in data infrastructure

In last year's landscape, we identified some of the key data infrastructure trends of 2020. As a reminder, here are some of the trends we wrote about last year (2020):
- The modern data stack goes mainstream
- ETL vs. ELT
- Automation of data engineering?
- Rise of the data analyst
- Data lakes and data warehouses merging?
- Complexity remains

Of course, the 2020 write-up is less than a year old, and those are multi-year trends that are still very much developing and will continue to do so. Now, here's our round-up of some key trends for this year (2021):
- The data mesh
- A busy year for DataOps
- It's time for real time
- Metrics stores
- Reverse ETL
- Data sharing

The data mesh

Everyone's new favorite topic of 2021 is the "data mesh," and it's been fun to see it debated on Twitter among the (admittedly pretty small) group of people that obsess about those topics. The concept was first introduced by Zhamak Dehghani in 2019 (see her original article, "How to Move Beyond a Monolithic Data Lake to a Distributed Data Mesh"), and it's gathered a lot of momentum throughout 2020 and 2021.

The data mesh concept is in large part an organizational idea.
A standard approach to building data infrastructure and teams so far has been centralization: one big platform, managed by one data team, that serves the needs of business users. This has advantages but can also create a number of issues (bottlenecks, etc.). The general concept of the data mesh is decentralization — create independent data teams that are responsible for their own domain and provide data "as a product" to others within the organization. Conceptually, this is not entirely different from the concept of microservices that has become familiar in software engineering, but applied to the data domain.

The data mesh has a number of important practical implications that are being actively debated in data circles. Should it take hold, it would be a great tailwind for startups that provide the kind of tools that are mission-critical in a decentralized data stack. Starburst, a SQL query engine to access and analyze data across repositories, has rebranded itself as "the analytics engine for the data mesh." It is even sponsoring Dehghani's new book on the topic. Technologies like orchestration engines (Airflow, Prefect, Dagster) that help manage complex pipelines would become even more mission-critical. See my Fireside Chat with Nick Schrock (Founder & CEO, Elementl), the company behind the orchestration engine Dagster.

Tracking data across repositories and pipelines would become even more essential for troubleshooting purposes, as well as compliance and governance, reinforcing the need for data lineage. The industry is getting ready for this world, with, for example, OpenLineage, a new cross-industry initiative to standardize data lineage collection. See my Fireside Chat with Julien Le Dem, CTO of Datakin*, the company that helped start the OpenLineage initiative.

(For anyone interested, we will host Zhamak Dehghani at Data Driven NYC on October 14, 2021. It will be a Zoom session, open to everyone.)

A busy year for DataOps

While the concept of DataOps has been floating around for years (and we mentioned it in previous versions of this landscape), activity has really picked up recently. As tends to be the case for newer categories, the definition of DataOps is somewhat nebulous. Some view it as the application of DevOps (from the world of software engineering) to the world of data; others view it more broadly as anything that involves building and maintaining data pipelines and ensuring that all data producers and consumers can do what they need to do, whether finding the right dataset (through a data catalog) or deploying a model in production. Regardless, just like DevOps, it is a combination of methodology, processes, people, platforms, and tools.

The broad context is that data engineering tools and practices are still very much behind the level of sophistication and automation of their software engineering cousins. The rise of DataOps is one of the examples of what we mentioned earlier in the post: As core needs around storage and processing of data are now adequately addressed, and data/AI is becoming increasingly mission-critical in the enterprise, the industry is naturally evolving towards the next levels of the hierarchy of data needs, building better tools and practices to make sure data infrastructure can work and be maintained reliably and at scale.
A whole ecosystem of early-stage DataOps startups that sprung up recently, covering different parts of the category, but with more or less the same ambition of becoming the “Datadog of the data world” (while Datadog is sometimes used for DataOps purposes and may enter the space at one point or another, it has been historically focused on software engineering and operations). Startups are jockeying to define their sub-category, so a lot of terms are floating around, but here are some of the key concepts. Data observability is the general concept of using automated monitoring, alerting, and triaging to eliminate “data downtime,” a term coined by Monte Carlo Data, a vendor in the space (alongside others like BigEye and Databand). Observability has two core pillars. One is data lineage, which is the ability to follow the path of data through pipelines and understand where issues arise, and where data comes from (for compliance purposes). Data lineage has its own set of specialized startups like Datakin* and Manta. The other pillar is data quality, which has seen a rush of new entrants. Detecting quality issues in data is both essential and a lot thornier than in the world of software engineering, as each dataset is a little different. Different startups have different approaches. One is declarative, meaning that people can explicitly set rules for what is a quality dataset and what is not. This is the approach of Superconductive, the company behind the popular open-source project Great Expectations (see our Fireside Chat with Abe Gong, CEO, Superconductive ). Another approach relies more heavily on machine learning to automate the detection of quality issues (while still using some rules) — Anomalo being a startup with such an approach. A related emerging concept is data reliability engineering (DRE), which echoes the sister discipline of site reliability engineering (SRE) in the world of software infrastructure. DRE are engineers who solve operational/scale/reliability problems for data infrastructure. Expect more tooling (alerting, communication, knowledge sharing, etc.) to appear on the market to serve their needs. Finally, data access and governance is another part of DataOps (broadly defined) that has experienced a burst of activity. Growth stage startups like Collibra and Alation have been providing catalog capabilities for a few years now — basically an inventory of available data that helps data analysts find the data they need. However, a number of new entrants have joined the market more recently, including Atlan and Stemma, the commercial company behind the open source data catalog Amundsen (which started at Lyft). It’s time for real time “Real-time” or “streaming” data is data that is processed and consumed immediately after it’s generated. This is in opposition to “batch,” which has been the dominant paradigm in data infrastructure to date. One analogy we came up with to explain the difference: Batch is like blocking an hour to go through your inbox and replying to your email; streaming is like texting back and forth with someone. Real-time data processing has been a hot topic since the early days of the Big Data era, 10-15 years ago — notably, processing speed was a key advantage that precipitated the success of Spark (a micro-batching framework) over Hadoop MapReduce. However, for years, real-time data streaming was always the market segment that was “about to explode” in a very major way, but never quite did. 
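To ground the distinction, here is a minimal sketch of the streaming model using the open source kafka-python client; the broker address, topic name, and event fields are placeholders, and error handling is omitted for brevity.

```python
import json

from kafka import KafkaProducer, KafkaConsumer  # pip install kafka-python

BROKER = "localhost:9092"   # placeholder broker address
TOPIC = "page_views"        # hypothetical topic name

# Producer side: each event is published the moment it happens,
# rather than being collected into a nightly batch file.
producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"user_id": 123, "url": "/pricing", "ts": "2021-09-28T12:00:00Z"})
producer.flush()

# Consumer side: a downstream service processes events continuously as they
# arrive -- "texting back and forth" rather than batching up email.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for message in consumer:
    print(message.value)  # react to each event in near real time
```

The point is that each event is produced and consumed on its own, as it occurs, rather than accumulating until the next scheduled batch run.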
Some industry observers argued that the number of applications for real-time data is, perhaps counter-intuitively, fairly limited, revolving around a finite number of use cases like online fraud detection, online advertising, Netflix-style content recommendations, or cybersecurity. The resounding success of the Confluent IPO has proved the naysayers wrong. Confluent is now a $17 billion market cap company at the time of writing, having nearly doubled since its June 24, 2021 IPO. Confluent is the company behind Kafka, an open source data streaming project originally developed at LinkedIn. Over the years, the company evolved into a full-scale data streaming platform that enables customers to access and manage data as continuous, real-time streams (again, our S-1 teardown is here ). Beyond Confluent, the whole real-time data ecosystem has accelerated. Real-time data analytics, in particular, has seen a lot of activity. Just a few days ago, ClickHouse, a real-time analytics database that was originally an open source project launched by Russian search engine Yandex, announced that it has become a commercial, U.S.-based company funded with $50 million in venture capital. Earlier this year, Imply, another real-time analytics platform based on the Druid open source database project, announced a $70 million round of financing. Materialize is another very interesting company in the space — see our Fireside Chat with Arjun Narayan, CEO, Materialize. Upstream from data analytics, emerging players help simplify real-time data pipelines. Meroxa focuses on connecting relational databases to data warehouses in real time — see our Fireside Chat with DeVaris Brown, CEO, Meroxa. Estuary* focuses on unifying the real-time and batch paradigms in an effort to abstract away complexity. Metrics stores Data and data use increased in both frequency and complexity at companies over the last few years. With that increase in complexity comes an accompanied increase in headaches caused by data inconsistencies. For any specific metric, any slight derivation in the metric, whether caused by dimension, definition, or something else, can cause misaligned outputs. Teams perceived to be working based off of the same metrics could be working off different cuts of data entirely or metric definitions may slightly shift between times when analysis is conducted leading to different results, sowing distrust when inconsistencies arise. Data is only useful if teams can trust that the data is accurate, every time they use it. This has led to the emergence of the metric store which Benn Stancil, the chief analytics officer at Mode, labeled the missing piece of the modern data stack. Home-grown solutions that seek to centralize where metrics are defined were announced at tech companies including at AirBnB, where Minerva has a vision of “define once, use anywhere,” and at Pinterest. These internal metrics stores serve to standardize the definitions of key business metrics and all of its dimensions, and provide stakeholders with accurate, analysis-ready data sets based on those definitions. By centralizing the definition of metrics, these stores help teams build trust in the data they are using and democratize cross-functional access to metrics, driving data alignment across the company. The metrics store sits on top of the data warehouse and informs the data sent to all downstream applications where data is consumed, including business intelligence platforms, analytics and data science tools, and operational applications. 
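As a rough illustration of "define once, use anywhere," here is a hypothetical sketch of a centrally defined metric; the structure, field names, and SQL are invented for this example rather than drawn from any particular metrics-store product.

```python
from typing import Optional

# Hypothetical metric definition -- names and structure are illustrative only.
WEEKLY_ACTIVE_USERS = {
    "name": "weekly_active_users",
    "description": "Distinct users with at least one session in the trailing 7 days",
    "sql": (
        "SELECT {dims}COUNT(DISTINCT user_id) AS weekly_active_users "
        "FROM analytics.sessions "
        "WHERE session_start >= DATEADD('day', -7, CURRENT_DATE) "
        "{group_by}"
    ),
    "dimensions": ["country", "plan_tier"],  # approved ways to slice the metric
    "owner": "growth-analytics",
}


def compile_metric(metric: dict, dimension: Optional[str] = None) -> str:
    """Render the canonical SQL so every consumer (BI tool, notebook,
    operational app) derives the metric from the same definition."""
    if dimension and dimension not in metric["dimensions"]:
        raise ValueError(f"{dimension!r} is not an approved dimension")
    dims = f"{dimension}, " if dimension else ""
    group_by = f"GROUP BY {dimension}" if dimension else ""
    return metric["sql"].format(dims=dims, group_by=group_by)


print(compile_metric(WEEKLY_ACTIVE_USERS, dimension="country"))
```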
Teams define key business metrics in the metric store, ensuring that anybody using a specific metric will derive it using consistent definitions. Metrics stores like Minerva also ensure that data is consistent historically, backfilling automatically if business logic is changed. Finally, the metrics store serves the metrics to the data consumer in the standardized, validated formats. The metrics store enables data consumers on different teams to no longer have to build and maintain their own versions of the same metric, and can rely on one single centralized source of truth. Some interesting startups building metric stores include Transform , Trace *, and Supergrain. Reverse ETL It’s certainly been a busy year in the world of ETL/ELT — the products that aim to extract data from a variety of sources (whether databases or SaaS products) and load them into cloud data warehouses. As mentioned, Fivetran became a $5.6 billion company; meanwhile, newer entrants Airbyte (an open source version) raised a $26 million series A and Meltano spun out of GitLab. However, one key development in the modern data stack over the last year or so has been the emergence of reverse ETL as a category. With the modern data stack, data warehouses have become the single source of truth for all business data which has historically been spread across various application-layer business systems. Reverse ETL tooling sits on the opposite side of the warehouse from typical ETL/ELT tools and enables teams to move data from their data warehouse back into business applications like CRMs, marketing automation systems, or customer support platforms to make use of the consolidated and derived data in their functional business processes. Reverse ETLs have become an integral part of closing the loop in the modern data stack to bring unified data, but come with challenges due to pushing data back into live systems. With reverse ETLs, functional teams like sales can take advantage of up-to-date data enriched from other business applications like product engagement from tools like Pendo* to understand how a prospect is already engaging or from marketing programming from Marketo to weave a more coherent sales narrative. Reverse ETLs help break down data silos and drive alignment between functions by bringing centralized data from the data warehouse into systems that these functional teams already live in day-to-day. A number of companies in the reverse ETL space have received funding in the last year, including Census, Rudderstack, Grouparoo, Hightouch, Headsup, and Polytomic. Data sharing Another accelerating theme this year has been the rise of data sharing and data collaboration not just within companies, but also across organizations. Companies may want to share data with their ecosystem of suppliers, partners, and customers for a whole range of reasons, including supply chain visibility, training of machine learning models, or shared go-to-market initiatives. Cross-organization data sharing has been a key theme for “data cloud” vendors in particular: In May 2021, Google launched Analytics Hub , a platform for combining data sets and sharing data and insights, including dashboards and machine learning models, both inside and outside an organization. It also launched Datashare , a product more specifically targeting financial services and based on Analytics Hub. On the same day (!) in May 2021, Databricks announced Delta Sharing , an open source protocol for secure data sharing across organizations. 
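Delta Sharing ships with an open source Python connector, and consuming a shared table looks roughly like the sketch below; the profile path and the share, schema, and table names are placeholders, and the exact client surface may differ by version.

```python
import delta_sharing  # pip install delta-sharing

# A "profile" file is a small JSON credential issued by the data provider;
# the path and the share/schema/table names below are placeholders.
profile = "config/open-datasets.share"

# Discover what the provider has shared with us.
client = delta_sharing.SharingClient(profile)
for table in client.list_all_tables():
    print(table)

# Load one shared table straight into a pandas DataFrame,
# without first copying the data into our own warehouse.
table_url = f"{profile}#retail_share.sales.daily_orders"
df = delta_sharing.load_as_pandas(table_url)
print(df.head())
```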
In June 2021, Snowflake announced the general availability of its data marketplace, as well as additional capabilities for secure data sharing. There’s also a number of interesting startups in the space: Habr, a provider of enterprise data exchanges Crossbeam*, a partner ecosystem platform Enabling cross-organization collaboration is particularly strategic for data cloud providers because it offers the possibility of building an additional moat for their businesses. As competition intensifies and vendors try to beat each other on features and capabilities, a data-sharing platform could help create a network effect. The more companies join, say, the Snowflake Data Cloud and share their data with others, the more it becomes valuable to each new company that joins the network (and the harder it is to leave the network). Key trends in ML/AI In last year’s landscape , we had identified some of the key data infrastructure trends of 2020. As a reminder, here are some of the trends we wrote about LAST YEAR (2020) Boom time for data science and machine learning platforms (DSML) ML getting deployed and embedded The Year of NLP Now, here’s our round-up of some key trends for THIS YEAR (2021): Feature stores The rise of ModelOps AI content generation The continued emergence of a separate Chinese AI stack Research in artificial intelligence keeps on improving at a rapid pace. Some notable projects released or published in the last year include DeepMind’s Alphafold, which predicts what shapes proteins fold into, along with multiple breakthroughs from OpenAI including GPT-3, DALL-E, and CLIP. Additionally, startup funding has drastically accelerated across the machine learning stack, giving rise to a large number of point solutions. With the growing landscape, compatibility issues between solutions are likely to emerge as the machine learning stacks become increasingly complicated. Companies will need to make a decision between buying a comprehensive full-stack solution like DataRobot or Dataiku* versus trying to chain together best-in-breed point solutions. Consolidation across adjacent point solutions is also inevitable as the market matures and faster-growing companies hit meaningful scale. Feature stores Feature stores have become increasingly common in the operational machine learning stack since the idea was first introduced by Uber in 2017 , with multiple companies raising rounds in the past year to build managed feature stores including Tecton , Rasgo , Logical Clocks , and Kaskada. A feature (sometimes referred to as a variable or attribute) in machine learning is an individual measurable input property or characteristic, which could be represented as a column in a data snippet. Machine learning models could use anywhere from a single feature to upwards of millions. Historically, feature engineering had been done in a more ad-hoc manner, with increasingly more complicated models and pipelines over time. Engineers and data scientists often spent a lot of time re-extracting features from the raw data. Gaps between production and experimentation environments could also cause unexpected inconsistencies in model performance and behavior. Organizations are also more concerned with governance, reproducibility, and explainability of their machine learning models, and siloed features make that difficult in practice. Feature stores promote collaboration and help break down silos. 
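A bare-bones, plain-Python illustration (with invented field and function names) shows why recomputing features in two places invites training/serving skew, which is exactly the gap feature stores are meant to close.

```python
from datetime import datetime, timezone
from typing import List


# A feature definition written once -- names are made up purely for illustration.
def days_since_signup(signup_ts: datetime, as_of: datetime) -> float:
    """Feature: account age in days at a given point in time."""
    return (as_of - signup_ts).total_seconds() / 86400.0


# Offline: build training rows by applying the same definition historically.
def build_training_rows(events: List[dict]) -> List[dict]:
    return [
        {
            "user_id": e["user_id"],
            "days_since_signup": days_since_signup(e["signup_ts"], e["label_ts"]),
            "label": e["churned"],
        }
        for e in events
    ]


# Online: at serving time, compute the feature the same way for a live request.
def serving_features(user: dict) -> dict:
    return {
        "days_since_signup": days_since_signup(
            user["signup_ts"], datetime.now(timezone.utc)
        )
    }
```

Keeping one canonical definition like this, and serving it consistently to both the offline and online paths, is precisely what feature stores provide.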
They reduce the overhead complexity and standardize and reuse features by providing a single source of truth across both training (offline) and production (online). It acts as a centralized place to store the large volumes of curated features within an organization, runs the data pipelines which transform the raw data into feature values, and provides low latency read access directly via API. This enables faster development and helps teams both avoid work duplication and maintain consistent feature sets across engineers and between training and serving models. Feature stores also produce and surface metadata such as data lineage for features, health monitoring, drift for both features and online data, and more. The rise of ModelOps By this point, most companies recognize that taking models from experimentation to production is challenging, and models in use require constant monitoring and retraining as data shifts. According to IDC, 28% of all ML/AI projects have failed , and Gartner notes that 87% of data science projects never make it into production. Machine Learning Operations (MLOps), which we wrote about in 2019 , came about over the next few years as companies sought to close those gaps by applying DevOps best practices. MLOps seeks to streamline the rapid continuous development and deployment of models at scale, and according to Gartner , has hit a peak in the hype cycle. The new hot concept in AI operations is in ModelOps, a superset of MLOps which aims to operationalize all AI models including ML at a faster pace across every phase of the lifecycle from training to production. ModelOps covers both tools and processes, requiring a cross-functional cultural commitment uniting processes, standardizing model orchestration end-to-end, creating a centralized repository for all models along with comprehensive governance capabilities (tackling lineage, monitoring, etc.), and implementing better governance, monitoring, and audit trails for all models in use. In practice, well-implemented ModelOps helps increase explainability and compliance while reducing risk for all models by providing a unified system to deploy, monitor, and govern all models. Teams can better make apples-to-apples comparisons between models given standardized processes during training and deployment, release models with faster cycles, be alerted automatically when model performance benchmarks drop below acceptable thresholds, and understand the history and lineage of models in use across the organization. AI content generation AI has matured greatly over the last few years and is now being leveraged in creating content across all sorts of mediums, including text, images, code, and videos. Last June, OpenAI released its first commercial beta product — a developer-focused API that contained GPT-3, a powerful general-purpose language model with 175 billion parameters. As of earlier this year, tens of thousands of developers had built more than 300 applications on the platform, generating 4.5 billion words per day on average. OpenAI has already signed a number of early commercial deals, most notably with Microsoft, which has leveraged GPT-3 within Power Apps to return formulas based on semantic searches, enabling “citizen developers” to generate code with limited coding ability. 
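For a sense of what this looked like in practice, here is a rough sketch of calling the GPT-3 completions endpoint with the openai Python client as it existed around 2021; the prompt and parameters are illustrative, and access assumed an API key from the beta program.

```python
import os

import openai  # pip install openai

# Requires an API key from OpenAI's (then invite-only) beta program.
openai.api_key = os.environ["OPENAI_API_KEY"]

# Ask the general-purpose language model to draft short marketing copy --
# the kind of AI content generation use case described above.
response = openai.Completion.create(
    engine="davinci",  # the largest GPT-3 model available at the time
    prompt="Write a one-sentence product description for a reusable water bottle:",
    max_tokens=40,
    temperature=0.7,
)

print(response.choices[0].text.strip())
```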
Additionally, GitHub leveraged OpenAI Codex, a descendant of GPT-3 containing both natural language and billions of lines of source code from public code repositories, to launch the controversial GitHub Copilot , which aims to make coding faster by suggesting entire functions to autocomplete code within the code editor. With OpenAI primarily focused on English-centric models, a growing number of companies are working on non-English models. In Europe, the German startup Aleph Alpha raised $27 million earlier this year to build a “sovereign EU-based compute infrastructure,” and has built a multilingual language model that can return coherent text results in German, French, Spanish, and Italian in addition to English. Other companies working on language-specific models include AI21 Labs building Jurassic-1 in English and Hebrew, Huawei’s PanGu-α and the Beijing Academy of Artificial Intelligence’s Wudao in Chinese, and Naver’s HyperCLOVA in Korean. On the image side, OpenAI introduced its 12-billion parameter model called DALL-E this past January, which was trained to create plausible images from text descriptions. DALL-E offers some level of control over multiple objects, their attributes, their spatial relationships, and even perspective and context. Additionally, synthetic media has matured significantly since the tongue-in-cheek 2018 Buzzfeed and Jordan Peele deepfake Obama. Consumer companies have started to leverage synthetically generated media for everything from marketing campaigns to entertainment. Earlier this year, Synthesia* partnered with Lay’s and Lionel Messi to create Messi Messages, a platform that enabled users to generate video clips of Messi customized with the names of their friends. Some other notable examples within the last year include using AI to de-age Mark Hamill both in appearance and voice in The Mandalorian, have Anthony Bourdain narrate dialogue he never said in Roadrunner , create a State Farm commercial that promoted The Last Dance, and create a synthetic voice for Val Kilmer, who lost his voice during treatment for throat cancer. With this technological advancement comes an ethical and moral quandary. Synthetic media potentially poses a risk to society including by creating content with bad intentions, such as using hate speech or other image-damaging language, states creating false narratives with synthetic actors, or celebrity and revenge deepfake pornography. Some companies have taken steps to limit access to their technology with codes of ethics like Synthesia* and Sonantic. The debate about guardrails, such as labeling the content as synthetic and identifying its creator and owner, is just getting started, and likely will remain unresolved far into the future. The continued emergence of a separate Chinese AI stack China has continued to develop as a global AI powerhouse, with a huge market that is the world’s largest producer of data. The last year saw the first real proliferation of Chinese AI consumer technology with the cross-border Western success of TikTok, based on one of the arguably best AI recommendation algorithms ever created. With the Chinese government mandating in 2017 for AI supremacy by 2030 and with financial support in the form of billions of dollars of funding supporting AI research along with the establishment of 50 new AI institutions in 2020, the pace of progress has been quick. 
Interestingly, while much of China’s technology infrastructure still relies on western-created tooling (e.g., Oracle for ERP, Salesforce for CRM), a separate homegrown stack has begun to emerge. Chinese engineers who use western infrastructure face cultural and language barriers which make it difficult to contribute to western open source projects. Additionally, on the financial side, according to Bloomberg , Chinese-based investors in U.S. AI companies from 2000 to 2020 represent just 2.4% of total AI investment in the U.S. Huawei and ZTE’s spat with the U.S. government hastened the separation of the two infrastructure stacks, which already faced unification headwinds. With nationalist sentiment at a high, localization (国产化替代) to replace western technology with homegrown infrastructure has picked up steam. The Xinchuang industry (信创) is spearheaded by a wave of companies seeking to build localized infrastructure, from the chip level through the application layer. While Xinchuang has been associated with lower quality and functionality tech, in the past year, clear progress was made within Xinchuang cloud (信创云), with notable launches including Huayun (华云), China Electronics Cloud’s CECstack, and Easystack (易捷行云). In the infrastructure layer, local Chinese infrastructure players are starting to make headway into major enterprises and government-run organizations. ByteDance launched Volcano Engine targeted toward third parties in China, based on infrastructure developed for its consumer products offering capabilities including content recommendation and personalization, growth-focused tooling like A/B testing and performance monitoring, translation, and security, in addition to traditional cloud hosting solutions. Inspur Group serves 56% of domestic state-owned enterprises and 31% of China’s top 500 companies, while Wuhan Dameng is widely used across multiple sectors. Other examples of homegrown infrastructure include PolarDB from Alibaba, GaussDB from Huawei, TBase from Tencent, TiDB from PingCAP, Boray Data, and TDengine from Taos Data. On the research side, in April, Huawei introduced the aforementioned PanGu-α, a 200 billion parameter pre-trained language model trained on 1.1TB of a Chinese text from a variety of domains. This was quickly overshadowed when the Beijing Academy of Artificial Intelligence (BAAI) announced the release of Wu Dao 2.0 in June. Wu Dao 2.0 is a multimodal AI that has 1.75 trillion parameters, 10X the number as GPT-3, making it the largest AI language system to date. Its capabilities include handling NLP and image recognition, in addition to generating written media in traditional Chinese, predicting 3D structures of proteins like AlphaFold, and more. Model training was also handled via Chinese-developed infrastructure: In order to train Wu Dao quickly (version 1.0 was only released in March), BAAI researchers built FastMoE, a distributed Mixture-of Experts training system based on PyTorch that doesn’t require Google’s TPU and can run on off-the-shelf hardware. Watch our fireside chat with Chip Huyen for further discussion on the state of Chinese AI and infrastructure. [Note: A version of this story originally ran on the author’s own website.] Matt Turck is a VC at FirstMark, where he focuses on SaaS, cloud, data, ML/AI, and infrastructure investments. Matt also organizes Data Driven NYC, the largest data community in the U.S. This story originally appeared on Mattturck.com. 
"
15,238
2,021
"Enterprise AI budgets are up 55% over 2020, Appen says | VentureBeat"
"https://venturebeat.com/2021/06/15/enterprise-ai-budgets-up-55-percent-over-2020-appen-says"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Enterprise AI budgets are up 55% over 2020, Appen says Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Enterprises are accelerating their AI strategies as a result of the COVID-19 pandemic and ballooning their budgets accordingly. The seventh annual Appen State of AI report , published today, reveals a significant year-over-year increase in AI budgets at companies of all sizes. Overall, survey respondents report budgets ranging from $500,000 to $5 million per year — a 55% increase over 2020. “This trend is a strong signal that the industry continues to grow and AI is becoming more critical to the success for companies large and small across all industries,” reads the report. The report also indicates that decision-makers are moving away from the idea of an AI silver bullet and toward the use of AI to support internal processes. Who is considered an AI decision-maker is also changing. In a reversal of 2020 trends , enterprises are increasingly moving AI responsibility out of the C-suite and into lower levels of the organization. The survey found top executives were responsible for AI initiatives at 39% of the companies surveyed, down from 71% just last year. For this research, Appen , which provides AI training data, tapped The Harris Poll to survey 501 business leaders and technical practitioners about their priorities, successes, and challenges for implementing AI. The respondents spanned industries and represented companies of all sizes. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Big budgets deliver more deployments Aside from the increase in AI budgets, the report also sheds light on how companies’ AI funding impacts both their deployments and perceived market leadership. To break this down, the researchers grouped the organizations by AI budget: $500,000, between $500,000 and $1 million, between $1 million and $3 million, and over $3 million. The findings indicate a direct correlation between budget and market leadership. Those in the lowest bracket — which accounted for 26% of companies surveyed — were much less likely to consider themselves market leaders, suggesting budget is a significant factor in gaining AI leadership. Unsurprisingly, the research also suggests budget correlates with company size, with larger companies investing more. Overall, 53% of AI teams reported budgets in the $500,000 to $5 million range, compared to about one-third in 2020. 
The report also showed a significant correlation between budget and deployment. Almost half of companies with budgets over $1 million experienced deployment rates of 61% to 90%, which the report states is “significantly higher” than for those with lower budgets. Prioritizing AI A 55% boost in budgets might seem like quite a jump for one year, but enterprises are simply following through on business objectives they’ve already articulated. In last year’s Appen State of AI report , 75% of those surveyed said they believe AI is critical to their business. And while some thought the pandemic would slow business (and in many cases, it did), the 2020 report still anticipated an expansion of AI budgets in 2021. This year’s report has made good on that prediction, while other research shows that, overall, digital transformation is the top businesses initiative this year. Beyond budgets, scientific output also speaks to the prioritization of AI around the world. Last week, the United Nations Educational, Scientific, and Cultural Organization (UNESCO) unveiled its latest Science Report — a massive undertaking published every five years to examine current trends in science governance. For the first time, the report includes a deep analysis of AI and robotics, going beyond just the global leaders and examining AI research, funding, and strategies in almost two dozen countries and global regions. Overall, the researchers — 70 authors from 52 countries, who compiled the report over 18 months — determined that AI “dominated scientific output” in recent years. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,239
2,022
"How natural language processing helps promote inclusivity in online communities | VentureBeat"
"https://venturebeat.com/ai/how-natural-language-processing-helps-promote-inclusivity-in-online-communities"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Spotlight How natural language processing helps promote inclusivity in online communities Share on Facebook Share on X Share on LinkedIn Presented by Cohere To create healthy online communities, companies need better strategies to weed out harmful posts. In this VB On-Demand event, AI/ML experts from Cohere and Google Cloud share insights into the new tools changing how moderation is done. Watch free, on-demand! Game players experience a staggering amount of online abuse. A recent study found that five out of six adults (18–45) experienced harassment in online multiplayer games, or over 80 million gamers. Three out of five young gamers (13–17) have been harassed, or nearly 14 million gamers. Identity-based harassment is on the rise, as is instances of white supremacist rhetoric. It’s happening in an increasingly raucous online world, where 2.5 quintillion bytes of data is produced every day, making content moderation, always a tricky, human-based proposition, a bigger challenge than it’s ever been. “Competing arguments suggest it’s not a rise in harassment, it’s just more visible because gaming and social media have become more popular — but what it really means is that more people than ever are experiencing toxicity,” says Mike Lavia, enterprise sales lead at Cohere. “It’s causing a lot of harm to people and it’s causing a lot of harm in the way it creates negative PR for gaming and other social communities. It’s also asking developers to balance moderation and monetization, so now developers are trying to play catch up.” Human-based methods aren’t enough The traditional way of dealing with content moderation was to have a human look at the content, validate whether it broke any trust and safety rules, and either tag it as toxic or non-toxic. Humans are still predominantly used, just because people feel like they’re probably the most accurate at identifying content, especially for images and videos. However, training humans on trust and safety policies, and pinpointing harmful behavior takes a long time, Lavia says, because it is often not black or white. “The way that people communicate on social media and games, and the way that language is used, especially in the last two or three years, is shifting rapidly. Constant global upheaval impacts conversations,” Lavia says. 
“By the time a human is trained to understand one toxic pattern, you might be out of date, and things start slipping through the cracks.” Natural language processing (NLP), or the ability for a computer to understand human language, has progressed in leaps and bounds over the last few years, and has emerged as an innovative way to identify toxicity in text in real time. Powerful models that understand human language are finally available to developers, and actually affordable in terms of cost, resources and scalability to integrate into existing workflows and tech stacks. How language models evolve in real time Part of moderation is staying abreast of current events, because the outside world doesn’t stay outside — it’s constantly impacting online communities and conversations. Base models are trained on terabytes of data, by scraping the web, and then fine tuning keeps models relevant to the community, the world and the business. An enterprise brings their own IP data to fine tune a model to understand their specific business or their specific task at hand. “That’s where you can extend a model to then understand your business and execute the task at a very high-performing level, and they can be updated pretty quickly,” Lavia says. “And then over time you can create thresholds to kick off the retraining and push a new one to the market, so you can create a new intent for toxicity.” You might flag any conversation about Russia and Ukraine, which might not necessarily be toxic, but is worth tracking. If a user is getting flagged a huge number of times in a session, they’re flagged, monitored and reported if necessary. “Previous models wouldn’t be able to detect that,” he says. “By retraining the model to include that type of training data, you kick off the ability to start monitoring for and identifying that type of content. With AI, and with these platforms like what Cohere is developing, it’s very easy to retrain models and continually retrain over time as you need to.” You can label misinformation, political talk, current events — any kind of topic that doesn’t fit your community, and causes the kind of division that turns users off. “What you’re seeing with Facebook and Twitter and some of the gaming platforms, where there’s significant churn, it’s primarily due to this toxic environment,” he says. “It’s hard to talk about inclusivity without talking about toxicity, because toxicity is degrading inclusivity. A lot of these platforms have to figure out what that happy medium is between monetization and moderating their platforms to make sure that it’s safe for everyone.” To learn more about how NLP models work and how developers can leverage them, how to build and scale inclusive communities cost effectively and more, don’t miss this on-demand event! Watch free on-demand now! Agenda Tailoring tools to your community’s unique vernacular and policies Increasing the capacity to understand the nuance and context of human language Using language AI that learns as toxicity evolves Significantly accelerating the ability to identify toxicity at scale Presenters David Wynn , Head of Solutions Consulting, Google Cloud for Games Mike Lavia , Enterprise Sales Lead, Cohere Dean Takahashi , Lead Writer, GamesBeat (moderator) The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
"
15,240
2,022
"Inclusive design will help create AI that works for everyone | VentureBeat"
"https://venturebeat.com/ai/inclusive-design-will-help-create-ai-that-works-for-everyone"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Inclusive design will help create AI that works for everyone Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A few years ago, a New Jersey man was arrested for shoplifting and spent ten days in jail. He was actually 30 miles away during the time of the incident; police facial recognition software wrongfully identified him. Facial recognition’s race and gender failings are well known. Often trained on datasets of primarily white men, the technology fails to recognize other demographics as accurately. This is only one example of design that excludes certain demographics. Consider virtual assistants that don’t understand local dialects, robotic humanoids that reinforce gender stereotypes or medical tools that don’t work as well on darker skin tones. Londa Schiebinger , the John L. Hinds Professor of History of Science at Stanford University, is the founding director of the Gendered Innovations in Science, Health & Medicine, Engineering, and Environment Project and is part of the teaching team for Innovations in Inclusive Design. In this interview, Schiebinger discusses the importance of inclusive design in artificial intelligence (AI), the tools she developed to help achieve inclusive design and her recommendations for making inclusive design a part of the product development process. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Your course explores a variety of concepts and principles in inclusive design. What does the term inclusive design mean? Londa Schiebinger: It’s design that works for everyone across all of society. If inclusive design is the goal, then intersectional tools are what get you there. We developed intersectional design cards that cover a variety of social factors like sexuality, geographic location, race and ethnicity, and socioeconomic status (the cards won notable distinction at the 2022 Core77 Design Awards). These are factors where we see social inequalities show up, especially in the U.S. and Western Europe. These cards help design teams see which populations they might not have considered, so they don’t design for an abstract, non-existing person. The social factors in our cards are by no means an exhaustive list, so we also include blank cards and invite people to create their own factors. The goal in inclusive design is to get away from designing for the default, mid-sized male, and to consider the full range of users. 
Why is inclusive design important to product development in AI? What are the risks of developing AI technologies that are not inclusive? Schiebinger: If you don’t have inclusive design, you’re going to reaffirm, amplify and harden unconscious biases. Take nursing robots, as an example. The nursing robot’s goal is to get patients to comply with healthcare instructions, whether that’s doing exercises or taking medication. Human-robot interaction shows us that people interact more with robots that are humanoid, and we also know that nurses are 90% women in real life. Does this mean we get better patient compliance if we feminize nursing robots? Perhaps, but if you do that, you also harden the stereotype that nursing is a woman’s profession, and you close out the men who are interested in nursing. Feminizing nursing robots exacerbates those stereotypes. One interesting idea promotes robot neutrality where you don’t anthropomorphize the robot, and you keep it out of human space. But does this reduce patient compliance? Essentially, we want designers to think about the social norms that are involved in human relations and to question those norms. Doing so will help them create products that embody a new configuration of social norms, engendering what I like to call a virtuous circle – a process of cultural change that is more equitable, sustainable and inclusive. What technology product does a poor job of being inclusive? Schiebinger: The pulse oximeter, which was developed in 1972, was so important during the early days of COVID as the first line of defense in emergency rooms. But we learned in 1989 that it doesn’t give accurate oxygen saturation readings for people with darker skin. If a patient doesn’t desaturate to 88% by the pulse oximeter’s reading, they may not get the life-saving oxygen they need. And even if they do get supplemental oxygen, insurance companies don’t pay unless you reach a certain reading. We’ve known about this product failure for decades, but it somehow didn’t become a priority to fix. I’m hoping that the experience of the pandemic will prioritize this important fix, because the lack of inclusivity in the technology is causing failures in healthcare. We’ve also used virtual assistants as a key example in our class for several years now, because we know that voice assistants that default to a female persona are subjected to harassment and because they again reinforce the stereotype that assistants are female. There’s also a huge challenge with voice assistants misunderstanding African American vernacular or people who speak English with an accent. In order to be more inclusive, voice assistants need to work for people with different educational backgrounds, from different parts of the country, and from different cultures. What’s an example of an AI product with great, inclusive design? Schiebinger: The positive example I like to give is facial recognition. Computer scientists Joy Buolamwini and Timnit Gebru wrote a paper called “ Gender Shades ,” in which they found that women’s faces were not recognized as well as men’s faces, and darker-skinned people were not recognized as easily as those with lighter skin. But then they did the intersectional analysis and found that Black women were not seen 35% of the time. Using what I call “intersectional innovation , ” they created a new dataset using parliamentary members from Africa and Europe and built an excellent, more inclusive database for Blacks, whites, men and women. 
But we notice that there is still room for improvement; the database could be expanded to include Asians, Indigenous people of the Americas and Australia, and possibly nonbinary or transgender people. For inclusive design, we have to be able to manipulate the database. If you’re doing natural language processing and using the corpus of the English language found online, then you’re going to get the biases that humans have put into that data. There are databases we can control and make work for everybody, but for databases we can’t control, we need other tools, so the algorithm does not return biased results. In your course, students are first introduced to inclusive design principles before being tasked with designing and prototyping their own inclusive technologies. What are some of the interesting prototypes in the area of AI that you’ve seen come out of your class? Schiebinger: During our social robots unit, a group of students created a robot called ReCyclops that solves for 1) not knowing what plastics should go into each recycle bin, and 2) the unpleasant labor of workers sorting through the recycling to determine what is acceptable. ReCyclops can read the label on an item or listen to a user’s voice input to determine which bin the item goes into. The robots are placed in geographically logical and accessible locations – attaching to existing waste containers – in order to serve all users within a community. How would you recommend that AI professional designers and developers consider inclusive design factors throughout the product development process? Schiebinger: I think we should first do a sustainability lifecycle assessment to ensure that the computing power required isn’t contributing to climate change. Next, we need to do a social lifecycle assessment that scrutinizes working conditions for people in the supply chain. And finally, we need an inclusive lifecycle assessment to make sure the product works for everyone. If we slow down and don’t break things, we can accomplish this. With these assessments, we can use intersectional design to create inclusive technologies that enhance social equity and environmental sustainability. Prabha Kannan is a contributing writer for the Stanford Institute for Human-Centered AI. This story originally appeared on Hai.stanford.edu. Copyright 2022 DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,241
2,022
"Addressing office “boy’s clubs”: How to create an inclusive company culture | VentureBeat"
"https://venturebeat.com/business/addressing-office-boys-clubs-how-to-create-an-inclusive-company-culture"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Addressing office “boy’s clubs”: How to create an inclusive company culture Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The rules of the workplace were written over 100 years ago by men for men. Today, we still see a “boy’s club” mentality throughout every industry. In fact, a recent Harvard study shows that male employees are promoted faster than their female counterparts while under male managers. Conversely, the study found that under female managers, all genders receive equal promotional treatment. Researchers predict that about 40% of the gender pay gap would be expunged if male-to-male promotion advantages were eliminated. But eliminating boy’s-club toxicity isn’t going to happen overnight. So how does a company change its culture ? VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Let’s explore what it takes for a company to make a cultural shift toward more equality in the workplace. Company culture shift: Change the focus of recruiting Knowing what skills are needed to get a job done is the easy part of recruiting. Sifting through resumes for a candidate’s pedigree is as simple as scanning for the right text. Where they went to school, how much experience they have, and what titles they’ve held are all easily identifiable and quantifiable. It takes more skill to predict how the new team member may fit into the office dynamic and what contributions and new skills they might bring to the table. This requires a more personal approach in the interview process. Rather than spending time during an interview only on someone’s professional background, it’s important to spend time discovering whether the candidate is an emotionally intelligent individual who can not only contribute to but enhance the team. Changing the focus to hiring for passion as well as for skill can bring a different kind of employee to your organization. Skill can be taught, but passion is innate. It’s either there or it isn’t. From culture fit to culture add A lot has been made about “culture fit” in today’s hiring climate. Hiring managers are told to emphasize the likelihood of a prospective candidate to adapt to the core values and gel with the diverse personalities that make an organization. But improving company culture isn’t about maintaining the status quo. Looking for a culture “fit” may be more akin to fitting a square peg into a round hole. 
This approach requires much of the employee, setting the expectation that they must adapt to the culture, rather than the existing culture evolving and improving over time. A culture “add,” however, is someone who joins an existing organization’s culture and brings something to the table. This lets the whole become more than just the sum of its parts. A culture add lets a CEO say, “What is my current company culture missing?” and allows the employees to benefit from the new gifts brought to the team from a diverse hire. More importantly, it weeds out those who might bring an “ends justifies the means” approach to job performance. It takes everyone to create an inclusive company culture When you have kind, inclusive people on your team, it creates a kind culture. Hiring with an eye toward improving culture starts with leadership making emotional intelligence as much of a key performance indicator as any other professional requirement. This gives hiring managers and team leaders the ability to incorporate this into their recruiting, hiring and management practices. But CEOs can’t just dictate this from on high and expect the rank and file to march in line. CEOs need to display the kind of qualities they wish to see in the employees their hiring managers bring on board. Part of this means activating solutions for change — setting the tone by emphasizing core qualities or pillars that guide the organization. Communication CEOs need to communicate with their employees, not to them. Transparency makes employees feel more secure in their jobs. It’s the CEO’s job to make sure employees know what’s going on and make them feel informed. When leaders communicate at a high level, it helps team members feel more secure about communicating back to them and to each other. In a toxic workplace, open communication can be viewed as aggressive or pushy. In an empathetic workplace, communication fosters honesty and an open exchange of ideas. Sharing Sharing is more than just communicating. It means bringing something personal to your communication. When the CEO isn’t just a person constantly hiding in a corner office, it brings humanity to their leadership style. When employees share with each other, it means they’re talking about more than just work. They’re engaging on a more personal level, which goes a long way toward improving the culture of a workplace. Vulnerability Some view vulnerability as a weakness, but nothing could be further from the truth. To admit a mistake, acknowledge a fault, or ask for help takes courage, and that courage should be acknowledged. Workers should feel safe to share what is on their mind — from difficulty with a task, to issues with child care or hardships at home — without fear of it adversely affecting their job. The best leaders inspire people to do their best, and that works in both directions. When an executive shows their vulnerable side, it can inspire greater commitment from their employees. Showing compassion to team members who are having a tough time by providing support, whether it’s offering to help or words of encouragement, can go a long way towards employee dedication and satisfaction. When we create a space for vulnerability, we create a culture that feels safe, and a safe culture is a productive one. Company culture and the happiness factor The goal of corporate culture is to create a workplace where everyone feels valued and where all employees can contribute and succeed. 
An improved corporate culture can certainly make people happier in their personal relationships with their bosses and other team members. It can also make organizations more effective. Employees that communicate, share, and feel comfortable to express vulnerabilities can overcome obstacles that inhibit other teams’ performance. Companies that improve culture with an eye on diversity, empathy and kindness can turn a “Great Resignation” into a “Great Retention” by creating an environment where everyone wants to work, and no one wants to leave. Shelley Zalis is CEO of The Female Quotient (The FQ). DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,242
2,022
"Flexibility is the workplace benefit we want the most | VentureBeat"
"https://venturebeat.com/programming-development/flexibility-is-the-workplace-benefit-we-want-the-most"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Jobs Flexibility is the workplace benefit we want the most Share on Facebook Share on X Share on LinkedIn As the dust settles on the return to work post-pandemic, one big sticking point is emerging for American workers: flexibility. So what does flexibility mean in this context? It is things like the ability to set your own schedule: you can go to a doctor’s appointment during the work day or pop out for groceries — yet still complete all your hours. Or it looks like a hybrid or fully remote work set up, with some days at home, and some in the office. McKinsey’s recent American Opportunity Survey discovered that 87% of people will take the chance to work flexibly when offered it, and that this is consistent across demographics, occupations and geographies. Right now, 58% of Americans are working from home at least one day a week, with 35% able to work from home up to five days a week. The most recent Survey of Working Arrangements and Attitudes, revealed that working from home is much more common in major cities than small cities and towns, and those who are able to work from home would like to be able to do so five days a week. We are at a tipping point with our work and how we want to do it. A recent Pew Research Center survey also discovered that flexibility really matters to Americans and if they don’t get it, they will vote with their feet. For 48% of respondents with a child younger than 18, child care issues were a reason they quit a job. Another 45% left a job over lack of flexibility regarding when they could put in their hours. Other factors for recent job leavers include 39% who said their reason was that they were working too many hours, while 30% said it was the opposite: they were working too few hours. Another 35% wanted the flexibility to relocate to a different area. It is clear that many people are now looking for jobs to suit the way they want to live their lives, and not the other way around. If that’s you, and you are seeking a new opportunity, we have three below, and you can check out the Job Board for many more too. Staff Software Engineer, Indeed In this remote position, as a Staff Software Engineer in Indeed ’s SMB Growth team, you will build software that guides employers to post high-quality jobs which achieve good performance in search results. You will build UIs and APIs to offer helpful guidance to employers and highlight problems with jobs that could prevent them from being filled. You will join a collaborative team of extremely talented engineers, UX designers, and product managers. 
If looking at data, slicing and dicing it in multiple ways and coming up with interesting insights is something you thrive upon, you will fit right in. You’ll need a Bachelor’s degree, five years’ experience programming with Java, JavaScript or Python, and three years’ of experience building full-stack applications with React and Redux. Read the full job description. UX Developer, Immuta Immuta is the leader in automated data access, helping organizations achieve data compliance. As a UX Developer , you will help shape and design a world-class user experience for Immuta’s entire platform. You will produce elegant front-end code helping your squad implement attractive UIs across and contributing to Immuta ’s design system’s pattern library. You’ll work with UX designers, researchers and product managers to understand the challenges and needs facing customers. You will need previous experience as a front-end developer as well as experience with prototyping, and you’ll have three years working in front end development. In return, the company offers many benefits including 100% employer paid healthcare, paid parental leave (maternity and paternity) and unlimited paid time off. Find out more here. Engineering Lead Manager — Addressability, Index Exchange Index is a global advertising technology company helping media owners monetize their digital content through advertising, so that consumers can continue enjoying free and open access to content online. The Engineering Lead / Manager is a critical role, and is responsible for building, growing and supporting privacy-safe products which support buyer addressability. You will need a Bachelor’s degree or higher in computer science, engineering or equivalent, demonstrated leadership and management of 5-15 person teams, and a passion for cutting edge technologies, leading teams and nurturing talent development. The role is remote, and the company also offers time off and flexible work schedules. See the full job spec. Want more flex in your day? Check out thousands of open roles on the VentureBeat Job Board VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,243
2,022
"Maximize your teams' productivity despite looming uncertainty | VentureBeat"
"https://venturebeat.com/programming-development/maximize-your-teams-productivity-despite-looming-uncertainty"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Maximize your teams’ productivity despite looming uncertainty Share on Facebook Share on X Share on LinkedIn Presented by Thunkable The tech industry is going through a mass upheaval, with layoffs happening at an alarming rate. According to Layoffs.fyi , as of mid-November, more than 45,000 workers in the U.S. tech industry, at giants such as Netflix and Twitter, have been laid off this month alone. A New York Times article addresses a cycle of aggressive expansions, corporate excess and wild talent wars to secure the best and brightest. Rapidly expanding headcount, fast growth over profits and being caught in a tech bubble have been cited as some of the causes for the ongoing layoffs. Unfortunately, the onslaught of layoff news comes with the economic uncertainty felt on a global scale following a worldwide pandemic. Ebbs and flows of the economy are not new, and each generation has experienced several in their lifetimes. But how can businesses prepare for such uncertain times and reduced workforces — particularly in tech? With the advancement of technology, are there investments or improvements that companies can implement to weather the storm of upcoming uncertainty? Equipping your team with an effective toolkit Companies are adding low-code and no-code tools to their tech stack and upskilling their employees. According to a 2021 Gartner report , 70% of new applications built by businesses will rely on low-code or no-code technologies by 2025. In 2020, it was less than 25%. These tools are designed to be accessible by people without a background in software development in order to create software without learning complex languages. While low-code and no-code solutions are not a silver bullet for all situations, they can provide organizations with a faster and more frictionless transition to go from idea to implementation with less headcount and impact on other teams. By harnessing low-code and no-code tools, companies can prepare for uncertainty by allowing their employees to produce the solutions they need effectively and efficiently. Instead of ramping up new teams or adding resources, the existing team can create or improve products without adding more resources. Addressing low- and no-code hesitancies While some companies might be on board, there are still challenges from organizational leaders. CEO of Stack Overflow , Prashanth Chandrasekar, touches on such hesitations of low-code/no-code tools , indicating their limitations along with warnings for engineers and developers. 
But while there are plenty of real concerns to acknowledge, as Prashanth does, we should reframe the problem to think about what comes next. If teams have tasks that need to be done but lack the necessary skill set to implement them, what then? As we have seen with this current landscape of market uncertainty, hiring isn’t an option for most businesses, especially ones already trying to make ends meet. Instead, many are harnessing the creativity and talent of their current teams, equipping them with the tools to solve their own problems and build what they need to make their roles more cross-functionally productive. Resource-constrained teams are already looking for ways to self-serve. Then comes the worry of, “If the company adopts low-code and no-code tools, will that significantly reduce developers’ workloads or make them obsolete?” Frank Fernandes, Director of Engineering at CreditKarma , had the perfect answer at the VentureBeat Low-Code/No-Code Summit, stating, “You’d think engineers would be apprehensive about using [low-code/no-code] tooling. But we see quite the opposite. Engineers like to build things once, build it in a scalable manner, and then not work on it again. You’re always replacing yourself, working on new and cool tech.” As Frank Fernandes explains, low-code and no-code tools make developers more efficient by allowing them to focus on higher-level tasks. The reality of low-code and no-code tooling Non-developers can benefit from low-code and no-code tools by creating solutions or new products without learning complex coding languages. Similarly, developers can focus on larger projects with more impact to advance business goals rather than being bogged down in routine maintenance or simple problems. Thus, low-code and no-code tools are a win-win for businesses and developers alike for the following reasons: 1. Solves the “developer drought” It’s no secret that we are in the midst of a “developer drought,” and there aren’t enough software engineers to keep up with the demand. With the slowdown in hiring across the tech sector, companies are scrambling to find ways to get the most out of their remaining employees. Creating a culture of self-service with low-code and no-code tools starts to foster the concept of Citizen Development within the organization. Equipping and empowering employees to create their own solutions for problems they or your customers face allows you to maximize productivity. Upskilling employees with no-code tools can help your organization be more responsive to issues previously stuck in the backlog. And it will help your citizen developer employees with opportunities for career advancement. Consider the example of one company, Mortgage Educators and Compliance (MEC). In providing online resources for mortgage training and education to professionals looking to get licensed, MEC wanted to offer a companion mobile application to expand reach with their audience. Of course, the app needed to serve their customers more efficiently and innovatively. The IT department tasked with building the app had never made a mobile app before, so their only solution was turning to no code. With the ease of use of no-code tools, MEC could develop the mobile app quickly and, as a result, increase traffic and revenue. Traffic increased to over a thousand active users a month and welcomed a 50% revenue boost in sales. 
By upskilling the team with no-code tools, MEC achieved a competitive edge in the industry and increased business sales, all while providing a custom customer experience. 2. Eliminates department silos and speeds up project cycles Low-code and no-code platforms have emerged as powerful tools that can help eliminate departmental silos and speed up project cycles. By bringing together technical and non-technical employees, no-code app development fosters collaboration between teams and allows projects to be completed more quickly and efficiently. This is particularly important in today’s rapidly changing business environment, where organizations are constantly pressured to move faster and produce tangible results. At Thunkable, we worked with the founders of an emotional health platform called Wave, which has a mission to make mental healthcare accessible to everyone — but they struggled to develop a mobile app. There were significant communication roadblocks between the creative and technical teams on what needed to be done to bring this app to life. To bridge the gap, the founding team member and Head of Product Operations turned to no-code mobile app development to create a comprehensive prototype to convey everything that needed to go into the app. Thanks to no-code tooling, the creative team was able to cut the project timeline down by half and demonstrate the functionality the engineering team needed to develop, create a shareable asset with key stakeholders, troubleshoot testing capabilities and enable easy implementation of feedback. 3. Creates the developer of the future The blending of skill sets is now a must-have for IT, development, marketing and product teams. While low-code and no-code discussions often revolve around developers and non-developers, instead, we should view these tools as creating the developer of the future. No longer will it be about who knows how to write this programming language versus that programming language. Preparing innovative teams for the future is less a discussion of coders versus non-coders. Instead, the best organizations find that developers of the future are the individuals who have the ideas, skills and access to the necessary tools to implement action and accelerate innovation. As Chris Wanstrath, the former CEO of the code-sharing repository Github put it, “The future of coding is no coding at all.” GitHub is the way that every software engineer collaborates. If you look at most software engineers, they do not start from square one. They start by visiting GitHub when they want to build something. They look at what else is out there. Going back to the point Frank Fernandes made, for modularized sets of code, if it has already been done once, no one should spend time reinventing the wheel. The vast majority of people are taking simple ideas and combining existing ones in a new format specific to their use case. That is precisely the future of software building, the future of coding, which is no coding at all. Don’t wait for predictions; prepare and plan Low-code and no-code tooling allow for an inclusive and blended global workforce. Today, low-code and no-code tools are equipping anyone with an idea to execute and bring that idea to life. No longer is innovation and invention restricted to only the product roadmap and engineering bandwidth. Instead, it belongs to whoever has the idea and the passion for getting it done. My advice to organizations of all sizes is to focus on planning and preparing for the future, not on predictions. 
The best way organizations and employees can weather turbulent times is to prepare and equip their teams with the necessary skills and toolkits to pivot and innovate. Digitally transform your business with an all-in-one solution for building mobile apps for Apple and Android devices, as well as the mobile web. Thunkable is the most powerful no-code platform for mobile app development. Start building mobile apps for free! Named to Forbes 30 Under 30 and a sought-after thought leader in the no-code space, Arun Saigal is leading the revolution in no-code mobile development. He is the Co-Founder and CEO of Thunkable, the no-code tool that allows anyone to design, develop, and deploy native Android, iOS, and mobile web applications. Through his innovations around app design, development, testing, and publishing processes, he makes it possible for individuals to better their communities, prototype ideas, launch businesses, and solve problems that would otherwise be unaddressed by existing tech providers. Through Thunkable, Arun has empowered more than three million people in 184 countries to build more than seven million apps with his company. "
15,244
2,022
"Top employee cybersecurity tips for remote work and travel | VentureBeat"
"https://venturebeat.com/security/top-employee-cybersecurity-tips-for-remote-work-and-travel"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Top employee cybersecurity tips for remote work and travel Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. With the holidays approaching, many remote workers, already at heightened risk of cyberattacks, will be traveling booking holiday travel to visit family and friends. This will likely exacerbate IT teams’ anxiety about cybersecurity , already heightened by the pandemic and its aftereffects. In a survey by the Ponemon Institute, 65% of IT and security professionals said they found it easier to protect an organization’s confidential information when staff were working in the office. Whether employees are working from home, a conference or even vacation, security pitfalls abound. The fact is that with every remote worker, an organization’s attack surface grows larger. Some employees let their cyber guard down while working from home. For others, traveling leads to tiredness and poor decision-making, including taking security shortcuts. This is a problem when 76% of CEOs admit to bypassing security protocols to get something done faster. While technology has made significant strides in protecting us from ourselves, working remotely can quickly go south if we don’t take basic cybersecurity precautions. This article covers a range of security best practices for remote work and travel. Obviously, not every tip applies to every situation. That said, it is crucial to understand your current and future surroundings, assess their relative risk and take steps to protect your credentials, devices and confidential data. Here are some tips to help improve your security posture during remote work or travel. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Do this first: Lock your SIM card Trip or no trip, lock your SIM card. SIM-jacking (or SIM-swapping, unauthorized port-out or “slamming”) is a real and underreported crime where threat actors pretend to be you, contact your wireless provider and “port over” your SIM card to your (their) “new phone.” Imagine someone stealing your entire online life, including your social media accounts. In other words, your phone number is now theirs. All your password resets now run through the threat actor. Considering how many work credentials, social media accounts and apps run through your phone number, the nightmare of this crime quickly becomes evident. If you haven’t already done so, lock down your SIM card with your wireless provider. 
Here is some information on Verizon’s “Number Lock ” feature. Cybersecurity tips for remote and traveling workers Back everything up all day, every day. If traveling, leave the backup at home or in the cloud. Use a password-protected WPA-enabled Wi-Fi (ideally WPA3) network. Create a strong password (with upper and lower case letters, distinctive characters, and several characters long). Never store passwords on your person or on the phone, including in the notes section. Ideally, your employer should be using a password manager , but chances are they’re not. According to SpecOps’ 2022 Weak Password Report , 54% of businesses do not use a password manager. Even more troubling, 48% of organizations don’t have user verification for calls to the IT service desk. Patch and update every device you are using, including apps. Do the same for the browsers and everything else you’re running on those devices. In August 2022, Apple put out the word that unpatched versions of iPads, iPhones and Macs could be essentially taken over by threat actors. Make sure everything is current as you step into an unfamiliar environment. Here’s how to update every app on your iPhone and iPad if you don’t have them set to automatically update — all at once: iPhone Go to the app store. Click on “Apps.” Click on Account (upper right). Click “Update All.” In addition to updating and patching everything, make sure browsers are running strict security settings, especially when outside your home office. If you don’t want to mess with settings, consider downloading Mozilla Firefox Focus and making it your travel browser. Firefox Focus defaults to purging the cache after every use, leaving behind zero breadcrumbs to exploit. Use two-factor authentication ( 2FA ) everywhere and with everything. When choosing how to receive the authentication code, always opt for token over text as it’s much more secure. At Black Hat 2022, a Swedish research team demonstrated exactly how insecure text authentications are. If a hacker has your login credentials and phone number, text-based authentication simply won’t protect you. Update your Zoom software. Ivan Fratric, a security researcher with Google Project Zero, demonstrated how a bug in an earlier version of Zoom (4.4) allowed remote code execution by exploiting the XMPP code in Zoom’s Chat function. Once the payload was activated, Fratric was able to spoof messages. In other words, he was able to impersonate anyone you work with. What could go wrong? Security and travel: Leaving the home office Whether headed to Starbucks, Las Vegas or overseas, digital nomads should pack lightly. Leave unneeded devices at home. Take just the essentials to get your job done without compromising your entire personal history. Bring a laptop lock to lock your computer to any workstation, as IBM instructs its traveling employees. Also, invest in a physical one-time password (OTP) authenticator. Some companies, like Google, require employees to use them. Employees cannot access anything without the physical device. Leave sensitive data at home. Don’t bring devices containing personally identifiable information (PII) or confidential company documents. Do you use a particular laptop for online banking and signing mortgage docs? Leave it at home. Want to take your work computer on holiday? Reconsider. What happens to your career if company secrets fall into the wrong hands? Of course, taking your laptop on a business trip is expected, but just make sure it’s free of your personally identifiable information. 
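To make the strong-password and token-over-text advice above concrete, here is a minimal Python sketch using only the standard library: it generates a random password and computes an RFC 6238 time-based one-time password of the kind an authenticator app or hardware token produces. The function names and the demo secret are illustrative placeholders for this sketch, not part of any particular product or of the tips above.

import base64, hashlib, hmac, secrets, string, struct, time

def generate_password(length: int = 16) -> str:
    # Random password drawn from upper/lower case letters, digits and symbols.
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    # RFC 6238 time-based one-time password from a base32-encoded shared secret.
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    print("Example password:", generate_password())
    # "JBSWY3DPEHPK3PXP" is a well-known documentation secret, not a real credential.
    print("Current one-time code:", totp("JBSWY3DPEHPK3PXP"))

Because the code is computed locally from a shared secret and the current time, it never crosses the cellular network the way a text-message code does, which is one reason token-based codes are harder to intercept.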
Use RFID blockers to shield your passport and credit cards from “contactless crime.” While contactless payments are convenient at grocery stores and toll booths, they can be quite problematic within range of threat actors employing radio frequency identification (RFID) scanners. An RFID scanner in the wrong hands allows hackers to simply walk past a group of people and unmask identifiable card information. The simple way to guard against this is to employ RFID blockers (basically card envelopes, or “sleeves”) that protect payment cards, room keys and passports from radio frequency attacks, or skimming attacks. There are now entire categories of wallets , bags and purses integrating RFID technology. Fortunately, more modern RFID chips make pulling off this caper much more difficult — but not impossible. Consider using a Privacy Screen for your laptop and phone. When traveling to a security-fraught location, turn off Wi-Fi, Bluetooth and Near Field Communication (NFC) on your phone, tablet and laptop. Funny things can happen when traveling to China or even an unsecured Starbucks. Choose a password-protected hotspot over hotel Wi-Fi. If you must use hotel Wi-Fi, pair with a VPN. Be wary of Bluetooth devices like your remote mouse, keyboard and AirPods. Use a VPN everywhere you go. According to Cloudwards , 57% of respondents say they don’t need a VPN for personal use, and 22% say they don’t need one for work. Encrypt text messages and chats and other communication by using Telegram, Signal or another encryption-based communication platform. Assume third parties are reading unencrypted apps. Wrapping up As you can see, most cybersecurity when traveling involves front-end preparation. Like everything else security-related, it’s crucial to keep systems, software and browsers updated and patched. When traveling abroad, understand that not everywhere is home of the free. Know where you’re going and what their local privacy laws are. In summary, keep a low profile when working remotely or traveling. Don’t take any chances or unnecessary risks. Roy Zur is CEO of ThriveDX’s enterprise division. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,245
2,022
"Zero trust unleashes the full potential of digital transformation | VentureBeat"
"https://venturebeat.com/security/zero-trust-unleashes-the-full-potential-of-digital-transformation"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Zero trust unleashes the full potential of digital transformation Share on Facebook Share on X Share on LinkedIn Presented by Zscaler Against a backdrop of rapid digital transformation, zero trust has emerged as the ideal framework for securing enterprise users, workloads and devices in a distributed cloud- and mobile-centric world. IT leaders globally are waking up to this as zero trust moves into the mainstream, disrupting decades of legacy security and networking principles. More than 90% of IT leaders who have started their migration to the cloud have already implemented a zero trust security strategy, or are in the process of implementing one in the next year. That’s according to the findings of the global State of Zero Trust Transformation report, which Zscaler commissioned to seek insights from more than 1,900 C-level decision-makers from organizations that have begun migrating applications and services to the cloud. This is good progress, and the reasons behind it show optimism for implementing a zero trust architecture beyond the next year. However, the pressure is on enterprises to speed up this process as much as possible. As the world around us continues to take twists and turns and create challenging economic conditions, with supply chain uncertainty, ever-changing customer and employee demands as well as budget pressures, organizations are finding it increasingly difficult to succeed without the speed, agility, flexibility and efficiency afforded by the cloud. Securing the case for zero trust Digital transformation can never be a one-way street. Just as transforming the network inevitably leads to an evolved cybersecurity strategy, transformed security solutions can be catalysts for change in other areas of an organization. A cloud-based approach to zero trust, with the visibility and control it provides over a network’s users and traffic, plays an invaluable role in enabling an organization’s safe, seamless digital transformation. Truly transformative zero trust enables organizations to leave their legacy infrastructure behind. More than two-thirds (68%) of surveyed IT decision-makers either agree that secure cloud transformation is impossible with legacy network security infrastructure or that zero trust network access has clear advantages over traditional firewalls and VPNs with regard to securing remote access to applications. With only 22% of respondents fully confident their organization is leveraging the full potential of their cloud infrastructure, there’s an evident need to think beyond just security going forward. 
Approached from a holistic IT perspective, zero trust can unlock a wealth of opportunities in an overall digitization process. Yes, it can prevent large-scale cyberattacks, but it can also do much more, from driving greater innovation to supporting better employee engagement and delivering tangible cost efficiencies. Unlocking the full potential of zero trust The survey results also point to a disconnect between the business and IT teams alongside a critical misunderstanding of the rationale behind digital transformation. The report shows that organizations still see transformation as a technology issue — a way to move spending from infrastructure to the cloud –rather than an integral part of the business strategy. IT leaders focused on driving the business understand that transformation is not only about migrating apps to the cloud. They recognize that the network and its security must be transformed as well for the business to realize the full potential of digitization. As organizations grapple with providing a new hybrid workplace and digitized production infrastructure, they are relying on a range of emerging technologies such as IoT/OT, 5G and even the metaverse. They must broaden the lens through which they see zero trust and digital transformation. A zero trust platform has the power to redesign business and organizational infrastructure requirements: to become a business driver that enables companies not only to offer the hybrid work model employees are demanding, but also to become fully digitized organizations with all the benefits this entails, from agility and efficiency to future-proofed infrastructure. There is an incredible opportunity for IT leaders to educate business decision-makers on zero trust and bring it to the table as a high-value business driver. It’s the missing link helping businesses empower and ready themselves for future technologies today. Nathan Howe is Vice President, Emerging Technology and 5G at Zscaler. For more insights, read the full State of Zero Trust Transformation report. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,246
2,022
"Inside dark web marketplaces: Amateur cybercriminals collaborate with professional syndicates | VentureBeat"
"https://venturebeat.com/2022/07/21/darknet-marketplaces-amateur-cybercriminals-and-professional-syndicates-collaborating"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Inside dark web marketplaces: Amateur cybercriminals collaborate with professional syndicates Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. One listing for a remote access trojan (RAT) setup and mentoring service promised “Make money. Fast. Simple. Easy.” For $449, amateur cybercriminals were provided with functionalities including a full desktop clone and control with hidden browser capability, built-in keylogger and XMR miner, and hidden file manager. “From cryptocurrency mining to data extraction, there’s [sic] many ways that you can earn money using my RAT setup service,” the seller promised, dubbing its listing a “NOOB [newbie] FRIENDLY MENTORING SERVICE!!” Rise of ‘plug and play’ This is just one example of countless in the flourishing cybercrime economy , as uncovered by HP Wolf Security. The endpoint security service from HP. today released the findings of a three-month-long investigation in the report “The Evolution of Cybercrime: Why the Dark Web Is Supercharging the Threat Landscape and How to Fight Back.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The report’s starkest takeaway: Cybercriminals are operating on a near-professional footing with easy-to-launch, plug-and-play malware and ransomware attacks being offered on a software-as-a-service basis. This enables those with even the most rudimentary skills to launch cyberattacks. “Unfortunately, it’s never been easier to be a cybercriminal,” said the report’s author, Alex Holland, a senior malware analyst with HP. “Now the technology and training is available for the price of a gallon of gas.” Taking a walk on the dark side The HP Wolf Security threat intelligence team led the research, in collaboration with dark web investigators Forensic Pathways and numerous experts from cybersecurity and academia. Such cybersecurity luminaries included ex-Black Hat Michael “MafiaBoy” Calce (who hacked the FBI while still in high school) and criminologist and dark web expert Mike McGuire, Ph.D., of the University of Surrey. The investigation involved analysis of more than 35 million cybercriminal marketplace and forum posts, including 33,000 active dark web websites, 5,502 forums and 6,529 marketplaces. It also researched leaked communications of the Conti ransomware group. Most notably, findings reveal an explosion in cheap and readily available “plug and play” malware kits. 
Vendors bundle malware with malware-as-a-service, tutorials, and mentoring services – 76% of malware and 91% of such exploits retail for less than $10. As a result, just 2 to 3% of today’s cybercriminals are high coders. Popular software is also providing simple entry for cybercriminals. Vulnerabilities in Windows OS, Microsoft Office, and other web content management systems were of frequent discussion. “It’s striking how cheap and plentiful unauthorized access is,” said Holland. “You don’t have to be a capable threat attacker, you don’t have to have many skills and resources available to you. With bundling, you can get a foot in the door of the cybercrime world.” The investigation also found the following: 77% of cybercriminal marketplaces require a vendor bond – or a license to sell – that can cost up to $3,000. 85% of marketplaces use escrow payments, 92% have third-party dispute resolution services, and all provide some sort of review service. Also, because the average lifespan of a darknet Tor website is only 55 days, cybercriminals have established mechanisms to transfer reputation between sites. One such example provided a cybercriminal’s username, principle role, when they were last active, positive and negative feedback and star ratings. As Holland noted, this reveals an “honor among thieves” mentality, with cybercriminals looking to ensure “fair dealings” because they have no other legal recourse. Ransomware has created a “new cybercriminal ecosystem” that rewards smaller players, ultimately creating a “cybercrime factory line,” Holland said. Increasingly sophisticated cybercriminals The cybercrime landscape has evolved to today’s commoditization of DIY cybercrime and malware kits since hobbyists began congregating in internet chat rooms and collaborating via internet relay chat (IRC) in the early 1990s. Today, cybercrime is estimated to cost the world trillions of dollars annually – and the FBI estimates that in 2021 alone, cybercrime in the U.S. ran roughly $6.9 billion. The future will bring more sophisticated attacks but also cybercrime that is increasingly efficient, procedural, reproducible and “more boring, more mundane,” Holland said. He anticipates more damaging destructive data-denial attacks and increased professionalization that will drive far more targeted attacks. Attackers will also focus on driving efficiencies to increase ROI, and emerging technologies such as Web3 will be “both weapon and shield.” Similarly, IoT will become a bigger target. “Cybercriminals have been increasingly adopting procedures of nation-state attacks,” Holland said, pointing out that many have moved away from “smash and grab” methods. Instead, they perform more reconnaissance on a target before intruding into their network – allowing for more time ultimately spent within a compromised environment. Mastering the basics There’s no doubt that cybercriminals are often outpacing organizations. Cyberattacks are increasing and tools and techniques are evolving. “You have to accept that with unauthorized access so cheap, you can’t have the mentality that it’s never going to happen to you,” Holland said. Still, there is hope – and great opportunity for organizations to prepare and defend themselves, he emphasized. 
Key attack vectors have remained relatively unchanged, which presents defenders with “the chance to challenge whole classes of threat and enhance resilience.” Businesses should prepare for destructive data-denial attacks, increasingly targeted cyber campaigns, and cybercriminals that are employing emerging technologies, including artificial intelligence, that ultimately challenge data integrity. This comes down to “mastering the basics,” as Holland put it: Adopt best practices such as multifactor authentication and patch management. Reduce attack surface from top attack vectors like email, web browsing and file downloads by developing response plans. Prioritize self-healing hardware to boost resilience. Limit risk posed by people and partners by putting processes in place to vet supplier security and educate workforces on social engineering. Plan for worst-case scenarios by rehearsing to identify problems, make improvements and be better prepared. “Think of it as a fire drill – you have to really practice, practice, practice,” Holland said. Cybersecurity as a team sport Organizations should also be willing to collaborate. There is an opportunity for “more real-time threat intelligence sharing” among peers, he said. For instance, organizations can use threat intelligence and be proactive in horizon scanning by monitoring open discussions on underground forums. They can also work with third-party security services to uncover weak spots and critical risks that need addressing. As most attacks start “with the click of a mouse,” it is critical that everyone become more “cyber aware” on an individual level, said Ian Pratt, Ph.D., global head of security for personal systems at HP Inc. On the enterprise level, he emphasized the importance of building resiliency and shutting off as many common attack routes as possible. For instance, cybercriminals study patches upon release to reverse-engineer vulnerabilities and rapidly create exploits before other organizations need patching. Thus, speeding up patch management is essential, he said. Meanwhile, many of the most common categories of threat – such as those delivered via email and the web – can be fully neutralized through techniques such as threat containment and isolation. This can greatly reduce an organization’s attack surface regardless of whether vulnerabilities are patched. As Pratt put it, “we all need to do more to fight the growing cybercrime machine.” Holland agreed, saying: “Cybercrime is a team sport. Cybersecurity must be too.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,247
2,021
"Elder care, wireless AI, and the Internet of Medical Things | VentureBeat"
"https://venturebeat.com/ai/elder-care-wireless-ai-and-the-internet-of-medical-things"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Elder care, wireless AI, and the Internet of Medical Things Share on Facebook Share on X Share on LinkedIn As we age, we gradually agree to medical exams and medications that would have been unthinkable in our youth, until we become senior citizens — the point at which we frequently engage with doctors, and our health becomes a subject of constant concern. We’ve been trained to accept this as the cycle of life, but it’s increasingly clear that the next generation of seniors will have better experiences: Advancements in artificial intelligence and wireless technologies will enable massive streams of biometric data to be harvested and processed from wearables, internet of things (IoT) sensors, and chip-laden pills, prolonging and saving lives. At a time when there’s potential danger to seeing patients in person, and health care facilities are wary of becoming overwhelmed because of COVID-19 cases, these technologies are not merely beneficial, but incredibly important. Smarter sensors, software, and services will enable health monitoring to be less invasive and more automated than before, reducing the need for human caregivers while restoring dignity that seniors have lost over the years. Ten years ago, monitoring a senior for hip-breaking falls might have been impractical without the aid of a relative or personal nurse, but falls can now be detected and addressed immediately with smartwatches ; similarly, wearables targeting everything from swallowing problems to incontinence are now available from health startups. The next steps will be monitoring without wearables — wireless devices that reduce or eliminate human involvement in the monitoring process — and medically specific internet of medical things (IoMT) sensors that are specially designed to record human biometrics. One example: Origin Wireless has developed a “wireless AI” solution that uses Wi-Fi signals to map closed spaces. The wireless radio waves create an invisible “wave pool” in a room, and Origin’s Remote Patient Monitoring system uses AI to monitor the pool for ripples that signal disruptions. Without requiring either a camera or motion sensors, Origin RPM knows when a person abruptly shifts from standing to laying on the floor, and can trigger an alert to local caregivers or off-site family members. More subtle changes in the data streams can even indicate granular changes in a person’s activity, breathing, and sleeping. 
Japanese startup SakuraTech is using millimeter wave signals to wirelessly monitor up to four heart and respiration rates at once, promising to work through common impediments such as clothing and blankets, sending data to the AWS cloud for constant remote monitoring. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Without machine learning, interpreting room-scale, volumetric masses of wireless data in this way would be impractical — akin to a sonar system constantly seeing objects moving in the ocean without identifying their intent. But trained AI can understand the layout of a room as visualized with radio waves, then determine dangerously atypical patterns in the people who live in that room, all without violating personal privacy. Unlike AI image segmentation, Wi-Fi and millimeter wave scanning work like radar, and their data can be used to recognize patterns without the need for photo or video recording. Another company, Essence Group , recently introduced 5G PERS, a senior independent living solution that enables activity monitoring, fall detection, and voice connectivity. 5G PERS uses a collection of traditional IoT motion sensors for monitoring, but uniquely relies upon 5G cellular connectivity rather than Wi-Fi or 4G for infrastructure. Because it connects the IoT sensors to the cloud over a cellular connection, PERS 5G can operate in homes where seniors don’t have Wi-Fi routers — the solution is standalone, so it can be installed and then remotely monitored without depending on the senior to maintain separate hardware or services. General-purpose IoT sensors have used cameras and movement detectors to enable everything from smart refrigerators to industrial quality assurance systems, but medically focused IoMT sensors wirelessly connect to health clouds for individual biometric monitoring and data storage. Since they’re designed specifically for tracking specific human life signals, IoMT sensors can be far more “personal” than ever before: Their tiny chips can enable exterior motion tracking in always-on wearables or internal monitoring using ingestible wireless pills such as HQ, Inc.’s CorTemp — a core temperature probe that remains inside your body for 24-36 hours. While medical technologies keep improving, there’s no guarantee that they’ll be immediately or widely adopted. Proteus Digital Health successfully completed clinical validations last year for ingestible microchips that monitored adherence to medication schedules, but ultimately filed for bankruptcy. The problem wasn’t the practicality of the chips, but rather that they would double or triple a medication’s monthly cost. History suggests that the chip prices will continue to drop over time, giving the technology a greater chance of mass adoption and increasing the number of data streams from monitored patients. The trend is clear: IoMT sensors will only become more powerful, easier to use, and ubiquitous. New 5-nanometer chip fabrication has already yielded atomic-scale transistors that can be powered by barely any energy, and even smaller 3-nanometer chips will be commercially available next year, making microchipped pills literally easier to swallow. At the same time, mobile AI chips are nearly doubling in performance each year, such that tomorrow’s client devices could have AI capabilities superior to yesterday’s cloud and edge servers. 
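Neither Origin Wireless nor SakuraTech publishes the internals of these systems, but two of the basic signal-processing ideas described above, flagging an abrupt disturbance in an otherwise stable activity trace and recovering a respiration rate from a periodic signal, can be sketched in a few lines of Python on synthetic data. Everything below (the one-dimensional trace, the threshold, the sampling rate and band limits, the function names) is an assumption of this sketch, not either company's implementation.

import numpy as np

def abrupt_events(trace: np.ndarray, window: int = 25, threshold: float = 4.0) -> list[int]:
    # Toy change detector: flag samples that deviate sharply from the recent baseline.
    events = []
    for t in range(window, len(trace)):
        baseline = trace[t - window:t]
        mu, sigma = baseline.mean(), baseline.std() + 1e-9
        if abs(trace[t] - mu) / sigma > threshold:
            events.append(t)
    return events

def respiration_rate_bpm(samples: np.ndarray, fs: float, low_hz: float = 0.1, high_hz: float = 0.7) -> float:
    # Dominant periodicity within the typical breathing band, in breaths per minute.
    spectrum = np.abs(np.fft.rfft(samples - samples.mean()))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    band = (freqs >= low_hz) & (freqs <= high_hz)
    return float(freqs[band][np.argmax(spectrum[band])] * 60.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Quiet room, then a sudden large disturbance (e.g., a person dropping to the floor).
    activity = rng.normal(0.0, 0.05, 300)
    activity[200:210] += 1.5
    print("Possible fall-like events at samples:", abrupt_events(activity))

    fs = 20.0                      # samples per second
    t = np.arange(0, 60, 1.0 / fs)
    # Chest-displacement-like signal at roughly 15 breaths per minute (0.25 Hz) plus noise.
    chest = 0.5 * np.sin(2 * np.pi * 0.25 * t) + 0.05 * rng.normal(size=t.size)
    print(f"Estimated respiration rate: {respiration_rate_bpm(chest, fs):.1f} breaths/min")

Real deployments work with far richer inputs, such as Wi-Fi channel state information or radar returns, and with trained models rather than fixed thresholds, but the monitoring principle is the same: establish a stable baseline, detect a statistically unusual deviation, and raise an alert.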
Remote monitoring tasks that may have been too challenging two years ago will seem wholly within the power of even common smartphones two years from now. Society’s biggest challenge may be to make seniors comfortable with adopting these new technologies, as it may be easier for older users to shrug off wearables, room-scale monitors, and ingestible chips as “unnecessary” than accept them as the new normal. But as the tech keeps shrinking, it’s likely to fade into the background of our lives, eventually solving problems before we — or other human monitors — even realize what’s happening. That means today’s and tomorrow’s seniors can realistically look forward to a new era in medicine where we depend less on doctors yet benefit every day from more comprehensive health care, ultimately living longer and better than ever before. "
15,248
2,022
"Dropbox took the bait in recent phishing attack of employee credentials | VentureBeat"
"https://venturebeat.com/security/dropbox-took-the-bait-in-recent-phishing-attack-of-employee-credentials"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Dropbox took the bait in recent phishing attack of employee credentials Share on Facebook Share on X Share on LinkedIn Dropbox Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Dropbox has been added to the list of companies that have fallen prey to phishing attacks. The company announced this week that, on October 14, threat actors impersonating as CircleCI gained access to Dropbox employee credentials and stole 130 of its GitHub code repositories. GitHub alerted Dropbox to the suspicious behavior, which had begun the previous day. The code accessed contained some credentials, namely API keys used by Dropbox developers, the company said. The code and the surrounding data also included a few thousand names and email addresses belonging to Dropbox employees, current and past customers, sales leads and vendors. However, Dropbox emphasized in a blog post , that “no one’s content, passwords, or payment information was accessed, and the issue was quickly resolved.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The company also reported that its core apps and infrastructure were unaffected, as their access is even more limited and strictly controlled. “We believe the risk to customers is minimal,” Dropbox said. However, the company said, “We’re sorry we fell short.” Sophisticated phishing The announcement indicates that, despite awareness and training, phishing remains a significant (and successful) method for cyberattackers. In fact, a new report from Netskope out today reveals that, while users are warier when it comes to spotting phishing attempts in emails and text messages, they are increasingly falling prey to phishing via websites, blogs and third-party cloud apps. “In today’s evolving threat landscape, people are inundated with messages and notifications, making phishing lures hard to detect,” Dropbox wrote. “Threat actors have moved beyond simply harvesting usernames and passwords, to harvesting multifactor authentication codes as well.” The best trained employees still fall prey Security leaders weighing in on the news emphasized the importance of continued training and awareness amidst increasingly savvier attacks and scaled-up techniques. 
“Attackers today seem to be moving towards compromising ‘ecosystems.’ They want to be able to compromise apps that have massive user bases (like Dropbox) and the way they are doing that is by attempting to compromise the people in power: The developers,” said Abhay Bhargav, CEO and founder of AppSecEngineer, a security training platform. This particular campaign targeted Dropbox developers and/or devops team members, he explained. Attackers set up phishing sites “masquerading” as CircleCI. The attack phished developers and stole their GitHub credentials. Attackers compromised a developer’s access and used it to steal an API token that could be used to access some metadata around Dropbox’s employees, customers and vendors. “This is an interesting evolution of phishing, as it is oriented towards more technical users,” said Bhargav. “This eliminates the myth that only non-tech users fall for phishing attacks.” Matt Polak, CEO and founder of the cybersecurity firm Picnic Corporation, agreed that this sophisticated social engineering attack proves that even the most well-trained employees can be compromised. To reduce risk, organizations should, first, have the capability to monitor and reduce their company and employee open-source intelligence (OSINT) exposure, as attackers need this data to craft their attacks, he said. Secondly, companies need to be able to “identify and block attacker infrastructure and accounts that impersonate them or a trusted third party before these can be leveraged against their people,” said Polak. What exactly happened? Millions of developers store and manage source code in GitHub. In September, GitHub’s security team learned that threat actors impersonating CircleCI — a popular continuous integration and delivery product — had targeted GitHub users via phishing to harvest user credentials and two-factor authentication codes. The same situation occurred with Dropbox, which uses GitHub to host its public repositories and some of its private ones. The company also uses CircleCI for select internal deployments. GitHub credentials can be used to log in to CircleCI. In October, multiple Dropboxers received phishing emails impersonating CircleCI with the intent of targeting GitHub accounts, Dropbox reported. Its systems automatically quarantined some of these emails, but others landed in inboxes. These “legitimate-looking” emails directed users to visit a fake CircleCI login page, enter their GitHub username and password, and then use their hardware authentication key to pass a one-time password (OTP) to the malicious site. Having succeeded, the threat actors gained access to 130 Dropbox code repositories, which included copies of third-party libraries slightly modified for use by Dropbox, internal prototypes, and some tools and configuration files used by the security team. Immediately upon being alerted to the suspicious activity, Dropbox disabled the threat actor’s access to GitHub. The Dropbox security team immediately coordinated the rotation of all exposed credentials and worked to determine whether customer information was accessed or stolen, and what kind, the company said. A review of logs found no evidence of successful abuse. The company said it also hired outside forensic experts to verify these findings, while also reporting the event to the appropriate regulators and law enforcement.
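Bhargav's point about developer-targeted lures and Polak's advice to block impersonation infrastructure both come down to spotting domains that imitate trusted services such as CircleCI. As a purely illustrative sketch (not a control Dropbox has described using), a mail gateway could flag links whose domain closely resembles, but does not exactly match, a SaaS provider the organization actually relies on; the provider list and threshold below are hypothetical:

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Hypothetical allowlist of SaaS domains this organization legitimately uses.
KNOWN_PROVIDERS = {"circleci.com", "github.com", "dropbox.com"}

def registrable_domain(url: str) -> str:
    """Crude last-two-labels extraction; production code should use a public-suffix list."""
    host = (urlparse(url).hostname or "").lower()
    parts = host.split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

def flag_lookalike(url: str, threshold: float = 0.7) -> str | None:
    """Warn when a link's domain resembles a known provider without matching it exactly."""
    domain = registrable_domain(url)
    if domain in KNOWN_PROVIDERS:
        return None  # exact match: treat as legitimate
    for known in KNOWN_PROVIDERS:
        if SequenceMatcher(None, domain, known).ratio() >= threshold:
            return f"{url}: {domain!r} resembles {known!r} but does not match"
    return None

if __name__ == "__main__":
    for link in ("https://circleci.com/login", "https://circle-ci.net/login"):
        print(flag_lookalike(link) or f"{link}: no lookalike match")
```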
Implementing ‘phishing-resistant’ WebAuthn To prevent similar future incidents, Dropbox said it is accelerating its adoption of WebAuthn, “currently the gold standard” of MFA that is more “phishing-resistant.” Soon, the company’s whole environment will be secured by this method with hardware tokens or biometric factors. “We know it’s impossible for humans to detect every phishing lure,” the company said. “For many people, clicking links and opening attachments is a fundamental part of their job.” Even the most skeptical, vigilant professional can fall prey to a carefully crafted message delivered in the right way at the right time, said Dropbox. “This is precisely why phishing remains so effective — and why technical controls remain the best protection against these kinds of attacks,” the company said. “As threats grow more sophisticated, the more important these controls become.” "
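The OTP relay described above works because a one-time code is valid no matter which site collects it. WebAuthn closes that gap by having the browser record the origin it saw and the authenticator sign over it, so an assertion harvested on a lookalike domain cannot be replayed against the real service. A simplified illustration of that origin binding (not Dropbox's implementation, and omitting the signature and counter checks a real relying party must also perform):

```python
import base64
import json
import secrets

EXPECTED_ORIGIN = "https://login.example.com"  # hypothetical relying-party origin

def b64url_decode(data: str) -> bytes:
    """Decode base64url input with or without padding."""
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))

def origin_binding_ok(client_data_json_b64: str, expected_challenge: bytes) -> bool:
    """Check the origin and challenge the browser embedded in clientDataJSON.

    A real verifier must also validate the authenticator's signature over this
    data plus the authenticator data; only the anti-phishing origin check is shown.
    """
    client_data = json.loads(b64url_decode(client_data_json_b64))
    return (
        client_data.get("type") == "webauthn.get"
        and client_data.get("origin") == EXPECTED_ORIGIN
        and secrets.compare_digest(
            b64url_decode(client_data.get("challenge", "")), expected_challenge
        )
    )

if __name__ == "__main__":
    challenge = secrets.token_bytes(32)
    # Simulate what a browser on a lookalike phishing page would report: the
    # origin field is filled in by the browser, so an attacker cannot forge it.
    phished = base64.urlsafe_b64encode(json.dumps({
        "type": "webauthn.get",
        "origin": "https://login-example.net",
        "challenge": base64.urlsafe_b64encode(challenge).decode().rstrip("="),
    }).encode()).decode()
    print(origin_binding_ok(phished, challenge))  # False: assertion rejected
```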
15,249
2,022
"Report: Phishing attacks jump 61% in 2022, with 255M attacks detected | VentureBeat"
"https://venturebeat.com/security/report-phishing-attacks-jump-61-in-2022-with-255m-attacks-detected"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: Phishing attacks jump 61% in 2022, with 255M attacks detected Share on Facebook Share on X Share on LinkedIn Phishing fraud Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. >>Don’t miss our special issue: How Data Privacy Is Transforming Marketing. << According to the latest annual State of Phishing report from SlashNext , there has been an 80% increase in phishing threats originating from accounts on trusted services such as Microsoft , Amazon Web Services or Google , with nearly one-third (32%) of all threats now being hosted on trusted services. Additionally, the report found more than 255 million attacks in 2022 – a 61% increase in the rate of phishing attacks compared to 2021. The report data is taken from a sample of threats detected by SlashNext security products. SlashNext analyzed over a billion link-based, malicious attachments and natural language threats scanned in email, mobile and browser channels over six months in 2022. The findings highlight a dramatic increase in phishing scams, as well as a new surface of tactics, as hybrid work and the use of personal mobile devices for work continue to be a trend. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! For example, the report found that 76% of the attacks found in 2022 were credential harvesting , which is still the number one cause of breaches. Additionally, 54% of threats detected by SlashNext in 2022 were zero-hour attacks, representing a 48% increase in zero-hour threats from the end of 2021. SlashNext’s research findings indicate that organizations must move from traditional security practices and last-generation tools to a modern security strategy including robust artificial intelligence (AI) phishing controls that address all variations of phishing attacks and provides a broad range of protections. Read the full report from SlashNext. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. 
All rights reserved. "
15,250
2,022
"BYOD security rests on personnel accountability management | VentureBeat"
"https://venturebeat.com/security/byod-security-rests-on-personnel-accountability-management"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages BYOD security rests on personnel accountability management Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Mobile devices are part of the enterprise security puzzle that often gets overlooked. While security teams often focus on endpoint security , less focus is placed on how to protect employees’ personal devices. Defense startup, Adyton , which today announced it has closed a $10 million series A funding round led by Khosla Ventures, aims to address this oversight with a secure mobile-first solution called Mustr. Mustr is designed to enable distributed workforces to manage personnel, to perform real-time status checks on employees with secure document sharing, remote wipe and remote access-revocation capabilities. While Adyton’s approach is focused on protecting entities like the Department of Defense (DOD), it does highlight that personnel accountability management offers enterprises a new framework for mitigating human risk in bring your own device ( BYOD ) environments. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Securing BYOD environments in regulated industries One of the long-lasting legacies of the COVID-19 pandemic is the increased adoption of remote working. In September 2021, Gallup found that 45% of full-time employees were working partly or fully remotely. In these new remote environments, personal devices have led to the creation of BYOD environments, where employees are using a mix of their own devices and work devices to maintain productive. Ever since, securing personal devices has remained a challenge for organizations of all shapes and sizes, not least in highly regulated industries like the military. “Accountability is a core function in the military. In-person check-ins do not work when people are on the go, in field environments, or reacting to crisis situations. Leaders don’t always know if instructions are received and struggle to manually aggregate field data for reporting and decision-making,” said James Boyd, CEO and cofounder of Adyton. “Mustr has enabled defense leaders and individual service members to save countless hours of administrative work, identify service members in distress and make real-time, data-driven decisions to enhance operational readiness. Units use Mustr to improve the effectiveness of their training time, strengthen the command climate, and run operations more efficiently,” Boyd said. 
Mustr enables organizations in highly regulated industries like the military to check on the status of remote workers while providing a mechanism to share documents under the protection of double-encrypted data at rest, container-based encryption, FIPS 140-2 validated modules, and data-in-transit encryption. The DoD already uses the platform to maintain visibility over its distributed workforce of over three million personnel, which suggests this approach could also be used to increase transparency in other enterprises’ remote working environments. Mobile security solutions Mustr can loosely be described as a mobile security platform. The mobile security market is in a state of growth, with Fortune Business Insights valuing the market at $34.9 billion in 2019 and projecting it will reach $103.5 billion by 2027. While Mustr doesn’t have a direct competitor in terms of its feature set, other providers in the market are also taking steps to mitigate the risks facing mobile devices. For instance, Palo Alto Networks GlobalProtect provides secure remote access and identity-based access controls to authenticate users before they can access sensitive resources. Palo Alto Networks recently reported fiscal fourth-quarter 2022 revenue of $1.6 billion. Another potential competitor is Check Point with Harmony Mobile, which offers file protection capabilities to prevent users from downloading malicious files to mobile devices and real-time assessments designed to detect vulnerabilities and configurations that could leave devices at risk. Check Point reported $2.2 billion in revenue for 2021. At this stage, Mustr’s main differentiation is its use of double encryption and its focus on BYOD security. “Mustr is a one-of-a-kind mobile technology platform, making ‘bring your own device’ a reality for the most highly regulated organizations. There is no other product like this on the market,” Boyd said. "
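Adyton has not published how Mustr's double encryption of data at rest works, so the following is only a generic illustration of layering two independently managed keys, using the widely available cryptography package; the key names and the scheme itself are assumptions, not a description of the product:

```python
# pip install cryptography
from cryptography.fernet import Fernet

# Two independently managed keys, e.g. one scoped to the device or container
# and one held by the organization; in practice they would live in separate key stores.
device_key = Fernet.generate_key()
org_key = Fernet.generate_key()

def encrypt_at_rest(plaintext: bytes) -> bytes:
    """Encrypt with the device-layer key, then wrap with the organization-layer key,
    so that compromising a single key is not enough to recover the document."""
    inner = Fernet(device_key).encrypt(plaintext)
    return Fernet(org_key).encrypt(inner)

def decrypt_at_rest(ciphertext: bytes) -> bytes:
    inner = Fernet(org_key).decrypt(ciphertext)
    return Fernet(device_key).decrypt(inner)

if __name__ == "__main__":
    document = b"movement order: report to staging area at 0600"
    stored = encrypt_at_rest(document)
    assert decrypt_at_rest(stored) == document
    print(f"stored {len(stored)} encrypted bytes")
```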
15,251
2,022
"Access management, identity governance and privileged access features converge in new Okta cloud tools | VentureBeat"
"https://venturebeat.com/security/new-okta-cloud-tools-aim-to-elevate-identity-access-management-iam"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Access management, identity governance and privileged access features converge in new Okta cloud tools Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today’s workforce has no boundaries. Organizations are no longer stocked with full-time employees — they have a much larger ecosystem comprising a variety of contractors and partners. Ultimately, this means that more devices, from more locations, are accessing company resources than ever before. “In this boundaryless world, not only do you have to secure access to these resources from outside your own network and your own organizational boundary, but you have to be able to govern access to them,” said Sagnik Nandy, President and CDO for workforce identity at Okta. And, traditionally, companies have often invested in disjointed tools — and then quickly and unfortunately discovered that they aren’t able to keep pace with modern business. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “It requires too many custom integrations and handoffs, and also results in a poor user experience, which, in turn, hampers user adoption,” said Nandy. “Siloes and low-adoption rates limit visibility, and the security holes persist.” This is the quandary that Okta aims to solve with Okta Workforce Identity Cloud. The company unveiled the unified identity tool at its annual Oktane22 conference this week. Avoiding identity siloes The Workforce Identity Cloud provides a single control plane giving IT and security teams the ability to manage identity across enterprise resources and users, “which has become more and more challenging in a boundaryless world,” said Nandy. The tool unifies the “three pillars of modern identity management” into a single control pane: IAM, Identity Governance and Administration (IGA) and Privileged Access Management (PAM). It includes Okta Identity Governance, which simplifies the process of requesting and granting access to resources, allowing IT teams to ensure that only the right users have access to the specific resources. It also includes Okta Privileged Access, which secures highly-privileged credentials for admin and root accounts. And, it gives admins the necessary tools to bolster security for privileged resources, monitor and record privileged access, and run detailed compliance reports for auditors, said Nandy. 
The tool provides an orchestration layer that leverages automation, provides visibility and control of enterprise identities, and can pull in third-party signals, said Nandy. All told, Workforce Identity Cloud integrates across the security stack and helps IT teams govern access for all use cases, said Nandy. This can help eliminate trade-offs between user experience and security, and IT and workforces can become “more agile and productive.” Nandy emphasized the fact that Okta is independent and neutral, making it compatible with thousands of applications, users, devices, operating systems and infrastructure providers. And, the company continues to seek out new use cases for business challenges around identity, he said. “Given the rise of multi-cloud and the continued adoption of a broad and deep ecosystem of SaaS applications, identity solutions for privileged and non-privileged users really need to span the full landscape of technology choice, or risk creating the kinds of identity siloes that result in security holes,” said Nandy. The right access at the right level at the right time Identity and access management (IAM) is a framework for ensuring that the right users have the access they should have (or not) to an organization’s technology resources. And, with the average cost of a data breach at an all-time high of $4.35 million, demand for IAM tools like Okta’s continues to increase. Fortune Business Insights puts the market on track to reach $34.52 billion in 2028, up from $13.41 billion in 2021 (a CAGR of 14.5%). Okta — which competes for market share with Oracle, IBM, SailPoint and Azure, among others — seeks to elevate IAM to a new level, and also converge access management, identity governance and privileged access. Okta Workforce Identity Cloud is aimed at a broad swath of identity needs, but Nandy particularly pointed to the many inherent risks posed by standing privileges. That is, when privileged accounts or users have standing access to critical infrastructure and resources. Ultimately, these create more security vulnerabilities because they extend access to users who may no longer require it, making their user credentials targeted assets for threat actors. “We’ve seen a ton of attacks that have their origins in these kinds of standing privileges,” said Nandy. By integrating IGA and PAM with IAM, IT has more power and control over access management without compromising security or user experience, he said. Today’s technology environment is heterogeneous, so it is critical to integrate well with everything, said Nandy. However, most providers treat IGA, PAM and IAM as distinct rather than unified, which limits the devices and operating systems they can manage as part of a single platform spanning multiple operating systems, applications, devices and user types. But, he said, organizations should recognize that they do have the ability to improve experience, keep customers secure and enable app builders to focus on what is most important: innovating for their customers. It just takes the right mix of tools.
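One way teams reason about the standing-privilege problem Nandy describes is just-in-time access: every privileged grant is requested, time-bound and expires on its own, so nothing is held permanently. The sketch below is a toy model of that pattern, not Okta's API; class and resource names are invented:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AccessGrant:
    user: str
    resource: str
    expires_at: datetime  # every grant is time-bound; there is no permanent access

class JustInTimeAccess:
    """Privileged access is granted only on request and only for a bounded window."""

    def __init__(self, max_duration: timedelta = timedelta(hours=1)):
        self.max_duration = max_duration
        self._grants: list[AccessGrant] = []

    def request(self, user: str, resource: str, duration: timedelta) -> AccessGrant:
        # A real system would also require an approval workflow and an audit trail.
        duration = min(duration, self.max_duration)
        grant = AccessGrant(user, resource, datetime.now(timezone.utc) + duration)
        self._grants.append(grant)
        return grant

    def is_allowed(self, user: str, resource: str) -> bool:
        now = datetime.now(timezone.utc)
        return any(
            g.user == user and g.resource == resource and g.expires_at > now
            for g in self._grants
        )

if __name__ == "__main__":
    pam = JustInTimeAccess()
    pam.request("alice", "prod-db-admin", timedelta(minutes=30))
    print(pam.is_allowed("alice", "prod-db-admin"))  # True, only during the window
    print(pam.is_allowed("bob", "prod-db-admin"))    # False: no standing privilege
```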
"
15,252
2,022
"'Quiet quitting' poses a cybersecurity risk that calls for a shift in workplace culture  | VentureBeat"
"https://venturebeat.com/security/quiet-quitting-cybersecurity"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages ‘Quiet quitting’ poses a cybersecurity risk that calls for a shift in workplace culture Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Are your employees mentally checked out from their positions? According to Gallup , “quiet quitters,” workers who are detached and do the minimum required as part of their roles, make up at least 50% of the U.S. workforce. Unengaged employees create new security risks for enterprises as it only takes small mistakes, such as clicking on an attachment in a phishing email or reusing login credentials to enable a threat actor to gain access to the network. Considering that 82% of data breaches last year involved the human element or human error, security leaders can’t afford to overlook the risks presented by quiet quitting, particularly amid the Great Resignation , where employees expect greater work-life balance. Quiet quitting and insider threats While quiet quitting and under-engaged employees constitute an insider risk, they’re not necessarily a threat. Gartner draws a distinction between the two by arguing that “not every insider risk becomes an insider threat; however, every insider threat started as an insider risk.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Under Gartner’s definition, every employee, contractor or third-party partner can be considered an insider risk if they have credentials to access to corporate systems and resources, because they have the ability to leak sensitive information and intellectual property. As a result, organizations need to be prepared to prevent insider risks from growing into threats that leak regulated data. Part of that comes down to identifying those employees that have checked out. “It’s important to be aware of quiet quitting, so a quiet quitter doesn’t become a loud leaker. Leading indicators for quiet quitting include an individual becoming more withdrawn becoming apathetic towards their work,” Forrester VP Principal Analyst Jeff Pollard. “If those feelings simmer long enough, they turn into anger and resentment, and those emotions are the dangerous leading indicators of insider risk activity like data leaks and/or sabotage,” Pollard said. Unfortunately, employee-facilitated data leaks are exceptionally common. A recent report released by Cyberhaven found that nearly one in 10 employees will exfiltrate data over a six-month period. 
It also found that employees are much more likely to leak sensitive information in the two weeks before they resign. CISOs and security teams can’t afford to overlook this threat either, due to the prolonged damage caused by insider incidents, which Ponemon Institute estimates take an average of 85 days to contain and cost organizations $15.4 million annually. Considering work-life balance Of course, when addressing quiet quitting, it’s important to remember that it’s often difficult to draw the line between employees who are pursuing greater work-life balance and those who have checked out and are acting negligently. “While the term [quiet quitting] is conveniently alliterative and ripe for buzzworthiness, underneath it’s problematic and requires further definition. Are employees who are content with their current position and maintaining reasonable work-life boundaries quitting?” said Tessian CISO Josh Yavor. “A large portion of ‘quiet quitters’ may actually be some of our safest and most reliable employees, so let’s redefine ‘quiet quitters’ as only those who are wilfully disengaged and apathetic but staying just above the thresholds that would potentially lead to their dismissal,” Yavor said. When looking to mitigate the threats caused by that minority of disengaged and apathetic employees, it’s important not to assign blame, but to consider that their working environment itself could be toxic, with unreasonable expectations and deadlines or even workplace bullying and harassment. In this sense, quiet quitting isn’t just a challenge for security teams to address, but requires a company-wide effort to support employee wellness and work-life balance. The problem is that this can be immensely challenging in remote working environments, where there is often no clear separation between an employee’s home and professional life. Mitigating insider risks in remote working environments In remote and hybrid working environments, CISOs and other enterprise leaders need to be proactive about supporting employees to ensure that they’re not at risk of stress and burnout. “While quiet quitting is a relatively new term, it describes an age-old problem — workforce disengagement,” said Jon France, CISO of (ISC)2. “The difference this time around is that in a remote work environment, the signs may be a little harder to spot. To prevent employees from quiet quitting, it is important for CISOs and security leaders to ensure and promote connection and team culture,” France said. To help maintain a fulfilling working environment, France recommends that leaders have regular check-ins with their teams to maintain a strong work culture and provide access to regular social events and activities. This can help employees feel more engaged in their work. At the same time, it’s important to ensure that employees aren’t being overburdened with work that can lead to burnout. Active communication with employees is critical for teams to ensure that employees are engaged and comfortably handling the tasks they’re expected to complete. Addressing human risk In addition to improving employee engagement, security leaders should also look to mitigate human risk throughout the organization to reduce the likelihood of data leaks. One of the simplest solutions is to implement the principle of least privilege, ensuring that employees only have access to the data and resources they need to perform their function.
This means that if an unauthorized user does gain access to an account, or an insider attempts to leak information, the exposure to the organization is limited. Another approach is for organizations to offer security awareness training that teaches employees security-conscious behaviors, such as selecting strong passwords, and educates them on how to identify phishing scams. This can help to reduce the chance of credential theft and account takeover attempts. When implementing security awareness training, SANS Institute suggests that the program should be managed by a full-time dedicated individual, such as a Human Risk Officer or Security Awareness and Education Manager, who sits within the security team and reports directly to the CISO. This individual can take charge of helping the organization identify, manage and measure human risk in all its forms and kickstart cultural change. "
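Programs like the one SANS describes ultimately need ways to measure human risk. One simple, purely illustrative signal, echoing the Cyberhaven finding that leak activity jumps shortly before resignation, is to compare a user's recent outbound data volume against their own baseline; the thresholds and data below are invented for the example:

```python
from statistics import mean

def flag_unusual_egress(daily_mb_by_user: dict[str, list[float]],
                        recent_days: int = 14, multiplier: float = 3.0) -> list[str]:
    """Flag users whose average upload volume over the last `recent_days`
    is several times their own earlier baseline. Illustrative thresholds only."""
    flagged = []
    for user, daily_mb in daily_mb_by_user.items():
        if len(daily_mb) <= recent_days:
            continue  # not enough history to establish a baseline
        baseline = mean(daily_mb[:-recent_days])
        recent = mean(daily_mb[-recent_days:])
        if baseline > 0 and recent > multiplier * baseline:
            flagged.append(user)
    return flagged

if __name__ == "__main__":
    history = {
        "steady_worker": [20.0] * 90,
        "departing_soon": [20.0] * 76 + [250.0] * 14,  # sharp jump in the final two weeks
    }
    print(flag_unusual_egress(history))  # ['departing_soon']
```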
15,253
2,022
"New privacy and regulatory requirements are an opportunity to build consumer trust | VentureBeat"
"https://venturebeat.com/automation/new-privacy-and-regulatory-requirements-are-an-opportunity-to-build-consumer-trust"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Spotlight New privacy and regulatory requirements are an opportunity to build consumer trust Share on Facebook Share on X Share on LinkedIn Presented by Treasure Data The digital and regulatory landscape is constantly evolving, which impacts day-to-day marketing strategy and operations. In this VB On-Demand event, learn how to navigate regulatory changes while building consumer trust, leverage intelligent technology to meet marketing objectives, and more. Watch free on-demand! Across the world, the privacy legislative and regulatory landscape is in constant flux — and that’s not great for marketing strategy and operations. There’s Apple’s tracking transparency framework and Google’s third-party cookies drama. Comprehensive privacy legislation passed in California, Virginia, Colorado, Utah, and Connecticut. The American Data Privacy Protection Act (ADPPA) introduced in the U.S. Congress, plus the Federal Trade Commission proposing sweeping regulations to address commercial surveillance and lax security practices. Internationally, there’s the GDPR taking an interest in Twitter, privacy laws being passed in China and Australia, Brazil’s new cookie guidelines, and more. Marketers have to protect their brands by complying with these regulations, while at the same time find ways to ensure they’re meeting their company objectives. But that’s an opportunity, says Helen Huang, principal product manager, security and data privacy at Treasure Data. “Companies are taking a closer look and reevaluating data collection practices because it’s important to listen to what the consumer base is saying,” Huang says. “These changes are worrisome, but provides an opportunity for all of us to work together to be responsible custodians of first-party customer data and earn their trust.” Identity resolution and customer data platforms Identity recognition and resolution has become a particularly spicy topic since Google’s recent announcement about the deprecation of third-party cookies. It means that first-party data has become increasingly important to capture. “ First-party data holds more potential and more power than ever before,” said Jordan Abbott, chief privacy officer at Acxiom. “It has the chance to disrupt digital onboarders and substantially reduce what we call the ad tech tax by working directly with demand-side platforms, supply-side platforms and publishers.” First-party data will give brands more flexibility in the future no matter how regulatory issues play out, he added, and the ability to recognize users and tag them with a unique enterprise identifier will be critical to success going forward. 
Customer data platforms (CDPs) are crucial here, to collect and unify data about customers in real time, and to help create a more holistic customer view. “I think a foundation of identity recognition and resolution will ensure that brands and marketers will recognize the consumer earlier in their journey and allow them to treat the consumer in the best way possible at every step along the way, plus consistently deliver the types of personalized experiences that consumers increasingly come to expect,” he said. “I think it will also increase reach and accuracy, and optimize the MarTech investments.” The opportunity to secure customer trust Enforcement actions and class action lawsuits related to alleged privacy violations are incredibly expensive, as well as a substantial diversion of resources — and usually garner a lot of press. “No one wants to be on the front page of the New York Times,” Huang said. “It’s important for marketers to protect their brand and protect their reputation. When users or customers don’t trust a brand, the ramifications are quite drastic — a large percentage will just choose not to engage.” “Trust is key and it could stop a transaction dead in its tracks if the consumer doesn’t trust the business,” Abbott agreed. “Conversely, if the business is building trust, it can reduce, if not eliminate, friction altogether and speed the closing of a transaction.” Building trust requires hyper-transparency around what data is being collected, why it’s being used, for what purposes and with whom it’s being shared. Credential your data sources, so that you know the data that you’re licensing has been collected with appropriate permissions. Implement an ethical data use framework and privacy impact assessment program to objectively balance the benefits of using the data versus the potential risks and harms to the consumers that may arise, and then do everything possible to mitigate the risks that can’t be eliminated. “At the end of the day, trust should be about demonstrable accountability, not only saying what you do and doing what you say, but being able to prove it,” he said. “And if I do trust a brand, I’m going to have a higher propensity to buy more and spend more, so there are just so many benefits, from customer sentiment to bottom line and more,” Huang said. “Data privacy can be a competitive differentiator for the people that embrace it as an opportunity.” To learn more about the data privacy regulations and requirements that are impacting marketers, real-world examples and a look at the future of the regulatory landscape, don’t miss this VB On-Demand event! Watch free on-demand here! Agenda: How the accelerating marketplace and regulatory changes will impact your marketing strategy; how to build consumer trust and connected experiences with enterprise-level data governance, safeguards and a smart CDP; top predictions on regulations and enforcements in 3-5 years. Presenters: Jordan Abbott, chief privacy officer, Acxiom; Helen Huang, principal product manager, security and data privacy, Treasure Data; Victor Dey, tech editor, VentureBeat (moderator). "
15,254
2,022
"A practical approach to building resilience with zero trust | VentureBeat"
"https://venturebeat.com/security/a-practical-approach-to-building-resilience-with-zero-trust"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community A practical approach to building resilience with zero trust Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Ransomware has easily become one of the most notorious enterprises of the 21st century — gleaning unprecedented success in the past 24 months by targeting vulnerabilities in the cloud and across the software supply chain, attacking industrial processes and targeting unsuspecting victims on holidays and weekends. What’s worse, as our hyperconnected world breeds new and emerging threat vectors daily, we know that breaches today are inevitable and cyberattacks are the new norm — they’re happening as we speak. Research shows that 76% of organizations have been the victim of a ransomware attack in the past two years, and 82% have paid at least one ransom. Spending on cybersecurity is higher than ever, yet we’re still hemorrhaging losses to ransomware — and not just financially. Attacks like on Colonial Pipeline and SolarWinds reaffirm the societal and economic implications of ransomware, and we continue to witness one devastating attack after another on U.S. critical infrastructure and other essential civilian sectors (think education and healthcare). Far too many organizations are still sitting ducks in the eye of a cyber storm, so apathy and lack of action are unacceptable. Business leaders must act proactively to bolster cyber resilience before it’s too late. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Assume breach, improve resilience, control impact A decade ago, it was enough for business leaders to focus solely on bolstering prevention at the perimeter defenses (VPNs, firewalls). Now, in the wake of accelerated digital transformation efforts — largely spurred by the pandemic and today’s era of hybrid work — the attack surface has widened significantly, leaving more endpoints, cloud environments and potential exploitation avenues open and available for bad actors. With organizations now managing a hybrid workforce, sprawling hybrid IT estates, and widening supply chains, it’s no longer a question of if bad actors will defeat perimeter defenses; it’s a question of when. That’s why today’s industry-wide focus on “bolstering resilience” has never been more timely or essential. One of the resilience frameworks that’s been thrust even further into the cyber spotlight in the past 24 months is zero trust. 
This cybersecurity approach was first introduced by Forrester over a decade ago. It is a framework predicated on the principles of “assume breach” and “least privilege.” Under a zero trust approach, organizations are encouraged to restrict access to a select and necessary few (least privilege) and assume that everything will inevitably be breached (assume breach). The duality of the zero trust mindset recognizes the certainty of a breach, while ensuring that organizations are rigorously safeguarding access and mitigating exposure proactively. We like to call this “breach risk reduction.” With zero trust practices, technologies and policies in place, organizations are better positioned to address cyber incidents quickly (reducing downtime) and mitigate accompanying business and operational impacts. But there are still steps that agencies, organizations and the federal government must take in order to help the private and public sectors maximize resilience. Zero trust resilience starts with education and alliances In today’s hypercomplex, dynamic, cloud-first world, cyber resilience won’t work unless we come to a collective agreement on our best path forward. A great deal of confusion remains within the federal government regarding cybersecurity mandates and best practices. While President Joe Biden mandated a federal move to zero trust architecture in his Executive Order last May (reiterating the significance of the zero trust framework earlier this year), multiple agencies, including the Cybersecurity and Infrastructure Security Agency (CISA), National Institute of Standards and Technology (NIST), and the U.S. Department of Defense have all adopted separate and varying zero trust best practices. Organizations are increasingly recognizing cybersecurity as a critical imperative, but there’s no unified agreement on what zero trust should look like in action. The lack of a single plan creates confusion and stunts our ability to educate, which ultimately hinders resilience efforts in general. In order to become more durable in cyberspace, we must build consensus on an effective plan — a playbook of sorts — and present a unified front for organizations to follow as they look to enhance foundational resilience efforts with zero trust. Continued cybersecurity education, at a more general level, is also essential to further ongoing resilience initiatives. In June, President Biden signed into law the “State and Local Government Cybersecurity Act of 2021”, which requires the National Cybersecurity and Communications Integration Center (NCCIC) to provide training, conduct exercises and promote cybersecurity education and awareness across all lower levels of government. Additionally, earlier this year, the “Cybersecurity Grants for Schools Act of 2022” was introduced, allowing CISA to award grants for cybersecurity education and training programs at elementary and secondary education levels. This is the federal cyber momentum we need. As the hybrid attack surface around us continues to evolve and widen, we need to continue taking steps in the right direction — and we need to move faster. The enemy of a good plan has always been a perfect plan. While we’re looking for perfection, the attacker is always moving. While we’re debating, they’re attacking. We must incrementally get safer and build resilience daily. The road ahead Ransomware and cyberattacks aren’t going away. In fact, the threat landscape is changing, with bad actors rebranding and innovating more aggressively than ever.
But companies, government institutions and other organizations can catalyze resilience efforts by continuing to educate on cybersecurity best practices, issuing formalized guidance on zero trust and other core resilience frameworks — and ultimately, taking action. As our world becomes increasingly hyperconnected, resilience initiatives like zero trust are only as strong as the weakest link in our global chain. And as our adversaries continue to move more aggressively in cyberspace, there has never been a better time for all of us to get on the same page and shore up our resilience than right now. Andrew Rubin is CEO & cofounder of Illumio. "
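In operational terms, "assume breach" plus "least privilege" usually means a policy decision point that evaluates every request against identity, device posture and an explicit allowlist, never against network location. The sketch below is a deliberately simplified illustration of that idea; the fields, resources and rules are hypothetical, not a reference architecture:

```python
from dataclasses import dataclass

@dataclass
class Request:
    principal: str
    device_compliant: bool   # e.g., patched, disk-encrypted, endpoint agent running
    mfa_verified: bool
    resource: str

# Explicit allowlist of which principals may reach which resources.
# Anything not listed is denied by default (least privilege).
POLICY = {
    "payments-db": {"finance-app"},
    "build-server": {"ci-runner", "release-engineer"},
}

def authorize(req: Request) -> bool:
    """Assume breach: never trust network location; check every request explicitly."""
    allowed = POLICY.get(req.resource, set())
    return req.device_compliant and req.mfa_verified and req.principal in allowed

if __name__ == "__main__":
    print(authorize(Request("release-engineer", True, True, "build-server")))  # True
    print(authorize(Request("release-engineer", True, True, "payments-db")))   # False
    print(authorize(Request("ci-runner", False, True, "build-server")))        # False
```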
15,255
2,022
"Protecting your organization from rising software supply chain attacks | VentureBeat"
"https://venturebeat.com/security/protecting-your-organization-from-rising-software-supply-chain-attacks"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Protecting your organization from rising software supply chain attacks Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Attackers find it hard to resist the lure of software supply chains : They can all-too quickly and easily access a wide breadth of sensitive information — and thus gain juicier payouts. In just one year alone — between 2020 and 2021 — software supply chain attacks grew by more than 300%. And, 62% of organizations admit that they have been impacted by such attacks. Experts warn that the onslaught isn’t going to slow down. In fact, according to data from Gartner , 45% of organizations around the world will have experienced a ransomware attack on their digital supply chains by 2025. “Nobody is safe,” said Zack Moore, security product manager with InterVision. “From small businesses to Fortune 100 companies to the highest levels of the U.S. government — everyone has been impacted by supply chain attacks in the last two years.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Examples aplenty The SolarWinds attack and Log4j vulnerability are two of the most notorious examples of software supply chain attacks in recent memory. Both revealed how pervasive software supply chain attacks can be, and in both instances, the full scope of the ramifications is still yet to be seen. “SolarWinds became the poster child for digital supply chain risk,” said Michael Isbitski, director of cybersecurity strategy at Sysdig. Still, he said, Microsoft Exchange is another example that has been just as impacting, “but was quickly forgotten.” He pointed out that the FBI and Microsoft continue to track ransomware campaigns targeting vulnerable Exchange deployments. Another example is Kaseya , which was breached by ransomware agents in mid-2021. As a result, more than 2,000 of the IT management software provider’s customers received a compromised version of the product, and between 1,000 and 1,500 customers ultimately had their systems encrypted. “The immediate damages of an attack like this are immense,” said Moore. “Even more dangerous, however, are the long-term consequences. The total cost for recovery can be massive and take years.” So why do software supply chain attacks keep happening? The reason for the continued bombardment, said Moore, is increasing reliance on third-party code (including Log4j). 
This makes distributors and suppliers ever more vulnerable, and vulnerability is often equated with a higher payout, he explained. Also, “ransomware actors are increasingly thorough and use non-conventional methods to reach their targets,” said Moore. For example, ransomware agents target IT management software systems and parent companies. Then, after breaching, they leverage this relationship to infiltrate the infrastructure of that organization’s subsidiaries and trusted partners. “Supply chain attacks are unfortunately common right now in part because there are higher stakes,” said Moore. “Extended supply chain disruptions have placed the industry at a fragile crossroads.” Low cost, high reward Supply chain attacks are low-cost, can take minimal effort and have the potential for high reward, said Crystal Morin, threat research engineer at Sysdig. And tools and techniques are often readily shared online, as well as disclosed by security companies, which frequently post detailed findings. “The availability of tools and information can provide less-skilled attackers the opportunities to copycat advanced threat actors or learn quickly about advanced techniques,” said Morin. Also, ransomware attacks on the supply chain allow bad actors to cast a wide net, said Zack Newman, senior software engineer and researcher at Chainguard. Instead of spending resources attacking one organization, a breach of part of a supply chain can affect hundreds or thousands of downstream organizations. On the flip side, if an attacker is targeting a specific organization or government entity, the attack surface changes. “Rather than wait for that one organization to have a security issue, the attacker just has to find one security issue in any of their software supply chain dependencies,” said Newman. No single offensive/defensive tactic can protect all software supply chains Recent attacks on the supply chain highlight the fact that no single tool provides complete defense, said Moore. If just one tool in an organization’s stack is compromised, the consequences can be severe. “After all, any protection framework built by intelligent people can be breached by other intelligent people,” he said. Defense in depth is necessary, he said; this should include a layered security policy, edge protection, endpoint protection, multifactor authentication (MFA) and user training. Robust recovery capabilities, including properly stored backups — and ideally, uptime experts ready to mobilize after an attack — are also a must-have. Without knowledgeable people correctly managing and running them, layered technologies lose their value, said Moore. Or, if leaders don’t implement the correct framework for how those people and technologies interact, they leave gaps for attackers to exploit. “Finding the correct combination of people, processes, and technology can be challenging from an availability and cost standpoint, but it’s critical nonetheless,” he said. Holistic, comprehensive visibility Commercial software is usually on security teams’ radar, but open source is often overlooked, Morin pointed out. Organizations must stay on top of all software they consume and repurpose, including open-source and third-party software. Sometimes engineering teams move too quickly, she said, or security is disconnected from the design and delivery of applications using open-source software.
But, as was shown with issues in dependencies like OpenSSL, Apache Struts and Apache Log4j, exploitable vulnerabilities quickly propagate throughout environments, applications, infrastructure and devices. “Traditional vulnerability management approaches don’t work,” said Morin. “Organizations have little to no control over the security of their suppliers outside of contractual obligations, but these aren’t proactive controls.” Security tooling exists to analyze applications and infrastructure for these vulnerable packages pre- and post-delivery, she said, but organizations have to ensure they’ve actually deployed it. But “the other security best practices continue to apply,” she said. Expanded security focus Morin advised: Regularly update and improve detections. Always patch where — and as quickly as — possible. Ask vendors, partners and suppliers what they do to protect themselves, their customers and sensitive data. “Stay on top of them too,” she said. “If you see issues that could impact them in your regular security efforts, bug them about it. If you’ve done your due diligence, but one of your suppliers hasn’t, it’ll sting that much more if they get compromised or leak your data.” Also, risk concerns extend beyond just traditional application binaries, said Isbitski. Container images and infrastructure-as-code are targeted with many varieties of malicious code, not just ransomware. “We need to expand our security focus to include vulnerable dependencies that applications and infrastructure are built upon,” said Isbitski, “not just the software we install on desktops and servers.” Ultimately, said RKVST chief product and technology officer Jon Geater, businesses are beginning to gain greater appreciation for what becomes possible “when they implement integrity, transparency and trust in a standard, automated way.” Still, he emphasized, it’s not always just about supply chain attacks. “Actually, most of the problems come from mistakes or oversights originating in the supply chain, which then open the target to traditional cyberattacks,” said Geater. It’s a subtle difference, but an important one, he noted. “I believe that the bulk of discoveries arising from improvements in supply chain visibility next year will highlight that most threats arise from mistake, not malice.” Don’t just get caught up on ransomware And, while ransomware concern is front and center as part of endpoint security approaches, it is only one potential attack technique, said Isbitski. There are many other threats that organizations need to prepare for, he said — including newer techniques such as cryptojacking, identity-based attacks and secrets harvesting. “Attackers use what’s most effective and pivot within distributed environments to steal data, compromise systems and take over accounts,” said Isbitski. “If attackers have a means to deploy malicious code or ransomware, they will use it.” Common techniques necessary Indeed, Newman acknowledged, there is so much variety in terms of what constitutes a supply chain attack that it’s difficult for organizations to understand what the attack surface may be and how to protect against attacks. For example, at the highest level, a traditional vulnerability in the OpenSSL library is a supply chain vulnerability. An OSS maintainer getting compromised, or going rogue for political reasons, is a supply chain vulnerability. And an OSS package repository hack or an organization’s build system hack are supply chain attacks.
“We need to bring common techniques to bear to protect against and mitigate for each and every type of attack along the supply chain,” said Newman. “They all need to be fixed, but starting where the attacks are tractable can yield some success to chip away.” In proactively adopting strong policies and best practices for their security posture, organizations might look to the checklist of standards under the Supply-chain Levels for Software Artifacts (SLSA) framework, Newman suggested. Organizations should also enforce strong security policies across their developers' software development lifecycle. Encouraging software supply chain security research Still, Newman emphasized, there is much to be optimistic about; the industry is making progress. “Researchers have been thinking about solving software supply chain security for a long time,” said Newman. This goes back to the 1980s. For instance, he pointed to emerging technologies from the community such as The Update Framework (TUF) and the in-toto framework. The industry's emphasis on software bills of materials (SBOMs) is also a positive sign, he said, but more needs to be done to make them effective and useful. For example, SBOMs need to be created at build time rather than after the fact, as “this type of data will be immensely valuable in helping prevent attack spread and impact.” Also, he pointed out, Chainguard co-created and now maintains one dataset of malicious compromises of the software supply chain. This effort revealed nine major categories of attacks and hundreds or thousands of known compromises. Ultimately, researchers and organizations alike “are looking at ways to solve these issues once and for all,” said Newman, “versus taking the common band-aid approaches we see today in security.” "
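As a companion to Newman's point in the article above that SBOMs should be created at build time rather than after the fact, here is a hedged Python sketch of the underlying idea: capture what is actually present in the environment at the moment an artifact is built. It is an assumption-laden toy, not an SPDX or CycloneDX generator.

# Illustrative only: the build-time inventory idea behind an SBOM, reduced to a few
# lines. It records the packages present in the environment at the moment of the
# build and emits them as JSON. Real SBOM formats such as SPDX and CycloneDX carry
# much more (licenses, hashes, relationships); this is a sketch, not a replacement.
import json
import sys
from datetime import datetime, timezone
from importlib.metadata import distributions

def build_inventory() -> dict:
    """Return a minimal, SBOM-like inventory of the current Python environment."""
    components = sorted(
        {(dist.metadata["Name"], dist.version) for dist in distributions() if dist.metadata["Name"]}
    )
    return {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "python": sys.version.split()[0],
        "components": [{"name": name, "version": version} for name, version in components],
    }

if __name__ == "__main__":
    # A CI step could attach this output to the build artifact it ships.
    print(json.dumps(build_inventory(), indent=2))

Attaching an inventory like this to every artifact gives downstream consumers something concrete to query the next time an OpenSSL- or Log4j-class advisory lands.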
15,256
2,022
"Upskilling: The best weapon tech workers have to stay competitive in a changing job market | VentureBeat"
"https://venturebeat.com/automation/upskilling-the-best-weapon-tech-workers-have-to-stay-competitive-in-a-changing-job-market"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Upskilling: The best weapon tech workers have to stay competitive in a changing job market Share on Facebook Share on X Share on LinkedIn Presented by PMI The boom of post-pandemic growth in the tech industry made plenty of companies rich, but it was never sustainable. With earnings weakening, along with a looming recession and the ongoing battle against inflation, companies are recalibrating their focus and tightening their belts. Salaries are often the first place companies look to cut costs, so recent layoff announcements aren’t surprising. Amidst the upheaval and uncertainty, there’s opportunity for tech workers. As companies adapt and change course, new demands for old roles are growing, new positions are being created and new skills are in high demand The talent gap is already a concern for many tech industries, with McKinsey reporting that 87% of companies are currently facing talent shortages , or expect to within a few years. This is where upskilling comes in — a powerful way in which job seekers and employees alike can prove their value to organizations that are urgently in need of new skills and organizational transformation, and to help workers weather impending layoffs and future-proof their careers. It’s how to stay competitive, too. Despite the high-profile layoffs in the tech sector, economists report that the job market is still strong and larger than ever , though it’s growing increasingly crowded. “In a rapidly changing world, it’s crucial that workers arm themselves with the mindset, skills, tools and customer understanding that make them powerful assets for companies in need of transformation,” says Sierra Hampton-Simmons, vice president of products at Project Management Institute (PMI). “But more importantly, upskilling is a strategic way for a worker to invest in their own growth and engagement, and take charge of their career trajectory.” Finding the right skills building programs To meet the demand for upskilling, the number of learning programs is growing. PMI offers resources that help individuals strategically upskill across a variety of areas, from collaborative problem-solving and spearheading positive organizational change, to deploying and scaling low-code and no-code software platforms. “We have a highly engaged global community of practitioners, inclusive of project, program and portfolio managers from some of the world’s largest and most impactful organizations,” Hampton-Simmons says. 
“By constantly listening and identifying trends through their first-hand experiences, we can stay ahead of the needs of the future workforce and offer opportunities for professionals to build the most impactful skills.” Here’s a look at some of the PMI products that can help future-proof professionals who are looking to gain a competitive edge for climbing the ladder or identifying their next opportunity. Project Management Professional (PMP) PMI’s 2021 Talent Gap report found the global economy needs 25 million new project professionals by 2030 as projects are key to solving future challenges. The Project Management Professional (PMP) ® certification proves a project manager’s proficiency in various project management practices, including spearheading tangible change, and driving business results. With a PMP ® certification, professionals can future-proof their careers and prove their expertise and resiliency in using technical project management skills to lead change and manage through uncertainty. “Projects are how work gets done, thus project managers and project professionals are critical to how strategy is executed as they play a significant role in helping organizations meet objectives and deliver value now and in the future,” Hampton-Simmons says. “PMI helps project professionals upskill themselves across industries and at any stage of their careers.” PMI Citizen Developer Citizen developers are transforming how employees work. Low-code and no-code platforms give non-technical employees with little to no coding experience, but significant organizational knowledge, the power to solve real business problems to build dynamic apps — putting the power of innovation directly into their hands. Companies are racing to find ways to brings new ideas to market faster or streamline internal processes but often face over-burdened IT teams and a talent shortage. By embracing citizen development in a safe, secure and scalable way, employees — both current and prospective — immediately make themselves more valuable. The PMI Citizen Developer™ education suite provides the guiding principles of citizen development, as well as the tools and methodologies needed to efficiently create effective and scalable applications using low-code and no-code platforms. Organizational Transformation Take a leading role in your company’s transformation efforts, or gain a competitive edge in the job marketplace, by learning the best practices associated with managing and supporting change. With the Organizational Transformation ® ecourse series, PMI has created a pathway for individuals to learn the skills needed to become transformation professionals. Organizational Transformation also gives businesses the tools they need to create a culture of change through educating their workforce on the best practices of managing and supporting organization-wide transformation. PMI Wicked Problem Solving Today’s increasingly complex problems make clear, efficient planning and decision-making a mounting challenge. PMI Wicked Problem Solving ® helps businesses and individuals elevate their ability to solve problems of all sizes through making “space” to visualize the problems and collaborate on their solutions, fostering more consistent innovation. With PMI Wicked Problem Solving, you’ll learn to apply a modern operating system for creative collaboration that drives tangible actions. These actions will provide better opportunities to turn problems into solutions, opportunities into innovations, and ideas into positive outcomes. 
To learn more about all these programs and more, visit PMI.org. "
15,257
2,022
"Use digital transformation to seize market share while others go on the defensive | VentureBeat"
"https://venturebeat.com/enterprise-analytics/use-digital-transformation-to-seize-market-share-while-others-go-on-the-defensive"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Use digital transformation to seize market share while others go on the defensive Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As we combat the current inflation crisis and market downturn, it is becoming more and more difficult for businesses to remain competitive. Increasingly, organizations are looking for ways to cut costs and optimize existing assets and processes. For many, this will mean pausing all non-essential projects, digital transformation initiatives included. If your organization already has working digital strategies and processes, spending money on newer, shinier digital projects feels counterintuitive. Unfortunately, this intuition is flawed. It is a relic of a recently bygone age to assume that digital projects are anything but essential during a market downturn. The business world is becoming more digital every day, despite the market. With the threat of recession looming, Gartner still predicts worldwide IT spending to reach $4.4 billion this year. Moreover, in a 2020 survey, Deloitte found that more digitally mature companies reported 45% revenue growth compared to 15% for lower maturity organizations. If able, organizations must embrace digital change or risk being left behind. Addressing digital strategy While keeping up with digital trends is important, how organizations go about creating a digital strategy can be the difference between keeping up and getting ahead of competitors. However, what level and kind of digital work businesses should undertake will vary immensely between organizations. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Pragmatically, this will inevitably mean investing a lot of time into research. For example, some organizations will benefit immensely from investing in automating their internal processes to save on labor costs. For others, investing the bulk of the digital budget into digital brand building, demand and lead gen programs, building content leadership and leveraging social media platforms to generate traffic would be more worthwhile. Researching how competitors spend their digital budgets can also help inform your strategy, as you can gather market-tested data on what is and what is not worth the investment. This strategy can sit alongside broader internal and external data gathering. 
Although it can be painstaking to sift through, the more data collected on internal processes, staff satisfaction, digital customer experiences and end-user trends, the more targeted and effective the digital strategy will be. Taking advantage of expertise Something else to bear in mind is that many businesses do not effectively use their available expertise. This may have something to do with the fact that many business leaders aren’t aware of their knowledge gaps. In our own research, we have found that 73% of UK family firm board members have no digital competencies. Concerningly, Salesforce has found similar knowledge gaps. In a 2022 Salesforce survey of more than 23,000 workers across 19 countries, 54% of senior leadership respondents marked that they had the digital skills needed today. However, less than half of the managers and individual employee respondents agreed, meaning that senior leaders weren’t aware that they lacked adequate digital expertise. Part of being digitally aware is knowing that the digital world moves at a fast pace and most senior business leaders do not have the time or capacity to keep up constantly. From there, becoming digitally mature involves integrating digital components across the organization and its business strategy. For this to happen, organizations need to start consulting with CTOs and digital teams on a much more serious level. This could mean involving senior IT leaders in the company’s board or creating a separate digital board to manage digital projects at a senior level. It could even mean looping in external consultants. We are amid a digital skills crisis, but taking digital offerings seriously can help to retain IT staff and build out your customer offering while competitors pause their digital investments. Getting ahead There are clear benefits to digitalization. For example, automating your internal processes with AI can save on labor costs; conducting website testing can drive traffic; and improving digital marketing channels can improve customer engagement. And, if there was ever a moment for digitalizing, it’s when your competition is pulling back. Moreover, as valuations are under increased pressure, digital strength can make all the difference. Digitalization is also seen as a key value driver for investors, as we are currently seeing strong links between financial performance and digital maturity. Investors are increasingly conducting digital due diligence as a valuation factor, so the smallest step can make the most significant difference. As businesses battle factors like inflation, high-interest rates and low EBITDA arbitrage, investors are battling uncertainty as they look to allocate vital funds. In the long term, continuity is critical to investors as, at the end of the day, the safety of their investments is never guaranteed. Continuity is therefore a vote of confidence. Organizations that can continue building their digital strategies and innovating will encourage further essential investment. Assuming that there is a budget available, digital transformation can be a safe route to seizing market share while others go on the defensive. The goal could be improving internal and external digital services; encouraging a high valuation or investment figure; long-term cost savings; or any other combination of reasons. Regardless, a digital strategy can affect not only whether a business survives the downturn but how well it survives the downturn. 
Stefan Sambol is cofounder and partner at OMMAX. "
15,258
2,022
"If you don’t take the lead on low-code/no-code, your employees might | VentureBeat"
"https://venturebeat.com/programming-development/if-you-dont-take-the-lead-on-low-code-no-code-your-employees-might"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Event If you don’t take the lead on low-code/no-code, your employees might Share on Facebook Share on X Share on LinkedIn During VentureBeat’s latest Low-Code/No-Code Summit , ServiceNow CDIO Chris Bedi and GM/VP of ServiceNow App Engine, Marcus Torres, discussed how organizations can successfully build, scale and govern low-code programs, while also keeping them straightforward for everyday employees. One thing is certain: low-code/no-code isn’t going anywhere, and companies need to be prepared. Gartner predicts that by 2024, 65% of all app development will be done via low code. That’s because the demand for digital transformation is accelerating, and organizations need to find a way to keep pace, and keep competitive. “IDC has a stat out there that says 750 million new apps need to be built by 2025 ,” Bedi said. “Centralized tech organizations are not going to be able to manage it. The dev capacity of software engineers — that is a constraint on digital transformation. And low-code and no-code development is an unlock for that constraint because the demand to automate more, drive efficiency, drive productivity, create experiences to serve your customers and talent — the demand has never been higher.” In the new age of digital transformation and the new age of work, development is a team sport, bringing together the citizen developer and the traditional developers, Torres said. “When you get that, what do you create? You actually create teamwork,” he said. “You create context across different lines of business, and that helps your business scale as well. The only way we’re able to do that is to provide an amazing experience that allows people to easily learn, easily use, easily innovate, and then keep scaling, but scale in a way that’s really a benefit for the whole business.” For instance, companies like Bayer are streamlining the complexity of compliance and legal to give employees and business teams a single seamless experience, Torres said. The app they developed using ServiceNow’s technology, the Now platform and App Engine, produced over 30,000 requests to the compliance and legal teams that could be handled with automation at a rate of about 80%. “Out of those 30,000, there are 24,000 that require no intervention, and it just happens,” he explained. “That’s the way the world works, and that’s how we really want to see all of our customers going forward.” Another customer, Novant Health, set a team of citizen developers loose to go innovate, creating more than 80 apps, increasing their overall dev capacity by 40%. 
Launching a citizen developer initiative A formal citizen developer initiative is crucial, Bedi added, for a number of reasons. First, talent today in the workforce doesn't want to wait in line for a centralized tech org to fix nagging issues, and companies need to empower talent to digitize their own work. Secondly, the citizen developer-traditional developer partnership is critical to drive modern business outcomes and accelerate digital transformation. And third, if left unchecked and in the hands of any employee who learned about low-code and no-code citizen development apps, individual apps for individual teams will proliferate madly, slowing the speed of innovation and impeding growth. In other words, shadow IT, which breaks down a company's ability to scale. “That's really the risk of not doing anything in regard to a low-code initiative, because people don't want to wait in line. People have options,” Torres agreed. “They can go to a website, swipe a credit card, and be off to the races, but at the end of the day, the biggest cost to developing applications isn't the development. It's the maintenance. It's the support over time.” “You can try to block it and say, no, it's too risky for us,” Bedi added. “That's a losing strategy because employees will find a way to get stuff done because they have to get stuff done.” They urge organizations to partner with citizen developers, create the right development and innovation framework and put guardrails and governance in place, so that these developers are building the right way with the right visibility and the right model to scale and support over time. “That's how you get the capacity to scale and, most importantly, the agility the business needs,” Torres said. “Nobody wants to wait in line because as a business owner, as a manager, as a worker, I just need to get certain things done, and I can't wait in line because that's how I support our customer and they can't wait either.” “Organizations have to embrace it, and those that do will start to see their digital transformation accelerate and their employees be happier,” he added. "
15,259
2,022
"If we're to have a future filled with EVs, we need highly dependable electronics | VentureBeat"
"https://venturebeat.com/automation/if-were-to-have-a-future-filled-with-evs-we-need-highly-dependable-electronics"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored If we’re to have a future filled with EVs, we need highly dependable electronics Share on Facebook Share on X Share on LinkedIn Presented by Infineon The motor vehicle industry is in the middle of a revolution, rapidly transforming from being largely electromechanical to being largely electronic. The semiconductor content in vehicles has been gradually increasing for quite some time, but in recent years demand for automotive-grade integrated circuits (ICs) and devices has increased exponentially with the rapid growth in driver assistance, multimedia, convenience features, and crucially, the growing popularity of electric vehicles (EVs) and the broader trend of vehicle electrification. Today’s automobile might contain anywhere from 1,400 to over 3,000 individual semiconductor devices depending on the type of vehicle and its features, and that number is rapidly growing as automakers add more processors, controllers, sensors and other devices. We estimate the value of power devices alone in each EV has increased from roughly $200 a few years ago to over $500 or $600 today, and still growing fast. Take a closer look at the new types of semiconductor components in electric vehicles, the most important of which are power semiconductors, especially the inverter at the heart of electric vehicles. As for driver assistance, leading-edge ICs and sensors are making advanced driver-assist systems (ADAS) possible, including features such as lane-change warnings, collision avoidance, and automated cruise control in limited circumstances. The incorporation of ever more sophisticated digital dashboards and in-cabin entertainment systems also increases the auto industry’s reliance on semiconductors. Changes in the market The skepticism that greeted modern EVs when they were introduced more than 20 years ago is rapidly being eroded by rising fuel prices, the need (and desire) for more sustainable means of transport, and the expansion of vehicle charging infrastructure. In response, governments are continuing to roll out policies and measures to encourage EV adoption. Two recent examples include the Biden Administration earmarking an initial $5 billion to help build EV charging infrastructure in 35 U.S. states, while the important market of California said it will ban the sales of new internal combustion engine (ICE) vehicles starting in 2035. EV and autonomous vehicle (AV) start-ups may have paved the way, but now all the major automotive companies are building EVs and most are experimenting with AVs. 
To increase acceptance of these new vehicles, automakers must drive down costs and must also build and maintain trust. What happens next To build on the initial success of EVs and e-mobility, vehicle electronics must be demonstrated to be secure and reliable. Power devices will perform key roles in creating those assurances. Most charging stations for EVs supply AC (alternating current) power (more expensive and higher-power fast-charging stations supply DC power), and EV motors use AC. However, EV batteries utilize DC (direct current) power, and EVs therefore require onboard chargers to convert source AC to DC to recharge batteries, and inverters to convert battery DC to AC for vehicle motors and propulsion. EV batteries are made of multiple battery cells connected in series and in parallel. Electronic battery management systems (BMS) are critical to monitor and balance the charge of individual battery cells in a pack, to ensure safe operation and a long lifetime of the EV battery pack (a minimal illustrative sketch of this kind of cell monitoring appears below). Efficient power conversion and BMS improve overall EV performance, charging speed and battery life. Those qualities also directly affect range — how far a vehicle can go before requiring a recharge. Driver assist and autonomous driving The industry is also focusing on ADAS features, relying on highly available systems which require dependable electronics and systems which always sense, always compute, always act and are always connected and powered. Demonstrating performance, reliability and safety will be key to earning the trust of the public. High-quality components that consistently deliver the best performance over the lifetime of a vehicle are prerequisites for the effectiveness and reliability not only of safety features but all automotive functions. This will allow EV makers to confidently assure their vehicles are safe, reliable and durable, while also assuaging range anxiety. A technology partner who can deliver dependable electronics is the foundation for consumer trust. Indeed, as the automotive leader in dependability, quality and functional safety, Infineon delivers the dependable electronics which critical systems need to rely on. The company also offers dependable semiconductors for many applications in the vehicle: from high-power IGBTs and SiC to dependable driver assistance systems, secure networking and safe microcontrollers. Trust also extends beyond the quality and reliability of the products Infineon provides; being a consistent supplier of these products also counts. In order to meet such high-quality requirements, the company's automotive chips are evaluated with the most rigorous standards from design to mass production. After all, both customers and consumers must have complete faith that these products can be relied upon, and that users will be safe in motor vehicles now and in the future. Mathew Anil is Marketing Director, VMO at Infineon Technologies. "
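The article above describes the battery management system as the component that monitors and balances the charge of individual cells. As a rough, hedged illustration of that monitoring logic only, the Python sketch below flags cells outside an assumed safe voltage window and strings whose cells have drifted apart; the limits, the threshold and the example readings are all hypothetical.

# Illustrative only: the cell-monitoring logic a battery management system performs,
# reduced to a sketch. The voltage limits, the balancing threshold and the example
# readings are assumptions; production BMS firmware runs on embedded hardware and
# also tracks temperature, current and state of charge.
from dataclasses import dataclass

@dataclass
class CellLimits:
    v_min: float = 3.0        # assumed lower safe bound per cell, in volts
    v_max: float = 4.2        # assumed upper safe bound per cell, in volts
    max_spread: float = 0.05  # assumed spread above which balancing is requested

def check_pack(cell_voltages: list[float], limits: CellLimits = CellLimits()) -> dict:
    """Return a simple health report for one series string of cells."""
    spread = max(cell_voltages) - min(cell_voltages)
    return {
        "over_voltage": [i for i, v in enumerate(cell_voltages) if v > limits.v_max],
        "under_voltage": [i for i, v in enumerate(cell_voltages) if v < limits.v_min],
        "needs_balancing": spread > limits.max_spread,
        "spread_volts": round(spread, 3),
    }

if __name__ == "__main__":
    # Example reading for a six-cell string in which cell 3 has drifted low.
    print(check_pack([3.92, 3.91, 3.93, 3.78, 3.92, 3.90]))

In the example reading, cell 3 sits roughly 0.15 V below its neighbors, so the report requests balancing even though every cell is still inside the assumed safe window.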
15,260
2,022
"The world, and today's employees, need quantum computing more than ever | VentureBeat"
"https://venturebeat.com/datadecisionmakers/the-world-and-todays-employees-need-quantum-computing-more-than-ever"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest The world, and today’s employees, need quantum computing more than ever Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Quantum computing can soon address many of the world’s toughest, most urgent problems. That’s why the semiconductor legislation Congress just passed is part of a $280 billion package that will, among other things, direct federal research dollars toward quantum computing. Quantum computing will soon be able to: Save businesses on fuel and lower carbon footprints by enabling optimal routing. Lead to materials science advances that enable homeowners to buy and install highly efficient solar cells to power their homes. This is valuable as extreme heat across the planet is driving record power consumption and straining power grids. Increase the accuracy of credit risk analysis, giving lenders additional confidence to loan money to underserved populations. Solving the unsolvable The economy and the environment are clearly two top federal government agenda items. Congress in July was poised to pass the most ambitious climate bill in U.S. history. The New York Times said that the bill would “pump hundreds of billions of dollars into low-carbon energy technologies — like wind turbines, solar panels and electric vehicles — and would put the United States on track to slash its greenhouse gas emissions to roughly 40% below 2005 levels by 2030.” This could help to further advance and accelerate the adoption of quantum computing. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Because quantum technology can solve many previously unsolvable problems, a long list of the world’s leading businesses — including BMW and Volkswagen , FedEx , Mastercard and Wells Fargo , and Merck and Roche — are making significant quantum investments. These businesses understand that transformation via quantum computing, which is quickly advancing with breakthrough technologies , is coming soon. They want to be ready when that happens. It’s wise for businesses to invest in quantum computing because the risk is low and the payoff is going to be huge. As BCG notes : “No one can afford to sit on the sidelines as this transformative technology accelerates toward several critical milestones.” The reality is that quantum computing is coming, and it’s likely not going to be a standalone technology. It will be tied to the rest of the IT infrastructure — supercomputers, CPUs and GPUs. 
This is why companies like Hewlett Packard Enterprise are thinking about how to integrate quantum computing into the fabric of the IT infrastructure. It’s also why Terra Quantum AG is building hybrid data centers that combine the power of quantum and classical computing. Gearing up for the quantum computing revolution Amid these changes, employees should start now to get prepared. There is going to be a tidal wave of need for both quantum Ph.D.s and for other talent — such as skilled quantum software developers — to contribute to quantum efforts. Earning a doctorate in a field relevant to quantum computing requires a multi-year commitment. But obtaining valuable quantum computing skills doesn’t require a developer to go back to college, take out a student loan or spend years studying. With modern tools that abstract the complexity of quantum software and circuit creation, developers no longer require Ph.D.-level knowledge to contribute to the quantum revolution, enabling a more diverse workforce to help businesses achieve quantum advantage. Just look at the winners in the coding competition that my company staged. Some of these winners were recent high school graduates, and they delivered highly innovative solutions. Leading the software stack, quantum algorithm design platforms allow developers to design sophisticated quantum circuits that could not be created otherwise. Rather than defining tedious low-level gate connections, this approach uses high-level functional models and automatically searches millions of circuit configurations to find an implementation that fits resource considerations, designer-supplied constraints and the target hardware platform. New tools like Nvidia’s QODA also empower developers by making quantum programming similar to how classical programming is done. Developers will want to familiarize themselves with quantum computing, which will be an integral arrow in their metaphorical quiver of engineering skills. People who add quantum skills to their classical programming and data center skills will position themselves to make more money and be more appealing to employers in the long term. Many companies and countries are experimenting with and adopting quantum computing. They understand that quantum computing is evolving rapidly and is the way of the future. Whether you are a business leader or a developer, it’s important to understand that quantum computing is moving forward. The train is leaving the station — will you be on board? Erik Garcell is technical marketing manager at Classiq. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
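The piece above notes that quantum algorithm design platforms spare developers from defining tedious low-level gate connections. For readers curious what that gate-level work looks like, here is a minimal, hedged sketch that builds a two-qubit Bell-state circuit by placing individual gates; it uses the open-source Qiskit toolkit as one common entry point and is not the vendor platforms or Nvidia QODA discussed in the article.

# Illustrative only: the kind of low-level, gate-by-gate circuit construction that
# the design platforms described above are meant to abstract away. Execution on a
# simulator or on hardware is omitted because those APIs vary between Qiskit versions.
from qiskit import QuantumCircuit

circuit = QuantumCircuit(2, 2)   # two qubits, two classical bits
circuit.h(0)                     # Hadamard gate puts qubit 0 into superposition
circuit.cx(0, 1)                 # CNOT entangles qubit 1 with qubit 0 (a Bell state)
circuit.measure([0, 1], [0, 1])  # measuring yields 00 or 11 with equal probability

print(circuit)                   # prints a text drawing of the gate sequence

Design platforms of the kind the article describes generate and optimize circuits like this from higher-level functional descriptions instead of requiring each gate to be placed by hand.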
15,261
2,022
"The future of quantum computing may lie in 'neutral' atoms | VentureBeat"
"https://venturebeat.com/programming-development/sumitomo-moves-to-advance-neutral-atom-quantum-computing-with-coldquanta-investment"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The future of quantum computing may lie in ‘neutral’ atoms Share on Facebook Share on X Share on LinkedIn Quantum computer. Conceptual computer artwork of electronic circuitry with blue light passing through it, representing how data may be controlled and stored in a quantum computer. Trading giant Sumitomo announced that it has reached an agreement with quantum computing startup ColdQuanta to market and distribute ColdQuanta technology in Japan. This follows by one day ColdQuanta’s completion of $110 million in series B fundraising, which included funding from Sumitomo Corporation of Americas. Sumitomo’s interest goes beyond nascent quantum computing efforts. The company has forged deals in the area of quantum key distribution as well. Importantly, Sumitomo cites quantum sensors as an active area of interest. Such sensors promise much higher measurement sensitivity than conventional devices, and could find groundbreaking use in resource exploration as well as autonomous driving and navigation more generally. Quantum computing: Some like it cold Among ColdQuanta’s principals are individuals known for cutting-edge physics research at the University of Colorado and the University of Wisconsin. In fact, Colorado-based ColdQuanta arose from a business providing lab and test equipment to others’ quantum physics laboratories. Now it has joined the swelling ranks of quantum computing hardware and software startups. The company pursues what it has called quantum atomics, which uses lasers to manipulate and control ultra-cold — or “neutral” — atoms that have been cooled to a temperature near absolute zero. The company is not alone in the neutral-atom race, as Atom Computing, Pasqual and others are developing variations on the method. ColdQuanta says the neutral-atom method provides longer-running qubit coherence. Notably, in ColdQuanta’s neutral-atom technology, while the atoms must remain ultracold, the rest of the system can be kept at room temperature. That allows for smaller form factors than other quantum methods, and opens the door to new applications for embedded quantum sensors — a far cry from the large machines now chasing a quantum computing advantage. Quantum sensors under the radar The firm’s technology has already endured in challenging settings. It was essential in a sensing interferometer sent to the International Space Station in 2019. That equipment was described as an ultra-precise quantum sensor with uses ranging from fundamental research in general relativity and Earth science to future applications including GPS-free navigation. 
Of course, ColdQuanta is quite intent on commercializing quantum computing — not just quantum sensing. Much of that effort rides on its Hilbert neutral atom-based system, which has been in controlled beta. As with others’ offerings, it will be available via cloud and will support common software tools. Activities devoted to quantum computing futures garner great attention today. Quantum sensors on the other hand have remained something of an outlier, at least in terms of publicity. Yet by some measures they are the most advanced in terms of adaptations. Arguably, quantum sensors take their lead from SQUIDs , or superconducting quantum interference devices, which have been in production for many years. These find uses in sensor settings ranging from medical imaging to oceanographic sensing and space exploration. According to Markets and Markets research, the quantum sensor market in 2022 is $260 million, with a 16.8% CAGR expected to lead to a $565 million market by 2027. Cold atom rising? New processing approaches continue to emerge from quantum computing startups. If successful, these approaches could expand the quantum horizon, according to Bob Sutor, VP and chief quantum advocate at ColdQuanta. Sutor is something of a herald when it comes to leading-edge technology. In nearly 40 years at IBM he held key positions evangelizing Linux, Web Services and, more recently, blockchain and quantum computing. “What is going on with quantum technology is we are moving beyond the usual three suspects — that is, the three technologies: superconducting, ion trapping and photonics,” he told VentureBeat this summer at the Quantum Tech conference in Boston. He admitted that cold-atom approaches have been a little bit later in developing. “But that’s okay,” he said, “because this is a decades-long adventure.” Sutor said that even though the main target is to solve very large problems with quantum-style supercomputers, now is the time to think about other use cases for quantum computers too. “For classical computers, there are many different processors of many different sizes, many different places. You will see the same in quantum technology,” he said. While the timing of the arrival of supercomputer-level quantum computing remains hard to gauge, the global interest in preparing for its arrival remains strong, as the Sumitomo-ColdQuanta deal indicates. The deal also suggests there is potential for quantum technology outside the walls of specially-equipped data centers. Other investors in ColdQuanta’s latest funding round include the Australia-based Breakthrough Victoria fund, which said ColdQuanta would work with Swinburne University of Technology to build a training and education center in Victoria to advance production of glass cells used in so-called cold-atom methods. That can be seen as part of a global effort to miniaturize quantum computing components. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,262
2,022
"What the next 10 years of low-code/no-code could bring | VentureBeat"
"https://venturebeat.com/programming-development/what-the-next-10-years-of-low-code-no-code-could-bring"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest What the next 10 years of low-code/no-code could bring Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. On my 12th birthday I got my first computer: an Amiga 500. And at 17, I founded my first company, making software that helped photographers serve their customers. As I reflect on my decades of coding, I’m reminded that low-code technology started with tools enabling users to build custom reports and applications with very little coding. When I started coding, low-code was somewhat analogous to the position artificial intelligence holds today: exciting, much hyped and poorly understood. In 2014, it was generating buzzy news items: “Some firms are turning to new, ‘low-code’ application platforms that accelerate app delivery by dramatically reducing the amount of hand-coding required,” declared Forrester. Now, after almost a decade of growing adoption, low-code — and increasingly also no-code — tools are becoming mainstream. The worldwide market for low-code tools ballooned almost 23% in the last year, and these days, 41% of non-IT employees are building or customizing applications, according to Gartner. Gartner also predicts that by 2024, three-quarters of large enterprises will be using at least four low-code tools, and low-code technology is likely to be responsible for upwards of 65% of total application development. Gartner further forecasted that half of all new low-code clients will be business buyers that are outside the IT organization by year-end 2025. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! This explosive growth makes sense. Nowadays, the business world is driven by the idea that every organization must digitally transform its operations to keep up with rapid market change. “Adapt or die” is a common refrain. Since most organizations don’t have the developer manpower in place to conduct such digital transformation using solely the labor of the IT department, the help of other business professionals is required. No-code and low-code tools allow companies to recruit new participants in the digital transformation effort, enabling firms to advance their progress quickly and cost-effectively. The rise of the business technologist Indeed, low-code options have created a new persona in the workplace: the business technologist or business user, also known as “ citizen developer ,” who can helpfully participate in the application development process. 
The tools are constantly becoming simpler to use and more intuitive. Users are assisted by excellent training within the tools themselves and a growing library of online resources with business-focused pre-built components such as tutorials, use cases and how-to videos. Business users who can’t write a single line of code in a programming language, such as C++ or Python, used by professional software developers are able to independently create the bulk of their useful applications using low-code and no-code tools. As of now, the IT department still does the heavy lifting of application development, but as this market evolves, business users will increasingly be able to create applications end-to-end with relatively little intervention from developers. This shift will allow developers to focus on maintaining large-scale strategic projects, while monitoring the long tail of the applications being built by business technologists. Recently, developer responsibilities have shifted toward maintaining core systems, upholding best practices of application development, ensuring standards compliance, data governance, and security, and acting as enablers to the newly minted low-code application creators in diverse business departments. In their next evolution, these tools will be tailored to the specific needs of different business users, with pre-built content and components for business users making it possible for employees in technical areas to develop customized applications quickly. A customer service professional, for instance, may need an application for onboarding customers. Being able to create the applications themselves using tools that can be customized for this purpose will allow them to vastly accelerate the onboarding workflow and the resulting integration of new customers. Low-code meets AI Looking even further ahead, it’s likely that artificial intelligence (AI) will be brought more heavily into the equation, enabling software development processes that are more proactively guided and written by other software. This would allow business users to create new applications using text prompts with the assistance of the application development tool. Think of the auto-complete function in a Google search bar, but for code. Already there are signs of this, as with GitHub Copilot that builds on the GPT-3, the large language neural network from OpenAI. These are first-generation AI capabilities and will only grow more sophisticated over the next several years. While this prospect may cause anxiety among professional developers, the shift promises to create new opportunities within IT, rather than eliminate old ones. Software developers can become adept at enabling this evolution by learning how to provide the right prompts to an AI tool to generate the code that a no-code application developer will need. The most in-demand developers in coming years may well be those who can write elegant and efficient prompts , more so than those who are proficient in programming languages. The evolution toward increasingly easy-to-use application-creation tools is not only an opportunity for developers to build new strengths; it could also be a massive boon to their business colleagues and the business goals they serve. Businesses gain agility by using no- and low-code tools. Agility is the ability to respond quickly to change, and it is enhanced when business units can customize and maintain their applications themselves to stay immediately responsive to that change. 
One of the most important questions for any IT project, including those using low-code development, is: Who is going to maintain it? A poorly maintained application ties a poor customer experience to a brand, an outcome far more likely to occur when application creation and maintenance is outsourced and when a resource-strapped IT department is solely responsible for all maintenance. These tools are adapting and providing capabilities for enterprise monitoring and governance of low-code apps. Low-code/no-code’s next decade While the next decade is likely to bring an expansion of the role that business users have in application creation and maintenance, it’s important to note that the IT team is now — and always will be — the enabler for driving this innovation. It’s their job to figure out how to get the business users the tools they need to be as productive as possible, and the guidance to use them appropriately. This shift will happen through collaboration, with IT doing both the facilitation and the backend work that allows business users to best apply their skills and strategies via targeted applications. The next 10 years of no-code and low-code development are likely to bring just as much change as the last 10, if not more. There has long been a functional and cultural boundary between IT and business users. The rapid advancement of low-code and no-code tools with an assist from AI is breaking this down and enabling a more collaborative environment. It’s possible that over the next several years boundaries will not only blur but in some instances begin to vanish. That should be a plus for productivity and a boost to digital transformation efforts. And while that kind of rapid change can present challenges to individuals and businesses, these rapidly evolving tools offer mostly good news. With their help, business users can work more effectively and quickly, IT teams can focus on promoting business growth in high-value ways and companies can better succeed as the future comes at them fast. Juergen Mueller is chief technology officer and executive board member at SAP SE. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,263
2,012
"Here's the scorecard on our 2012 predictions for the game industry | VentureBeat"
"https://venturebeat.com/2012/12/28/2012-game-predictions"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Here’s the scorecard on our 2012 predictions for the game industry Share on Facebook Share on X Share on LinkedIn Before you read our forecast for 2013, take a look at how we did on last year's predictions. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Before I make predictions on the new year, I’ll take a look at how I did on my predictions last year, with letter grades on them. In a separate post, I’ll let fly the predictions for 2013. If you have ideas of your own, please leave them in the comments and take our poll at the end. After you get a look at this story, be sure to check out our 2013 predictions. Of the 12 predictions I made last year, I’m giving myself a letter grade of A on four of them, a B on 5, a C on one, a D on one, and an F on one. Here’s this year’s predictions. Last year’s predictions 1. Social and mobile gaming will get stronger. I predicted, “Now that Zynga and Nexon have both raised a billion dollars, they’ll be able to use that money to accelerate acquisitions and expand their positions in the fastest-growing parts of the video game business. The good thing is that social and mobile games are already popular, but they’re in their infancy. They have plenty of room to evolve and grow.” Letter grade: B Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Mobile games took off as the number of smartphone and tablet users soared. But social games stalled as Facebook’s growth slowed and Zynga failed to execute on its strategy, disappointing Wall Street. 2. Console games become the Red Queen. “Like the Red Queen in Through the Looking Glass , console and PC games will continue to run just to stay where they are. With sales flat compared to 2009 and 2008, the console game sector appears to be stuck.” I got this one wrong because the retail game industry led by the consoles saw double-digit declines in the U.S. for just about every month in 2012. December sales haven’t come in yet, but I don’t expect it to have a lot of great news for publishers. Blame the slowdown on the aging consoles and the growth of digital games. Letter grade: B 3. World of Warcraft will continue its steady decline. “The question is whether Blizzard will be able to execute an orderly retreat. At some point, the company will launch Titan, its next massively multiplayer online game. 
But it sure doesn’t look like that game will arrive in 2012.” World of Warcraft has been losing players, falling from a peak of 12 million at the beginning of 2011 to 10 million in the most recent quarter ended Sept. 30. The losses haven’t been drastic, with the 10 million figure down only 200,000 from the beginning of the year. WoW lost a million players in the second quarter, but in the third quarter, WoW regained nearly a million subscribers thanks to the launch of its Mists of Pandaria expansion. Will the number go back down again in the fourth quarter? If it does, I’ll get an A on this prediction. Letter grade: B 4. Cloud gaming gains ground. “In 2010, OnLive launched its cloud-based game streaming service. But its impact hasn’t really been measurable yet. OnLive hasn’t released user numbers, retailers such as GameStop haven’t gone out of business, and publishers are still counting on retail sales for a big part of their revenues. In 2011, cloud gaming continued to expand, growing to the United Kingdom and spreading to mobile devices. As more websites embed cloud-gaming demos for free, users will start to see the benefits of the games-on-demand services. We can expect to see more game streaming in 2012, since Gaikai has teamed up with Wal-Mart and GameStop is prepping its own service. OnLive has a couple of hundred games available, making it a force to be reckoned with among those distributing games in digital form. If more titles and more exclusives land in the lap of OnLive, the gamers will follow. But the question is, will the growth be gradual, or will it pick up momentum in 2012?” Cloud gaming isn’t dead, but it suffered its worst setback as OnLive (led by Steve Perlman, pictured right) hit the wall and filed for a bankruptcy alternative. The company failed to raise funding and changed ownership. Letter grade: F 5. Tablets and smartphones will continue to steal gamers from dedicated handheld gaming devices. “The competition between Android and Apple will accelerate the rate of innovation in mobile, and that will grind up the dedicated gaming devices. Kids in particular will drive this transition as they move from the iPod Touch to iPhones to iPads, possibly skipping handhelds altogether. The PS Vita has plenty of cool new games, but it’s hard to beat the free-to-play or 99-cent prices on iOS and Android.” Even as console game sales fell at double-digit rates during 2012, the growth of smartphones and tablets paved the way for big gains in mobile game downloads. Letter grade: A 6. The platforms will multiply. In every part of the business where there isn’t enough competition, new platforms will emerge to provide it. Google+, for instance, will rise as a gaming platform to compete with Facebook, which has pretty much wiped out a lot of its competition. In mobile, Microsoft’s Windows Phone will mount a bigger challenge to Android and iOS. And within existing platforms, such as Android, we’ll see new kinds of gaming devices emerge. And where these new platforms arise, they will prominently feature games because games will help differentiate these platforms and show off what they can do. This year saw the launch of the Wii U, the expansion of the Amazon Kindle Fire, new smartphones, the iPad Mini, and the announcement of the Ouya Android-based game console. Letter grade: A 7. Nintendo launches a console in the fall of 2012, but Sony and Microsoft wait until 2013. “Nintendo’s new Wii U console is expected to debut in 2012 with its tablet-like controller. 
But I don’t expect it to set the world on fire the way the Wii did starting in 2006. That will give Microsoft and Sony some breathing room to create high-end machines that can run circles around the Wii U. By waiting a year, the heavy-duty console makers can stretch out the console cycle by one more year and then make more money on the current generation. They will also be able to launch advanced consoles with one more year of the cost learning curve under their belts. Having said that, I would bet that it is more than likely that Sony and Microsoft will try to dampen the enthusiasm for the Wii U by announcing new systems in 2012 that won’t ship until 2013. That’s the familiar vaporware tactic that helped the PlayStation 2 defeat Sega’s Dreamcast in the good old days.” This was right on the mark on the timing of the consoles as the console makers didn’t pull any big surprises. But Microsoft and Sony didn’t announce anything. Letter grade: B 8. Location-aware mobile gaming will gather momentum. “ Will Wright, the gaming legend who created SimCity, is exploring location-based entertainment because it can lead to what he calls “personal gaming,” where a game can be more easily customized to a person’s tastes because it makes use of data that it knows about that person. By tapping into location information, game creators can make their titles more and more relevant to consumers. To date, location-based games have had density problems, where not enough players are playing in one location. But it’s possible to design games that get around this issue, and developers will be able to keep you entertained based on where you are.” Unfortunately, Wright got into a lawsuit with a co-founder and spent much of the year dealing with the litigation. The lawsuit was finally settled, but personal gaming will have to wait. Meanwhile, Red Robot Labs scored big with its Life Is Crime location game in 2012. Letter grade: C 9. Family mobile data plans increase game consumption. “ I borrowed this prediction from Exent , which forecasted that numerous mobile carriers will lower their data fees per device and allow families to share plans. This means that you won’t have to pay a lot extra to enable your kid to play online games. That will enable more users to engage in connected social games on mobile devices that operate at satisfyingly high speeds. That could trigger sales of more data-enabled devices, which are ideal gaming machines.” Verizon Wireless followed through on its plan, but there is no indication whether this affected game downloads. Letter grade: D 10. HTML5 won’t be ready for prime time yet. “ This new format for web content wants to be the lingua franca of the web. But it isn’t so fast when it comes to running games. By contrast, native apps run much better on mobile devices. HTML5 games can’t make use of specific hardware in games such as a camera, and they don’t work well if the browser’s connection is weak. As devices get better and web speeds improve on mobile, HTML5′s performance will get better. But it has a long way to go, and native or hybrid solutions are likely to rule the day next year.” Some companies tried to launch HTML5 games, but there was a lot of pushback. Wooga created a HTML5 mobile game but decided not to launch it commercially. But advocates of HTML5 hybrid apps such as Ludei and Game Closure launched ways to speed gaming up. Letter grade: A 11. Monetization matters. “New revenues from ads and improved conversion rates will provide a bigger growth rate for the industry. 
Too many companies rely on one business model when they could embrace multiple ones. If you look at Zynga, the company gets only 5 percent of its revenue from advertising and 95 percent from virtual goods. With more than 200 million monthly active users, Zynga has amassed a huge audience for brand advertisers, yet it has shied away from ads for fear they could be intrusive in the game experience. But in-game ads can be crafted so that users like them. Just ask innovators such as Kiip, which offers promotional rewards in a mobile game at the moment when you achieve something. If Zynga made just $1 a month in ads from each user, it could generate an extra $2.4 billion in annual revenues. That's an untapped opportunity. By the same token, Zynga generates revenue from only 2.5 percent of its users. If it could double that number to 5 percent, it could double its revenue. In 2012, I expect to see publishers take advantage of multiple monetization strategies." The use of alternative monetization strategies expanded on mobile platforms, but Zynga was still very dependent on virtual goods revenue during 2012. Letter grade: B 12. Somebody will get hacked. "With embarrassing security breaches all over the place, game companies can expect more hacker attacks in 2012. The PlayStation Network suffered the ultimate embarrassment as it went down for six weeks after being hacked. Valve, Square Enix, and many others suffered the same indignity. It's good to be prepared. Online game sites and companies that use virtual currency have a lot at stake." A variety of firms were hacked, including Zynga, NCsoft, Gamigo, and others. Letter grade: A"
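A quick back-of-the-envelope check of the ad-revenue arithmetic quoted in prediction 11 above, sketched in Python. The inputs (200 million monthly active users, $1 per user per month in ads, a paying-user share rising from 2.5 percent to 5 percent) come straight from the article; the variable names and print formatting are illustrative only, not a model of Zynga's actual finances.

monthly_active_users = 200_000_000      # "more than 200 million monthly active users"
ad_revenue_per_user_per_month = 1.00    # "just $1 a month in ads from each user"

# $1 per user per month across 200M users for 12 months
annual_ad_revenue = monthly_active_users * ad_revenue_per_user_per_month * 12
print(f"Hypothetical annual ad revenue: ${annual_ad_revenue / 1e9:.1f}B")   # -> $2.4B

# Doubling the paying share from 2.5% to 5% doubles virtual-goods revenue
# only if average spend per payer holds constant -- the article's "double its revenue" claim.
paying_share_now, paying_share_doubled = 0.025, 0.05
print(f"Revenue multiple from doubling payers: {paying_share_doubled / paying_share_now:.1f}x")   # -> 2.0x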
15,264
2,013
"The DeanBeat: Game industry predictions for 2013 | VentureBeat"
"https://venturebeat.com/2012/12/28/the-deanbeat-game-industry-predictions-for-2013"
"Game Development View All Programming OS and Hosting Platforms Metaverse View All Virtual Environments and Technologies VR Headsets and Gadgets Virtual Reality Games Gaming Hardware View All Chipsets & Processing Units Headsets & Controllers Gaming PCs and Displays Consoles Gaming Business View All Game Publishing Game Monetization Mergers and Acquisitions Games Releases and Special Events Gaming Workplace Latest Games & Reviews View All PC/Console Games Mobile Games Gaming Events Game Culture The DeanBeat: Game industry predictions for 2013 Share on Facebook Share on X Share on LinkedIn Predictions are always embarrasing in hindsight, but we love going out on a limb. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Even though I have a terrible track record for predicting the future, I find making predictions irresistible. It’s such an exciting time in the game industry that just about anything can happen. Things that I never thought would occur — such as Apple dominating in the sheer number of games on its platform — have now come to pass. The industry is full of disruption and change, from the smallest startups to the biggest companies, as the digital revolution sweeps through the industry. We are, after all, in the crossover era , when game companies invade new platforms. With so much change, the predictions become harder, but they’re also more fun to make. We’re looking forward to announced games like Grand Theft Auto V and unannounced (but likely) titles such as Call of Duty: Modern Warfare 4. But many of these predictions below go beyond the impact of single launches. (Check out how I did with last year’s here ). I have tapped some of our staff for help with them. And please vote for your favorites in our poll (in the web version of this story) and leave comments. Also, check out where we’ve been in the past in our predictions about the road ahead in gaming. Mobile gets bigger This trend isn’t a bubble. The growth of smartphone and tablet gaming is an inexorable trend. It’s a no-brainer that it will gather momentum in 2013. Mobile games account for 42 percent of all new game investments, according to investment bank Digi-Capital. Mobile devices are growing fast throughout the world, and we’ll soon have multiple billions of devices that are capable of playing games. Every company is adapting to this change by launching new versions of mobile titles, and many startups are focused on a “mobile first” or “mobile only” strategy. So far, nobody dominates this market. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Hardcore online games will match the quality of console games Games like League of Legends and Hawken have shown that it’s possible to create great free-to-play hardcore online games as downloadable titles. Soon, you might be able to play these without downloads thanks to better browser technologies (such as WebGL or Chrome Native Client), which make use of 3D graphics hardware on a computer without the need for plug-ins. When the better rendering technology gains traction, then one of the last quality barriers will fall between online games and console titles. The ability to play high-end web games without delays will neutralize the advantage that console companies have had in the home. It will lower the cost of distribution and democratize gaming further. 
But once this new web publishing platform is more evenly distributed across the development community, the focus will have to be on making better games, not shoveling more out. The battle royale for gaming will commence To deal with the invasion of hardcore online titles, console makers will have to respond with something exciting and new. It’s going to be a world war of gaming in 2013 as big players duke it out. Traditional console makers Nintendo, Sony, and Microsoft will go to war with the likes of Apple, Amazon , Samsung, and Google. Microsoft has already set up a broader entertainment network on Xbox Live, but it could lose that position if it maintains its current high subscription fees and a dearth of free-to-play titles. Apple and Google in particular may not launch “consoles” per se. Rather, if they compete in gaming, it will be an accidental byproduct of a strategy to compete for the hearts and minds of consumers on whatever platform they use. Those companies haven’t designed specifically for gamers, but their platforms have been ideal for developers and publishers trying to reach larger audiences. An Apple television would naturally be a platform for games. The newcomers will use free or 99-cent titles — as well as technologies that transfer images from a smartphone or tablet to a TV — as their wedge into the console space. We’ll see a battle for the living room like never before. And this battle royale will happen on all platforms wherever they’re used. Microsoft and Sony will both announce and launch their next-generation systems Chasing the Nintendo Wii U, Microsoft and Sony will announce their new consoles at E3 2013 in June, and at least one of them will introduce the new machine in the fall of 2013. Microsoft may not launch in 2013 since it has a leadership position now. But it shouldn’t try to milk that position and should instead get out ahead of its new challengers such as Apple and Google. Sony has more motivation to get into the market sooner since it came in third place in units sold in the last generation. But developer activity suggests that Microsoft is moving ahead faster with a broader group of allies. Introducing a new console isn’t just a matter of willpower. It’s the result of mobilizing a whole ecosystem of suppliers, partners, developers, and publishers to support the effort. So while it may be a no-brainer to introduce sooner, the console makers may be forced to wait until 2014. They will also have to build their consoles around a new innovation, such as much better gesture controls, to make the gameplay more magical than it is today. And if Sony and Microsoft know what’s good for them, they’ll embrace cloud gaming to reduce the cost of their consoles and deal with backward compatibility. And they will give users plenty of options for free-to-play, platform-agnostic gaming. (GamesBeat writer Kat Bailey is a fan of this idea.) They will both embrace free-to-play and cloud gaming. Any new console will thus have to be a hybrid of tradition and the new digital platforms. (Dan “Shoe” Hsu, the editor-in-chief of GamesBeat, fed me a version of this prediction.) Nintendo’s Wii U console will fail So far, the Wii U is selling out, but not in astounding numbers. And while games such as ZombiU are fine, they’re getting weak reviews on Metacritic. In fact, the highest rated game on the Wii U this season is Mass Effect 3: Special Edition, a retread of a game that has already been available on the Xbox 360 and the PlayStation 3. 
This isn’t the right way to launch a brand new console. While initial supplies may sell out, what will happen three months or six months from now as the novelty wears off and gamers await better-looking titles on the upcoming consoles or the PC? Nintendo embraced the consumer love for tablets with its tablet-like controller, but that hasn’t put a dent in demand for tablets. In the context of a more competitive industry with multiplying choices, I don’t see the Wii U as a survivor. The best thing Nintendo can do is cut the price, and that’s never been a winning formula for long-term success. I’m hoping Nintendo surprises us, but I am not counting on it. Smart TV games will finally become a reality This prediction crystallizes several of the trends already mentioned. Apple still hasn’t introduced its television. If and when it does, apps will come to the connected TV, nicknamed the Smart TV, in a very big way. But while Apple may eventually validate this trend, the market won’t wait for one company. In the meantime, Google will keep pushing on Google TV. Manufacturers like LG , Samsung, and Vizio are pushing hard on cloud gaming on the TV, where all you need to play is a web connection and a game controller. Ouya hopes to enter this market with a $99 box in the spring that will move Android games onto this same screen. The business model enabled by apps on the TV is very attractive for consumers. Free-to-play or 99 cent apps may be good enough for a lot of players, particularly if there is a path to high-end hardcore games as well. Gesture or voice-recognition technologies will add novelty to the Smart TV gaming experience. Given the option to forgo the expense of $60 games and $300 consoles, many consumers are likely to prefer playing unified apps on TVs that can also be played on tablets and smartphones. That could ignite a huge wave of gaming consumption. Gesture technologies will take off Motion-sensing technologies like the Nintendo Wii remote, Microsoft’s Kinect for the Xbox 360, and the Sony PlayStation Move were just the start. Intel calls the new era of gestures “perceptual computing.” Controlling a game or computing device with your hands, body, or voice could become much more accurate in the next generation. Kinect’s accuracy trailed off if you got too close to the TV. But startups such as Softkinetic have demonstrated that gestures work well just inches away from a laptop’s webcam now. This helps make general computing easier just as touch screens do with Windows 8. But the possibilities for gaming become much more interesting with precise technologies that can detect small finger movements as well as the activity of everyone in a room. And these gestures don’t have to be limited to PCs or consoles. They could also be integrated over time into smartphones and tablets. Intel itself promises to launch perceptual computing in 2013. First-person shooter games will grab market share on tablets Based on what I’ve seen so far of The Drowning (pictured right), which is being developed by Scattered Entertainment and will be published in 2013 by DeNA’s Ngmoco, I’m convinced that first-person shooter games will finally take hold on tablets. The Drowning has a clever gesture-based user interface that works with touch screens — something that shooters haven’t done well so far. If you tap two fingers on the screen, your weapon will fire at the midpoint. Tap one on the screen, and your character will move to where you tap. And swipe the screen to turn your characters head. 
It’s simple, and it is just one example of how the multibillion-dollar shooter business could make its way onto tablets. And it means that mobile games will be playable without a game controller. The game controller will become the king maker Startups like PrimeA (maker of Moga controllers) and Green Throttle Games want to turn the humble game controller into something more important. They can do so if the above trend doesn’t work out so well and gamers prefer to play their mobile games with traditional handheld controllers. Green Throttle is creating a cool user interface app that allows you to use its controller to play Android apps on a TV, connected to your smartphone or tablet via a HDMI cable. (If better Wi-Fi technology comes along, this will get easier to do.) With controllers used in this way, Android games can invade the realm of the $60 console game. The controller could blast a hole through the barrier between two segments of the game market. The barriers will come down between real-money gambling and social casino games Likewise, the barriers are coming down between real-money online gambling and social casino games, where the winnings are merely virtual casino chips. Startups like Betable enable social casino games to be converted to real-money online gambling in territories where it’s legal. Zynga is counting on changes in U.S. laws to enter the real-money gambling market, but it will launch in the crowded United Kingdom market first. Facebook has also embraced real-money gambling by allowing Gamesys to launch such games in the U.K. This will set up a clash. The social casino game operators have the biggest games like Zynga Poker but the lowest revenues per paying user. Online gambling firms make a lot of money from relatively few heavy gamblers. But marrying the two businesses will make it much easier for the online gambling firms to recruit potential high rollers for low costs. Meanwhile, U.S. states and the federal government are loosening the restrictions on online gambling. This trend may take years to play out, but it will gather more momentum in 2013. eSports will take off Gaming has been a professional sport for a while, but with releases such as League of Legends from Riot Games, it is gathering momentum. Rising in parallel with this trend is the livestreaming of games enabled by Twitch, which has been integrated into the online shooter Planetside 2 (pictured right). Community has also become a much bigger deal in ensuring the success of a game, according to gamer social network Raptr. That’s because the improved engagement that comes with promoting the community around a game leads to higher awareness, revenues, and profits. ESports have been growing in countries such as China and South Korea, but Major League Gaming, WCG, and other leagues are offering bigger prizes and more venues for gigantic tournaments. And it’s no surprise that Activision built livestreaming, shoutcasting (narrated games), spectating, and league play into Call of Duty: Black Ops II. Gamemakers should already know that you don’t just milk your users. Give them what they want, and they will pay you back many times over. Let’s hope that more companies are going to learn the lesson: It’s the community, stupid. (GamesBeat writer Rus McLaughlin suggested a version of this prediction.) Gamification will continue to spread the influence of games Gamification , or the use of game mechanics to increase engagement in nongame applications, is the path to spread game thinking far and wide. 
It has applications in everything from fitness devices that encourage you to exercise to enterprise applications that reward you for completing tutorials. Badgeville, Big Door, and Bunchball are providing the services for all kinds of companies to embrace gamification, either by rewarding consumers and employees with achievements or leading them to compete with rivals through leaderboards. A lot of these efforts will fail, as many games do. But the ones that are done right could lead to a big expansion in engagement and usage. Gamification is in its hype stage, but the reality will start setting in during 2013 as to what works and what doesn't. And it will carry the flag for gaming into all circles of business."
15,265
2,013
"Steven Spielberg's Halo: The Television Series coming to Xbox One | VentureBeat"
"https://venturebeat.com/2013/05/21/halo-the-television-series-coming-to-xbox-one"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Steven Spielberg's Halo: The Television Series coming to Xbox One Share on Facebook Share on X Share on LinkedIn Halo: the video game is about to get a live-action component with Halo: The Television Series, backed by no less than Hollywood legend Steven Spielberg. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. 343 Industries general manager Bonnie Ross has just announced a new live-action television series based in the Halo universe, and Steven Spielberg is on board as a creative partner. “The Halo universe is an amazing opportunity to be at that intersection where technology and myth-making meet to produce something ground-breaking,” said Spielberg in a prerecorded message. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! As yet, no further information exists on whether Halo: The Television Series will involve the Master Chief, when in the continuity it’ll take place, when it will premiere, or even how many episodes have been ordered. But it’s likely to be similar in design to the Forward Unto Dawn web series and, given its origins, exclusive to the new Xbox One console. [youtube http://www.youtube.com/watch?v=p1kCgQ5Bonc&w=560&h=315] For more on the Xbox One reveal, check out our complete coverage. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. Join the GamesBeat community! Enjoy access to special events, private newsletters and more. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,266
2,014
"The DeanBeat: More terrible predictions for the game industry in 2014 (poll) | VentureBeat"
"https://venturebeat.com/2014/01/03/the-deanbeat-more-terrible-predictions-for-the-game-industry-in-2014-poll/view-all"
"Game Development View All Programming OS and Hosting Platforms Metaverse View All Virtual Environments and Technologies VR Headsets and Gadgets Virtual Reality Games Gaming Hardware View All Chipsets & Processing Units Headsets & Controllers Gaming PCs and Displays Consoles Gaming Business View All Game Publishing Game Monetization Mergers and Acquisitions Games Releases and Special Events Gaming Workplace Latest Games & Reviews View All PC/Console Games Mobile Games Gaming Events Game Culture The DeanBeat: More terrible predictions for the game industry in 2014 (poll) Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. As the author of the DeanBeat column, I enjoy exploiting my high office to make terrible predictions about the future of the game business. The time has come again, as the new year has arrived and the Consumer Electronics Show is about to begin next week in Las Vegas. Kaz Hirai, chief executive of Sony, will paint his picture of a vision for technology on Tuesday at CES, but I will preempt him in making predictions about what Gartner says was a $93 billion industry in 2013. I am sure that I will be proven wrong as early as his speech. I continue to believe that the game business will be unpredictable. I didn’t expect, for instance, that current-generation games would make a huge comeback as evidenced by Grand Theft Auto V selling $1 billion worth in three days of sales. I also predicted Activision would launch Call of Duty: Modern Warfare 4, but instead, it came up with Call of Duty Ghosts. Nor did I expect the biggest surprise: SoftBank and GungHo would buy 51 percent of Supercell for $1.53 billion, a transaction that valued a company with two mobile games — Clash of Clans and Hay Day — at $3 billion. One of the bets that GamesBeat’s team has already made (you see how I shifted responsibility to our broader team?) is that games will compete with other forms of entertainment for “total world domination.” That is the theme we chose for GamesBeat 2014 , happening next September. We believe game companies will compete on the world stage across all platforms for dominance, and that competition within gaming will be global. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Still, the uncertainty is high, as we know very little about the slate of games that will hit us in the new year. There will be comebacks, bubble inflations, and crashes across what has become a very expansive game business. For fun and embarrassment, here’s my predictions from 2013 and 2012. 1. The next-generation war will be a dead heat Above: The Microsoft Xbox One and the Sony PlayStation 4. I believe that Sony and Microsoft are in the midst of a ground war akin to World War I. They’re equally matched in this round of competition. If either of them gains 100 yards of territory on the other, they’ll celebrate it as a decisive victory. We’ll see the long-term market share take shape in 2014 in the competition between the Xbox One and the PlayStation 4. Last time around, both companies sold more than 80 million units a piece. Microsoft can outspend Sony on marketing, and it has Titanfall coming in March as an exclusive. But Sony has a more powerful system, and it is selling its machine for $100 less. 
While Sony has an edge right now, that may be just because it launched a week earlier than Microsoft. Meanwhile, one of the most promising next-generation games is Ubisoft’s Watch Dogs, which will be on both systems. We’ll see which of the two makes big moves on the online front. Sony expects to launch its cloud-based Gaikai service in 2014. But Microsoft has a big advantage with its Xbox Live service and will likely have its own digital services to unveil this year. 2. Nintendo will dump the Wii U Above: The Wii U has a tablet controller. Nintendo did the best job of launching new exclusive games in support of a console during the holidays, with the debut of Super Mario 3D World, which was one of the highest rated games of the season. But Nintendo clearly misread the market this time coming up with an under-powered box that tried to incorporate the appeal of tablet technology into a home console. Gamers were not impressed. The Wii U has sold around 5 million units in its first year. That’s a paltry sum, considering the huge results Nintendo saw after the 2006 launch of the Wii. In the past two quarters, Wii U sales have crawled forward in the hundreds of thousands of units. Michael Pachter, analyst at Wedbush Securities, predicts the Wii U will sell under 20 million units by the end of 2016. That’s not bad. But he expects Sony to sell 37.7 million PS 4s by that time, and Microsoft to sell 29 million Xbox One consoles — even though the Wii U had a one-year head start on those systems. The financial liability of having the third-place console will weigh down Nintendo’s progress. It has a strong position with the portable 3DS handheld, but it may very well need to cut its losses on a doomed console. The opportunity cost is too great. Nintendo might try to keep the Wii U and then expand to new platforms like iOS, Android, and perhaps even rival consoles. But the effect will be the same for the Wii U. 3. Steam Machines will grab a foothold among PC gamers Above: Valve hopes the Steam Machines will soon take over living rooms. Valve is debuting its SteamOS, Steam Controller, and Steam Machines in 2014. The latter will be launched by hardware partners, some of whom will announce systems at Valve’s press conference at CES. With only 300 employees, Valve may seem puny as a challenger to Microsoft’s Windows gaming ecosystem and the consoles. But Valve has a huge library of Steam games, a following of indie game fans, and 65 million users. Some of those fans will migrate from the PC to the living room by playing their games on Steam Machines. But it’s not clear if Valve will build a big enough market for Steam Machines to really matter. A lot depends on whether it truly embraces the “open” alternative strategy. If it simply wants to make sure that Microsoft doesn’t shut down the Steam store on Windows, then that’s not a good enough reason to launch an alternative machine. Valve has built up enough goodwill to be able to have a successful launch. But sustaining an alternative won’t be easy. The big questions are whether PC gamers are really fed up with Microsoft or not and if they are willing to pay for yet another machine to avoid the Microsoft tax. A lot will also depend on the ultimate price for Steam Machines. If the Steam Machine hardware partners price them at more than the PlayStation 4 or Xbox One consoles, then Valve will lose. If Valve’s system is ultimately closed, then it’s not going to be a better alternative. 4. 
Virtual reality will gather momentum Above: A Virtuix 360-degree treadmill plus an Oculus Rift headset makes for a terrifyingly immersive video game experience at GamesBeat 2013. Oculus VR has enough momentum, attention, and funding to launch its virtual-reality goggles, the Oculus Rift. The $75 million vote of confidence from investor Marc Andreessen and other venture capitalists will help Oculus execute on its promises and build a larger ecosystem of developers, who need to provide the critical VR games to make it take off. There will probably be copycats and new examples of cool applications for the technology, which could spread beyond games. The good thing for VR is that other companies are stepping forward with cool innovations, like Sixense (a maker of gesture control systems), and Virtuix, the maker of the Omni virtual reality treadmill. That means that Oculus isn't the only company pouring money into virtual reality. Oculus's recruitment of John Carmack, the tech guru of id Software, also brings it instant credibility. As Oculus VR chief executive Brendan Iribe observed, Carmack specializes in getting more performance out of a platform than others think is possible. And Carmack has set his sights on making virtual reality work on mobile platforms. I wouldn't bet against him. But Oculus should recall the missteps of Ouya, which launched quickly but didn't have a killer application for its micro-console. You only get one chance to make a good impression. 5. Alternatives to game controllers will grow stronger Above: Tobii tracks the position of your eyes in 3D and determines where you are looking. Tobii and SteelSeries just proclaimed they will launch an eye-tracking control system for games in 2014. That should tell you that the decades-old game controller may soon be on its way out. Computing is going through a revolution in user interfaces, and game controls are certain to go through a similar kind of change. The worst prediction is the one that technology will stand still and everything will stay the same. When it comes to game controllers, technology has stayed the same for too long. We've already embraced touchscreens in a big way with tablets and smartphones. Voice commands are just getting a start with the Xbox One's new version of Kinect. Google Glass has inspired game developers to create games for wearable devices. The number of new user interfaces is blossoming. One of them will be right for gaming. But eye-tracking could very well enable both better precision and ease of use, as everybody knows how to aim with an eye. All it requires is a small piece of hardware that can detect your eye movements. The technology has been in refinement for a decade now. Its time may have come. 6. The MOBA market will have a shakeout Above: League of Legends in action. The multiplayer online battle arena (MOBA) game League of Legends created a whole new genre and built a company, Riot Games, with more than 1,000 people. LoL has fueled the eSports craze and helped free-to-play take off in Western countries. And it has fueled the rise of livestreaming on Twitch, the popular game video broadcasting service. But League of Legends is getting a lot of competition from Valve's Dota 2 and will face more from Blizzard's upcoming Heroes of the Storm. But it remains to be seen if MOBA can support more than one company. The copycat competitors are assuming that there will be an entire market behind the MOBA craze. But if it's a fad or a one-hit market, there will be a lot of dead bodies on the floor. 
The MOBA market may very well be a winner-take-all market, or close to it. That’s because of the strong network effects that happen when a community or eSports fans get behind a title and stay with it. 7. Europe and Asia will stay ahead of the U.S. in mobile games Above: Clash of Clans cardboard cutout. This trend has already happened, with the rise of games like Candy Crush Saga, Clash of Clans, and Puzzle & Dragons. But the signs are strong that the Europeans and Asians will capitalize on their success and use it to move in the U.S. market in a big way. Companies like Supercell, GungHo Entertainment, and King are raising big piles of capital based on valuations that are at historic highs. They have cracked the nut for monetizing mobile games, and that is a big deal because there are a billion mobile devices. Meanwhile, U.S. rivals such as Zynga have stalled. And some companies, like Epic Games (maker of the Infinity Blade series) have already sold chunks of their companies to Asian investors such as Tencent. You can expect the Europeans and Asians to go shopping among their lesser-valued U.S. rivals. In Asia alone, there are 30 companies that have more than $1 billion in cash and have a history of buying game companies. The U.S. is strong in consoles, with big companies like Activision Blizzard, Electronic Arts, and Take-Two Interactive. But here’s an interesting fact. Even after the huge success of Grand Theft Auto V, which generated $1 billion in sales in its first three days, parent company Take-Two Interactive is valued at $1.56 billion, as of today’s stock market price. By contrast, Supercell is valued at $3 billion. But the Europeans and Asians shouldn’t get too confident. Indie successes can come from anywhere. 8. Rising user acquisition costs will bring victory to the brands Above: John Riccitiello, former CEO of EA The cost of acquiring a new user has risen to crazy levels. We know that because market researchers such as SuperData have found that the cost per install (CPI, which is the measure of what it costs in advertising to get a user) had risen to $2.73 for mobile games as of October. And SuperData found that the average revenue per user was $1.96. With the holidays, competition was expected to drive the cost of user acquisition even higher. And that means there will be a bloodbath with many mobile game developers spending more for users than they bring in from those users. The well-known brands and viral mobile hits can get more users without spending a lot. The unknowns will get squeezed in this kind of market. And if Disney comes in with a massive brand, it will be hard to overcome that brand, no matter how much you spend. Former EA chief executive John Riccitiello predicted the inevitable rise of brands in mobile in a recent talk. But it pays to remember. The “brands” aren’t necessarily traditional brands. They can come from anywhere, in part because mobile hits beget new mobile brands among users who are spending all of their time with mobile devices. Disney recognized this, using the Temple Run brand in mobile to launch some successful endless runner games such as Temple Run: Brave and Temple Run: Oz. Another important thing for the platform owners: platform owners can push around game developers when they’re small. But when those developers create huge brands, the power shifts. Consumers won’t allow a platform owner to push around brands that they love. 
Apple, for instance, competes with Netflix, but it tolerates Netflix’s presence on iOS devices because consumers want it. This doesn’t assure brands of victories in 2014. But the momentum of the market is in their favor. 9. Narrative will matter in games for all platforms Above: An action sequence from Xbox One title Quantum Break. If there’s a lesson from my picks for the top games of 2013 , it is that a strong story matters. The Last of Us, BioShock Infinite, and even Grand Theft Auto V had strong narrative threads that tied the gamers to the games for long hours. With consoles games, the platform was stable for seven years and that allowed developers to pour their energy into the craftsmanship of making a game, rather than learning a new technology. The result was a bunch of wonderful games with memorable scenes and strong endings. The makers of online games and mobile games should also learn this lesson. Telltale has learned how to tell great stories in bite-sized episodes, much the same way that TV shows do. They put the cliffhangers at the end so that players can’t wait for the next episode. Remedy Entertainment is pursuing the same strategy with its Quantum Break game , which will be accompanied by a TV show. With the next-generation consoles, the opportunity to build engaging games is even bigger, and the technology isn’t as hard to wrestle with at the outset. I’m not saying games should be more movie-like. But game developers will use their medium to tell memorable stories in their own way. I think that mobile game developers should pay attention to narrative too. Their platforms are younger, but they are becoming more capable. And narrative is still a great way to distinguish your game from somebody else’s. 10. Personal gaming will thrive Above: Will Wright at GamesBeat 2013. The combination of a treasure trove of data about users and the ability to create different versions of a game for different users means that game developers will be able to create personalized games for just one person. These kinds of games are possible given the kind of data that mobile devices and online sites can collect about users. We’ve seen that purely data-driven games aren’t by themselves going to be successful. Personal gaming will likely require a balance of smart game design and big data analysis to deliver games that are much more addictive than anything we’ve seen to date. Will Wright, the founder of the game startup Syntertainment , has hinted that personal gaming will be the wave of the future. He sees these kinds of games as blending reality, entertainment, and mobile location services. 2014 might show us some of the early successes in where this category can go. But in the long term, the encouraging thing is that indies, such as Wright’s new company, might very well use this targeting technology to deliver games that outdo the blockbusters, which try to target a single mass market. But the usual warning remains. Keep sacred the user’s wish for privacy, or all else fails. And please take our poll about which prediction is your favorite, and leave a comment explaining your choice below. Or use the comments to make your own predictions. 
Related articles The DeanBeat: The 10 most anticipated games of 2014 (poll) The DeanBeat: Creativity thrives in the top 10 games of the year (poll) GamesBeat's 2013 Game of the Year: The Last of Us The DeanBeat: With Valve's Steam Machines and Oculus VR's funding, gaming innovation is getting interesting The DeanBeat: Why Nintendo should have bought Ouya and other might-have-beens The DeanBeat: Will you choose the Xbox One over the PS4? Depends on who you are"