id (int64, 0–17.2k) | year (int64, 2k–2.02k) | title (string, 7–208 chars) | url (string, 20–263 chars) | text (string, 852–324k chars)
---|---|---|---|---
3,913 | 2,021 | "Why IT needs to lead the next phase of data science | VentureBeat" | "https://venturebeat.com/ai/why-it-needs-to-lead-the-next-phase-of-data-science" |
"Why IT needs to lead the next phase of data science
Most companies today have invested in data science to some degree. In the majority of cases, data science projects have tended to spring up team by team inside an organization, resulting in a disjointed approach that isn’t scalable or cost-efficient.
Think of how data science is typically introduced into a company today: Usually, a line-of-business organization that wants to make more data-driven decisions hires a data scientist to create models for its specific needs. Seeing that group’s performance improvement, another business unit decides to hire a data scientist to create its own R or Python applications. Rinse and repeat, until every functional entity within the corporation has its own siloed data scientist or data science team.
What’s more, it’s very likely that no two data scientists or teams are using the same tools. Right now, the vast majority of data science tools and packages are open source, downloadable from forums and websites. And because innovation in the data science space is moving at light speed, even a new version of the same package can cause a previously high-performing model to suddenly — and without warning — make bad predictions.
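One common mitigation, not prescribed in the article but widely used, is to pin the package versions a model was validated against and verify them again at serving time. A minimal sketch in Python (the package names and version numbers are illustrative assumptions):

```python
# Verify the runtime environment matches the versions a model was validated with.
# Package names and versions here are illustrative, not from the article.
from importlib.metadata import version, PackageNotFoundError

VALIDATED_VERSIONS = {
    "numpy": "1.19.5",
    "scikit-learn": "0.24.1",
}

def check_environment(expected=VALIDATED_VERSIONS):
    mismatches = {}
    for pkg, want in expected.items():
        try:
            got = version(pkg)
        except PackageNotFoundError:
            got = None
        if got != want:
            mismatches[pkg] = {"expected": want, "found": got}
    if mismatches:
        raise RuntimeError(f"Environment drift detected: {mismatches}")

check_environment()  # fail fast before loading or serving the model
```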
The result is a virtual “Wild West” of multiple, disconnected data science projects across the corporation into which the IT organization has no visibility.
To fix this problem, companies need to put IT in charge of creating scalable, reusable data science environments.
In the current reality, each individual data science team pulls the data they need or want from the company’s data warehouse and then replicates and manipulates it for their own purposes. To support their compute needs, they create their own “shadow” IT infrastructure that’s completely separate from the corporate IT organization. Unfortunately, these shadow IT environments place critical artifacts — including deployed models — in local environments, shared servers, or in the public cloud, which can expose your company to significant risks, including lost work when key employees leave and an inability to reproduce work to meet audit or compliance requirements.
Let’s move on from the data itself to the tools data scientists use to cleanse and manipulate data and create these powerful predictive models. Data scientists have a wide range of mostly open source tools from which to choose, and they tend to do so freely. Every data scientist or group has their favorite language, tool, and process, and each data science group creates different models. It might seem inconsequential, but this lack of standardization means there is no repeatable path to production. When a data science team engages with the IT department to put its models into production, the IT folks must reinvent the wheel every time.
The model I’ve just described is neither tenable nor sustainable. Most of all, it’s not scalable, something that’s of paramount importance over the next decade, when organizations will have hundreds of data scientists and thousands of models that are constantly learning and improving.
IT has the opportunity to assume an important leadership role in creating a data science function that can scale. By leading the charge to make data science a corporate function rather than a departmental skill, the CIO can tame the “Wild West” and provide strong governance, standards guidance, repeatable processes, and reproducibility — all things at which IT is experienced.
When IT leads the charge, data scientists gain the freedom to experiment with new tools or algorithms but in a fully governed way, so their work can be raised to the level required across the organization. A smart centralization approach based on Kubernetes, Docker, and modern microservices, for example, not only brings significant savings to IT but also opens the floodgates on the value the data science teams can bring to bear. The magic of containers allows data scientists to work with their favorite tools and experiment without fear of breaking shared systems. IT can provide data scientists the flexibility they need while standardizing a few golden containers for use across a wider audience. This golden set can include GPUs and other specialized configurations that today’s data science teams crave.
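As a sketch of what launching work on such a “golden container” could look like, here is a Kubernetes Job submitted through the official Python client. The image name, namespace, and resource limits are assumptions for illustration, not details from the article:

```python
# Launch a training job on an IT-standardized "golden" container image.
# Image name, namespace, and resource amounts are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="churn-model-training"),
    spec=client.V1JobSpec(
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="train",
                        image="registry.example.com/golden/datasci-gpu:1.4",  # IT-blessed image
                        command=["python", "train.py"],
                        resources=client.V1ResourceRequirements(
                            limits={"nvidia.com/gpu": "1", "memory": "16Gi"}
                        ),
                    )
                ],
            )
        )
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="data-science", body=job)
```

Because the image is centrally built and versioned, every job run this way is reproducible and visible to IT rather than living on a shadow server.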
A centrally managed, collaborative framework enables data scientists to work in a consistent, containerized manner so that models and their associated data can be tracked throughout their lifecycle, supporting compliance and audit requirements. Tracking data science assets, such as the underlying data, discussion threads, hardware tiers, software package versions, parameters, results, and the like helps reduce onboarding time for new data science team members. Tracking is also critical because, if or when a data scientist leaves the organization, the institutional knowledge often leaves with them. Bringing data science under the purview of IT provides the governance required to stave off this “brain drain” and make any model reproducible by anyone, at any time in the future.
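For illustration, here is how run-level tracking might look with MLflow, one open source option for this kind of lifecycle tracking. The article does not name a specific tool; the tracking server URL, parameter values, and file names are assumptions:

```python
# Track parameters, metrics, package versions, and artifacts per run so any
# model can be reproduced later, even after its author has left.
import sys
import mlflow

mlflow.set_tracking_uri("http://mlflow.internal.example.com")  # assumed central server
mlflow.set_experiment("churn-model")

with mlflow.start_run():
    mlflow.log_param("n_estimators", 200)                      # model hyperparameters
    mlflow.log_param("python_version", sys.version.split()[0]) # environment details
    mlflow.log_metric("auc", 0.91)                             # evaluation results
    mlflow.log_artifact("requirements.txt")                    # exact package versions
    mlflow.log_artifact("model.pkl")                           # the trained model itself
```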
What’s more, IT can actually help accelerate data science research by standing up systems that enable data scientists to self-serve their own needs. While data scientists get easy access to the data and compute power they need, IT retains control and is able to track usage and allocate resources to the teams and projects that need them most. It’s really a win-win.
But first CIOs must take action. Right now, the impact of our COVID-era economy is necessitating the creation of new models to confront quickly changing operating realities. So the time is right for IT to take the helm and bring some order to such a volatile environment.
Nick Elprin is CEO of Domino Data Lab.
"
|
3,914 | 2,021 | "GPT-3: We’re at the very beginning of a new app ecosystem | VentureBeat" | "https://venturebeat.com/ai/gpt-3-were-at-the-very-beginning-of-a-new-app-ecosystem" |
"GPT-3: We’re at the very beginning of a new app ecosystem
Above: An app has two GPT-3-based bots debate each other. This is one of 14 GPT-3 apps profiled by YouTuber Bakz T. Future.
The most impressive thing about OpenAI’s natural language processing (NLP) model, GPT-3, is its sheer size. With 175 billion weighted connections between words, known as parameters, the decoder-only transformer model blows its 1.5-billion-parameter predecessor, GPT-2, out of the water. This has allowed the model to generate text that is surprisingly human-like after being fed only a few examples of the task you want it to do.
Its release in 2020 dominated headlines, and people were scrambling to get on the waitlist to access its API hosted on OpenAI’s cloud service. Now, months later, as more users have gained access to the API (myself included), interesting applications and use cases have been popping up every day. For instance, Debuild.co has some really interesting demos where you can build an application by giving the program a few simple instructions in plain English.
Despite the hype, questions persist as to whether GPT-3 will be the bedrock upon which an NLP application ecosystem will rest or whether newer, stronger NLP models will knock it off its throne. As enterprises begin to imagine and engineer NLP applications, here’s what they should know about GPT-3 and its potential ecosystem.
GPT-3 and the NLP arms race
As I’ve described in the past, there are really two approaches for pre-training an NLP model: generalized and ungeneralized.
An ungeneralized approach has specific pretraining objectives that are aligned with a known use case. Basically, these models go deep in a smaller, more focused data set rather than going wide in a massive data set. An example of this is Google’s PEGASUS model, which is built specifically to enable text summarization. PEGASUS is pretrained on a data set that closely resembles its final objective. It is then fine-tuned on text summarization data sets to deliver state-of-the-art results. The benefit of the ungeneralized approach is that it can dramatically increase accuracy for specific tasks. However, it is also significantly less flexible than a generalized model and still requires a lot of training examples before it can begin achieving accuracy.
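For a concrete sense of the ungeneralized approach, here is a short summarization sketch using a published fine-tuned PEGASUS checkpoint via the Hugging Face transformers library (the checkpoint choice and input text are illustrative):

```python
# Summarize a document with Google's PEGASUS; "google/pegasus-xsum" is a
# published checkpoint fine-tuned for summarization.
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "google/pegasus-xsum"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

document = "Data in these fields is growing exponentially ..."  # any long text
batch = tokenizer(document, truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**batch, max_length=60)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```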
A generalized approach, in contrast, goes wide. This is GPT-3’s 175 billion parameters at work, and it’s essentially pretrained on the entire internet. This allows GPT-3 to execute basically any NLP task with just a handful of examples, though its accuracy is not always ideal. In fact, the OpenAI team highlights the limits of generalized pre-training and even concedes that GPT-3 has “notable weaknesses in text synthesis.” OpenAI has decided that going bigger is better when it comes to accuracy problems, with each version of the model increasing the number of parameters by orders of magnitude. Competitors have taken notice.
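To make the “handful of examples” idea concrete, here is a hedged sketch of few-shot classification against the 2021-era OpenAI Completion endpoint (the prompt text, engine choice, and key are illustrative assumptions):

```python
# Few-shot sentiment classification with the legacy openai Completion API.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = (
    "Classify the sentiment of each review.\n"
    "Review: The battery dies within an hour. Sentiment: negative\n"
    "Review: Setup took thirty seconds and it just works. Sentiment: positive\n"
    "Review: Shipping was slow but the build quality is great. Sentiment:"
)

resp = openai.Completion.create(
    engine="davinci",   # the base 175B model exposed by the 2021-era API
    prompt=prompt,
    max_tokens=3,
    temperature=0,
    stop="\n",
)
print(resp.choices[0].text.strip())  # e.g. "positive"
```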
Google researchers recently released a paper highlighting a Switch Transformer NLP model that has 1.6 trillion parameters. This is a simply ludicrous number, but it could mean we’ll see a bit of an arms race when it comes to generalized models. While these are far and away the two largest generalized models, Microsoft does have Turing-NLG at 17 billion parameters and might be looking to join the arms race as well. When you consider that it cost OpenAI almost $12 million to train GPT-3 , such an arms race could get expensive.
Promising GPT-3 applications
GPT-3’s flexibility is what makes it attractive from an application ecosystem standpoint. You can use it to do just about anything you can imagine with language. Predictably, startups have begun to explore how to use GPT-3 to power the next generation of NLP applications.
Here’s a list of interesting GPT-3 products compiled by Alex Schmitt at Cherry Ventures.
Many of these applications are broadly consumer-facing, such as the “Love Letter Generator,” but there are also more technical applications, such as the “HTML Generator.” As enterprises consider how and where they can incorporate GPT-3 into their business processes, some of the most promising early use cases are in healthcare, finance, and video meetings.
For enterprises in healthcare, financial services, and insurance, streamlining research is a huge need. Data in these fields is growing exponentially, and it’s becoming impossible to stay on top of your field in the face of this spike. NLP applications built on GPT-3 could scrape through the latest reports, papers, results, etc., and contextually summarize the key findings to save researchers time.
And as video meetings and telehealth became increasingly important during the pandemic, we’ve seen demand rise for NLP tools that can be applied to video meetings. What GPT-3 offers is the ability not just to transcribe and take notes from an individual meeting, but also to generate “too long; didn’t read” (TL;DR) summaries.
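A sketch of that TL;DR pattern, again using the era’s Completion endpoint; the transcript and decoding settings are illustrative:

```python
# The "tl;dr:" trick: append the token to a transcript and let the model
# complete the summary.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

transcript = "Alice: Let's move the launch to May. Bob: Agreed, QA needs two more weeks. ..."
resp = openai.Completion.create(
    engine="davinci",
    prompt=transcript + "\n\ntl;dr:",
    max_tokens=60,
    temperature=0.3,
)
print(resp.choices[0].text.strip())
```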
How enterprises and startups can build a moat
Despite these promising use cases, the major inhibitor to a GPT-3 application ecosystem is how easily a copycat could replicate the performance of any application developed using GPT-3’s API.
Everyone using GPT-3’s API is getting the same NLP model pre-trained on the same data, so the only differentiator is the fine-tuning data that an organization leverages to specialize the use case. The more fine-tuning data you use, the more differentiated and more sophisticated the output.
What does this mean? Larger organizations with more users or more data than their competitors will be better able to take advantage of GPT-3’s promise. GPT-3 won’t lead to disruptive startups; it will allow enterprises and large organizations to optimize their offerings due to their incumbent advantage.
What does this mean for enterprises and startups moving forward? Applications built using GPT-3’s API are just starting to scratch the surface of possible use cases, and so we haven’t yet seen an ecosystem of interesting proof-of-concepts develop. How such an ecosystem would monetize and mature is also still an open question.
Because differentiation in this context requires fine-tuning, I expect enterprises to embrace the generalization of GPT-3 for certain NLP tasks while sticking with ungeneralized models such as PEGASUS for more specific NLP tasks.
Additionally, as the number of parameters expands exponentially among the big NLP players, we could see users shifting between ecosystems depending on whoever has the lead at the moment.
Regardless of whether a GPT-3 application ecosystem matures or whether it’s superseded by another NLP model, enterprises should be excited at the relative ease with which it’s becoming possible to create highly articulated NLP models. They should explore use cases and consider how they can take advantage of their position in the market to quickly build out value-adds for their customers and their own business processes.
Dattaraj Rao is Innovation and R&D Architect at Persistent Systems and author of the book Keras to Kubernetes: The Journey of a Machine Learning Model to Production.
At Persistent Systems, he leads the AI Research Lab. He has 11 patents in machine learning and computer vision.
"
|
3,915 | 2,021 | "Connections and inspirations between science fiction, tech, and games | VentureBeat" | "https://venturebeat.com/ai/drawing-the-connections-and-inspirations-between-science-fiction-tech-and-games" |
"Connections and inspirations between science fiction, tech, and games
What happens when you put a science fiction writer, a venture capitalist, and a game journalist together? That was the premise behind our latest conversation on Oculus Venues and Zoom, in a session dubbed “Science fiction, tech, and games.” I moderated the hour-long session with computer scientist and accomplished science fiction writer Ramez Naam and Tim Chang, partner at Silicon Valley venture capital firm Mayfield Fund. Chang first told me a few years ago about a virtuous cycle between the fields, where science fiction can inspire technology. Lots of firms, for instance, are trying to create tricorder medical devices from Star Trek, and Chang himself has told entrepreneurs that if they can make the voice-driven operating system from the film Her, he will fund it.
I got the idea for the session because so much science fiction is becoming science fact. Nvidia CEO Jensen Huang has often said that “we’re living in science fiction.” Our events have harped on this theme for a few years, as things that we once thought were science fiction, like AI, have become real. (We’ll have another event, GamesBeat Summit 2021, on April 28 and 29, to hold some similar sessions.)
Before Naam started writing his Nexus trilogy (2012 to 2015) of novels, he spent 13 years at Microsoft, leading teams working on machine learning, neural networks, information retrieval, and internet-scale systems. That unique background positions him as a bridge between science fiction and technology, helping him create visions of the future tied to what is technologically possible now.
And his ideas are more relevant than ever, given the advances in AI and other digital technologies that have the potential to push us closer to a post-human future. Naam can speak to that future, as well as the possible risks that companies driving toward it may not see.
The Nexus trilogy, set in 2040, is also striking in how it foresees the political ramifications of technology. In the series, a mind-altering drug called Nexus immerses users in an augmented version of reality. The creator of Nexus is a brain-hacking civil libertarian who believes that it will free humanity and allow people to move on to a post-human future, where their minds can live on independently of their bodies.
But the U.S. government sees Nexus as an illegal drug, something that can drive a wedge between humans and enhanced humans. It wants to stamp it out and crush terrorists who plan to use it to disrupt society. Chinese researchers conduct frightening experiments that use Nexus to blend humanity and AI. Freedom-minded hackers are caught in the middle.
In addition to the Nexus series, he’s written two non-fiction books: The Infinite Resource: The Power of Ideas on a Finite Planet and More than Human: Embracing the Promise of Biological Enhancement.
Naam’s books have earned the Prometheus, Endeavour, and Philip K. Dick awards, been listed as an NPR Best Book of the Year, and been shortlisted for the Arthur C. Clarke Award.
Chang heard about Naam and welcomed him on a visit to Silicon Valley with a dinner alongside his tech and sci-fi friends. Chang, who has twice been named to the Forbes Midas List of Top Tech Investors, wanted to find startups that fit into a vision that could create a brighter outlook for humanity.
Above: Tim Chang (upper left), Ramez Naam, and Dean Takahashi.
GamesBeat: How did you two meet?
Ramez Naam: Tim reached out to me on Facebook one day and said, “I like your books, do you want to have dinner sometime here in the Bay Area?” I said yes, and he set up an awesome dinner with a number of very cool people. I brought a couple of other friends as guests, some other great science fiction authors, and we hit it off immediately. We’ve stayed in touch ever since.
Tim Chang: I had a hidden agenda. I was testing out this idea. I wanted to bring scientists and founders and some of my favorite science fiction authors together to see what kind of brainstorming and magic would ensue. It was fantastic. It ended up being three or four different science fiction authors, some pretty interesting folks in the science community. It was pretty inspiring to me.
Naam: People who worked in games, in neuroscience, and meditation. It was pretty good.
GamesBeat: Where did that instinct come from, Tim? Ramez’s works are near-future science fiction. In that sense it seems natural that you could tap someone like that for ideas about what to fund. But what else drove that instinct?
Chang: I grew up influenced by science fiction. It’s what got me into engineering, programming, and even VC. There’s a tight link between the inspiration we get from science fiction and what gets built. There are classic tales of many entrepreneurs who walked into VC offices back in the day with Neal Stephenson’s Cryptonomicon, slapping the book down and saying, “I want to build this. Fund me.” There are many of those inspirations.
A lot of our investment thesis ideas are fueled by near-term speculative fiction, whether it’s Black Mirror, or my favorite, the movie Her.
I’ve been looking for someone to build adaptive, personalized, linear, hearable OS for years. There’s a tight virtuous cycle between science fiction, how it influences founders, scientists, the research they do, and how technology innovations and research influence authors and what they write.
And when I read Nexus, I had a suspicion this guy knows business. He knows Burning Man. He knows spirituality. He knows technology and social sciences. I was spot on. We’ve been great friends ever since.
Above: Tim Chang is also friends with sci-fi writer Eliot Peper. They spoke at our 2017 GamesBeat event.
GamesBeat: It was clear he knew programming as well, right? Ramez, can you talk more about your background?
Naam: My first love was philosophy, but I realized there was no career to be had there. I liked physics, but I didn’t want to have to be in a white lab coat my entire life. But somebody had given my elementary school a Commodore VIC-20. I fell in love with programming. It’s an infinite canvas. You can do whatever the heck you want.
I went to school and got a CS degree. Out of school I was lucky enough to get hired at Microsoft. I spent 13 seminal years there. I got to see and do some amazing stuff. Also, I became a burner early on. I was fascinated by what happens inside my mind. I started reading up on neuroscience. I had some friends that were neuroscientists. I would ask them, “Why does this happen?” And at first they’d answer questions, but after a while they’d get tired of it. “Read this paper. Read this textbook.” And so to me, the whole universe–there’s a book called The Three Pound Universe. Everything we experience, everything we desire, all of human ingenuity, is in this little mass here. My first book, a non-fiction book, was about human augmentation, which I was massively into. Science and Nature, those journals, were my bathroom reading. I was fascinated by the things I saw going on, like rats that had electrodes that could use those in their motor cortex to control a robot arm and feed themselves. I wrote about that.
But honestly, nobody bought that book. It was well-reviewed, but it was a non-fiction pop science book. For quite a while I thought about fiction. When I finally did it, it’s a whole different way to reach people. You reach them on an emotional level, and you can get so much more traction to get stuff across into their minds. That’s what got me there.
GamesBeat: Tim, this whole thing he mentioned turned into the nootropic, brain enhancement or body enhancement movement. I think you’ve seen some of that in the VC world.
Chang: Oh yeah, yeah. It interweaves quite a bit with wearable technologies. First we had things tracking basic heart rate signals. Then it advanced to reading EEG. The first generation was reading things. We’re now well into the writing aspect. You have neurostim and other things that don’t just read signals, but augment them and send signals back into the body, biofeedback.
GamesBeat: Let’s talk more about the Nexus trilogy. Can you summarize it quickly for us? I know that might be hard to do, but what are some of the issues that come up?
Naam: It’s the near future, set in 2040. The technology that people are most fixated on–there’s a lot of biotech and AI and so on. But it’s this thing called Nexus, which is solely marketed as a party drug. Burners and ravers and whatever do it. But really it’s nanobots that go into your brain and attach to your neurons and broadcast and receive what they’re doing. If two of us take it and we’re in close proximity, our brains can start to sync up. We become telepathic.
The protagonists are San Francisco bay area grad students that are working on making it permanent, building an app layer and APIs on top of, proxying it across cell phones and the internet. You can telepathically communicate from anywhere. The real moral issue in the book is who gets control of this technology. It’s all in the form of a thriller. It’s a thriller with a cold war between the U.S. and China and so on. A friend of mine called it Tom Clancy meets Burning Man. I think there’s more to it than that, but if you want some shorthand that’s not a bad place to go.
GamesBeat: Tim, what occurred to you when you read this?
Chang: It struck me as a spiritual successor to the Matrix trilogy. I thought someone needed to pick it up and make a long-form TV series out of it. I’ve been bugging Ramez about this for years. He’s explained the red tape and the bureaucracy in getting fiction optioned for TV.
Naam: I explained it to Tim in venture terms. The option is the seed round. The TV show is the IPO, the big exit. Of course, to use another analogy, Netflix and Amazon and Apple TV are now adding the stack layer, easier ways to get out there. Maybe it’ll happen. There are people working on it right now, shopping it around in Hollywood, so someday, perhaps.
Above: Ramez Naam, science fiction writer and author of the Nexus series.
GamesBeat: I liked how the books spelled out the consequences of this magical invention, this brain-enhancing drug, and the relationship to AI. The politics of how different people or different countries react to the creation of this–the U.S. has this faction that views it as an illegal drug and sees the creators as terrorists who have to be stamped out, while the Chinese think it’s a great way to extend a single point of view or single consciousness across the country in order to control everyone. Yet they also fear the possibility of a posthuman AI. And in the middle of all this are these hacktivists who are being hunted down and have to move underground. The politics are an interesting, more realistic part of this science fiction.
Naam: On the politics, when I started writing it, I was quite irritated about the war on drugs and the war on terror. I was writing about a technology I liked and some spiritual aspects, things like Buddhism and meditation and how technology can amplify or help us access some things in those spheres. But that political viewpoint, the question of whether government can control it or not, was on my mind as well. That led to the exercise of thinking about what the big geopolitical conflicts will be in 2040. A cold war between the U.S. and China seemed not out of the question. That became a natural tension to set up.
Hopefully things go way better in the real world compared to the politics I wrote about in the novels. The period of the last four years, the Trump presidency, was stranger and more extreme than most things I’ve written about. Truth is often stranger than fiction.
GamesBeat: The other thing that happened that nobody expected was that, about eight years ago, AI started working, with deep learning neural networks. That was the acceleration of the AI tech that, for so long, everyone had criticized as just fantasy. It’s accelerated the pace of technological change, and now it makes some of these books, some of this science fiction, seem not so silly anymore.
Chang: My favorite phrase these days is, “you can’t make this stuff up.” It’s truly challenging. Fiction is feeling almost like it can’t keep up with reality and some of the scenarios we’re running into now. I’ll give an example. Black Mirror, I think it was episode one in season three, “Nosedive,” where one’s social reputation score affects access to rights, the ability to buy tickets. I think two months after that aired, China declared its citizen score, the black box algorithm based on loyalty to the Communist Party determining access to things like loans or buying train tickets or things like that. You’re seeing reality mirror fiction in ways that are really creepy. Reality is outpacing what we’ve even thought about in these ways.
The point I want to make is that Ramez does a masterful job of weaving in things like spiritual, political, cultural considerations into science fiction, which often has been a genre that doesn’t do that. I’d argue that it’s these aspects, as well as the financial and business model aspects, that are even more pertinent and important to especially near term speculative fiction and science fiction.
Naam: Important to the actual tech we develop and fund, too.
Above: Nexus is about brain enhancement. Will you take the red pill?
GamesBeat: You worked in machine learning for a long time. Were you surprised to see the acceleration of AI technology?
Naam: I would not say I was surprised to see that acceleration of the effectiveness of deep learning. If you look at a variety of tasks, precision and recall on a variety of things getting better over time–I didn’t predict that deep learning in particular would take off. When I left Microsoft, we were using two-layer, three-layer neural nets. Boosted decision trees were still in the running. SVMs were still in the running.
Now deep neural nets have blown up in terms of their effectiveness, in part because of the incredible computational power we have, and elastic clouds where you have the luxury of being able to throw a thousand servers at something for a short period of time. And the explosion of data. It’s becoming a world where–there’s been a lot of sci-fi written about inequality, but one of the big inequalities between businesses now is inequality of data. Whoever has the most and best data can train the best AI. We see this data advantage, the data virtuous cycles. Nobody’s written about that, but there might be a good sci-fi concept in that.
Chang: When we were brainstorming before we were joking about, imagine the next Star Wars trilogy is about data. What would the data Death Star look like? What would data-poor rebels look like versus a data-rich empire? You could take well-known tropes from before and apply them in a new context where data, algorithms, AI, the people who understand these things are the new players in those wars.
GamesBeat: How did you feel about the fact that science fiction usually depicts AI as something that can replicate humanity, and it’s evil?
Naam: Honestly, in my daily life, I don’t worry about sentient AI. It’s a category error. The stuff we’re talking about as AI, machine learning, it’s a good categorizer. I don’t know that we’re any closer at all to Her in a real sense, or HAL from 2001 becoming real. That said, and we can get into why if you want to, I do get annoyed with the depictions of AI in media. Not always, but usually they’re more negative. Her is a great example of a counter to that. It’s also very rare for a science fiction story to be a romance. But most of the time, most depictions of AI in science fiction books, and even in movies–it’s going to get you. Here’s a thing that’s super powerful and smarter than humans and it’s going to eradicate us.
To start, why would it care enough to eradicate us? Maybe there’s a different story, which is humans being abusive to their creations that they distrust. You saw me pivot a bit in that direction in the third book, Apex, where I posited AI as a very dangerous entity, but also showed its more human side, if you will. That was intentional.
GamesBeat: I think about some of the predictions in the books. In a lot of ways they predicted the difficult political discourse that we’ve seen over the last four years. You had white supremacists creating a virus that they hoped would wipe out the other peoples of the world, a government crackdown in response, and the backlash against the Nexus drug technology that followed. I’m curious about the things that you think you got right, and some things that also surprised you.
Naam: I don’t think reality is going to go the way of the Nexus novels necessarily. A lot of new technology that augments people, that makes us smarter, longer-lived, healthier in old age will be accepted. But the plot structure I used to motivate the world I wanted, where there was a hidden conflict, was one where this technology is heavily misused, and that causes fear. While I think–we don’t have this tech, so it’s not really an issue, but I do see a huge role played by fear in politics today.
The election of Donald Trump, what’s happened politically since he’s been out of office, the filter bubbles we’re self-sorting into, fear of the other is a massively negative force in American society, and globally as well. I’m curious if Tim has thoughts on that. I know you’re a spiritual guy, that you care about this stuff, about creating a better world. What do we do to build different business models and different technologies that reduce fear of the other and get people to empathize and come together in some way?
Chang: Something I’d like to see science fiction explore more is the unintended consequences of business models and powerful technologies. I’d argue we’ve built the real-life Matrix now in our social media platforms. Not from some evil AI overlord that wants to enslave us, but because we happened to pick a business model of free ad-based revenues. Often in those cases, when it’s algorithmically driven and feed-based, the only conclusion is that you have to engineer a product that addicts its users. There’s no way around it.
And so even for me, as a venture capitalist, when I realized that–I told my partners, “I’m not going to look at business plans for social, digital media products that have free ad-based revenue models. I don’t think it’s an ethical model. Not with the way we have to scale and grow these things.” I’d love to see more science fiction that explores the unintended consequences of these business models mixed with exponential technologies. It’s a great way to map out what can go wrong with it.
GamesBeat: Sadly, it seems like you’re going against the grain there in terms of what is popular to fund, and what turns out to be successful.
Chang: Capital seeks superior returns. That’s not always what’s best aligned for the planet or for people. News, as a matter of fact–if it bleeds, it leads. We have to explore business models and how they form the way we wield technologies. I’ve always said that tech isn’t good or bad. It’s a tool. What is your intention? What is the model? What is the business you wield it with?
GamesBeat: In your role at Singularity University you’re focusing more on climate change and issues related to that. Is that a pivot for you as far as your interests, or did that come out of your interest in science fiction?
Naam: Within the last 10 years–actually, the second time I quit Microsoft, my goal was to write a book about saving the world, innovating around climate and energy and so on. My agent just didn’t love the book. I got stalled and didn’t know what to do. I decided to write the science fiction novel. But for the last decade, parallel to the Nexus books, I’ve been a forecaster, a public speaker, and sometimes an investor in clean tech, in companies trying to address climate change and other global challenges, mostly in bioenergy.
There are some similarities. There’s a science fiction aspect to looking into the future and understanding where technology can go. But mostly it’s just that I see it as an enormously important problem. I’m not a proper VC the way Tim is, but I get people pitching me on deals all the time where I think, “This could make a lot of money, but I don’t see how the world would be any better.” For me, it has to be addressing a real, pressing problem for humanity, aside from having the potential to be a good investment.
Climate is a hard thing. It’s a very big challenge. It’s a challenge that is here. It’s a present danger, and it gets worse over generations. It’s hard to frame it even in science fiction. A lot of science fiction talks about climate change, but you don’t have a silver bullet. You can’t end the story with the good guys beating the bad guys or making a discovery. Whatever it is we do, it’s going to be a wide portfolio or panoply of different approaches we take to de-carbonize cars, buildings, ships, planes, whatever. It’ll take decades to deploy.
When I think about climate change, I think about it as a personal thing that I work on, trying to educate people and motivate people, motivate business people about where to put their money and why. But from a fiction perspective, I see it more as a backdrop to science fiction stories, rather than the core conflict or MacGuffin, if you will.
GamesBeat: When I think of the science fiction that has become really popular in tech circles, it’s almost unavoidable to talk about the metaverse. Neal Stephenson coined that term way back in 1992 with Snow Crash. Nexus has a lot of roots in that. I wonder what you think about whether we’ll create something like a metaverse.
Chang: Already happening. I’ve seen dozens of pitches. We have successors to Second Life, which was in some ways a successor to the Well. We’ve had virtual worlds for decades now. But now the headsets, the browsers, the devices are catching up to it such that we can have some pretty compelling experiences. These will continue to take off.
Again, though, you have the question. What’s your business model? Let’s say your business model maximizes or incentivizes session lengths. It’ll be in your interest to build a more appealing world in VR or the metaverse than the real world. Next thing you know, we’ll have VR addiction clinics. A lot of people recently got turned on to this by Ready Player One, but that wasn’t actually, to me, very future-looking. That was a love song ode to retro ‘80s geeks like us who grew up with those things. It was like a mixtape of past-looking greatest hits as opposed to something truly future-looking.
Naam: There’s interesting stuff in both AR and VR. As the hardware gets smaller, lighter, cheaper, longer battery life, we’re going to be able to do all kinds of incredible stuff. I do wonder–today it’s VR that’s taken off more than AR. But I wonder if, as these worlds get more robust, are they going to face the same challenges that social media has? When I pull out my Oculus and put it on, a lot of what’s being pitched at me is social experiences. Online, you can find a group where they’re going to amplify your opinions, whatever they are. People self-select for that, for things that amplify their existing political beliefs especially.
VR and AR have the power to be a much more emotionally impactful medium than the ones we have today. Am I going to have the Benghazi experience? An experience where I’m inside the Capitol being stormed? Are those pushed to different audiences? Is that going to tear us apart even more? How do we avoid that problem? I may be overthinking it. Maybe it’s all going to be about exploring under the sea or going to Mars and whatnot. But I do have to wonder, given the experience of the last few years, how VR will get used and whether it will bring us together or pull us apart.
GamesBeat: I just watched a film that debuted at Sundance, A Glitch in the Matrix, named for the line in the movie about how a glitch is the only way you can tell that we’re living inside a computer simulation. It goes back to a 1977 speech by Philip K. Dick, another science fiction writer, about how you would be able to tell whether or not we’re in a simulation. The film is interesting because they found people who believe this, and they’re living their lives as if, when they walk out of a room, the simulation goes off in order to save energy. The people who were in the room talking to them just disappear. It feels like the logical extension of the technologies everyone is trying to create to make a believable universe. I didn’t think we’d see the consequences, people really believing this is true.
Naam: I don’t know if it’s a major consequence, that people believe this is true. I think about more prosaic things, though. One of the first VR experiences I had was something called the Nantucket Project. They had VR headsets built around a stack of iPhones, and something like Google Cardboard. It wasn’t the highest-end system. But it was an experience of being in a Syrian refugee camp. All you could do is turn. You were on a mostly guided tour. You couldn’t walk around on your own. But you had a kid as your guide walking you through and explaining how things were, using real footage taken from this camp. It was hugely emotionally impactful. It was probably still the most emotionally impactful thing I’ve done in VR.
Maybe there is room for empathy. I think it has the chance to help people see other people as real human beings. Will it get used in other ways, though? Will it get used to create hatred or sow dissent? Hopefully empathy and love will–I do believe, despite the experience of the last four years, that overall more communication and higher bandwidth communication does create more connection and understanding. But that’s on net. There are subsets of that communication that go the other way. We’re still trying to figure out how to maximize the good.
Chang: Personally, the Planck constant is an interesting one to noodle on when it comes to the processing power of the computer simulation behind all this. But the framework I wanted to share–we tend to, whenever there’s a new platform, port things. We adapt things we knew from before to the newer platforms. What’s typically a hit, though, are things that are native to the new platform.
When the iPhone came out we ported platform games and shooters to it, but it was Angry Birds that was the first bonafide hit. It used the unique aspects of the touch interface. In VR, the first collection of content was first-person shooters and other things we’d seen before. There’s a couple funny phrases I’ve heard in tech. New forms of content on platforms are always the three Gs. It appeals to the most basic instincts: gambling, girls, and guns. Business models are similar. They appeal to base instincts. We say that something goes widely viral if it helps users get paid, made, or laid.
To Ramez’s point, could you have other models? That’s why I led an investment in a company called Tripp. I wrote a few years ago that the most obvious thing is to go launch shooter games and Netflix 360 VR. But I wanted to see not Netflix, but “net trips.” Could you induce more self-discovery, more connection and empathy with others, with the self, with nature? Arguably Tripp is building one of the first digital psychedelics, a “technodelic” if you will, designed to shift and expand consciousness. That’s a totally different use case than what a typical game you’d port to VR would focus on.
These are new categories. They’re still in the works. But for me, there has to be a way of making ourselves better through these things.
Naam: I think it’s a great effort, a great thing to fund, to try to use this technology to help people modulate their emotional state, find more peace, find more tranquility, find more access to spirituality. What better use for technology is there?
GamesBeat: We have some audience questions. One of them is, how do you pop these bubbles, where people are deluding themselves? But also, how do you pop these business models that encourage them?
Naam: The question of how you pop the bubble is easier than how you pop the business model. If you look at Facebook, the feed optimizes for engagement. It optimizes for how many likes and responses a post gets. It brings you back to people who you engage with. That ends up reinforcing either positive engagements, people you mostly agree with, or sometimes it reinforces negative engagements. If I wanted to pop those filter bubbles in Facebook, I’d start finding a way for the feed to surface some of the content that is not quite on your side. Maybe it’s not all the way on the other side. But it’s the best written, most reasonable, most accessible, most from people you trust, to try to spread some of that sharing of ideas.
There’s good data that social networks, whether it’s our personal friendships or others–the weak connections are some of the most important ones. They’re the ones that link one clustered network with another clustered network and bridge them together. I worry that people are just entrenching inside those clustered networks. Now, would that make Facebook more money? Maybe not in the first quarter it’s out there. But if I were Facebook or Twitter, I’d be worried. They’re doing well, but the platforms have become toxic enough that people are fleeing from them. Doing something to create more constructive and positive engagement below the surface, across divides, might be something that turns that around.
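Purely as an editorial illustration of the re-ranking Naam sketches, a toy feed ranker might reserve a couple of top slots for high-quality, trusted posts from outside a user’s usual cluster. Everything here, fields and scoring alike, is a hypothetical stand-in, not any platform’s real API:

```python
# Toy "bridge the bubbles" ranker: reserve top-of-feed slots for well-written,
# trusted posts from outside the user's usual cluster. All fields are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    quality: float        # e.g., writing/civility score from a classifier
    author_trust: float   # how much this user trusts the author
    in_bubble: bool       # does the author share the user's usual cluster?
    engagement: float     # the signal feeds usually optimize for

def rank_feed(posts, bridge_slots=2):
    in_bubble = sorted((p for p in posts if p.in_bubble),
                       key=lambda p: p.engagement, reverse=True)
    bridges = sorted((p for p in posts if not p.in_bubble),
                     key=lambda p: p.quality * p.author_trust, reverse=True)
    # Interleave: a couple of high-quality out-of-bubble posts near the top.
    return bridges[:bridge_slots] + in_bubble + bridges[bridge_slots:]
```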
Chang: I totally agree with Ramez on that. A couple of vectors I’ve been considering–one, back to business model. What if it’s not just free and ad-based? What if it’s tiered subscription? What if it isn’t free to make a comment? What if it cost you a penny? Would it get rid of trolls immediately? This inspires the question of, do we need to redefine what free speech is in the digital era? My conclusion is there’s freedom of speech in what you say, but we’re in an era where there’s no cost or accountability to speech, especially when you can have pseudonyms or create fake accounts. That’s when you can hijack platforms and when things get dangerous. Maybe we can alleviate those with design of algorithms, feeds, feedback mechanisms, getting rid of vanity metrics like likes and retweets. But also business model choices.
The other way we could pop these bubbles, I do think the ultimate thing people want to pay for is meaning, purpose, and self-expression. If you’re helping people bet on themselves, helping them be their aspirational selves, that’s more like a self-improvement model, and I do think that has value. This is a silly example, but back in the day, MP3s were worth nothing, while ringtones were worth something, because ringtones were a self-expression, branding moment. That was the difference between content versus self-expression. Self-help has always proven that people want to bet on themselves, making themselves better, upgraded versions of themselves, versus downgraded, addicted versions of themselves. That’s what our current feeds and business models are designed for.
GamesBeat: We promised a bit of talk about games. I’m probably the one who’s supposed to bring that in more than anyone here. It’s interesting to see games drive toward increasingly realistic depictions of humans. If you think about the progress we’ve made with AI as well, there seem to be a lot of similar moral implications for the designers of these things. We’ve talked about how social networks require some more responsible guidance or leadership. On the games side as well, we’re eventually going to have questions arise about whether these artificial beings we’re creating inside virtual worlds are “ours,” as property. Are they slaves? If they’re so real that they have their own consciousness, how wrong is that? I wonder what your own guidance might be in that realm as far as how designers should think about consequences.
Naam: I’m not all that worried about it. There’s a world of difference between what it takes to create something that has a great facade of intelligence and something that is sentient and aware. I go back to Eliza, the text-based psychotherapist made in the 1960s. All Eliza was was a simple pattern-matching script. You’d say, “I’m worried about my mother,” and it would read those keywords and say, “Tell me more about your mother.” “My mother is sick with diabetes.” “How do you feel about your mother’s diabetes?” That’s all it took to anthropomorphize it through a keyboard.
Even an incredible simulacrum, and there’s a demand for that in games–obviously we want great NPCs to interact with and strong storylines. We’d all love to have the digital personal assistant that organizes our lives. Doing that does not require you to solve the problem of creating consciousness. It’s much easier, and it doesn’t raise any ethical issues.
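To illustrate how thin the facade Naam describes can be, here is a minimal Eliza-style responder: a few keyword rules and canned templates with no understanding behind them (the rules themselves are illustrative, not Eliza’s original script):

```python
# A minimal Eliza-style responder: keyword rules plus canned reply templates.
import random
import re

RULES = [
    (re.compile(r"\bmy (mother|father|wife|husband)\b", re.I),
     ["Tell me more about your {0}.", "How do you feel about your {0}?"]),
    (re.compile(r"\bi(?:'| a)m worried about (.+)", re.I),
     ["Why does {0} worry you?"]),
]

def respond(text):
    for pattern, templates in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(templates).format(*match.groups())
    return "Please, go on."  # default when no keyword matches

print(respond("I'm worried about my mother"))  # e.g. "Tell me more about your mother."
```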
Chang: I get excited a lot about this topic. I wish I could do a poll of how many in the audience ever developed genuine emotions or feelings for an NPC character. I say this because I just finished about 100 hours in Cyberpunk 2077, and I noticed genuine affection for some of the love interest characters. This made me think about, could games be a vehicle for developing communications and relationship skills with others? Imagine dialogue trees mixed with relationship coaching skills and responsive dialogue. We’re going to have semantic processing that can do that. You could improve your relationship with your family or your wife someday, potentially, through these kinds of things. It goes back to the Her analogy, from the movie, where you full-on fall in love with the operating system because it knows you better than anyone else. That could be a possibility.
My other challenge for game designers, the ultimate opportunity is, what if you replace all the end bosses in video games with deeply spiritual reflections? The game learns you, and it turns out the big boss at the end is your own shadow side, your core wounds, your unresolved traumas. If you saw the movie Soul, from Pixar, those lost souls were just you, wrapped in the shadows of your projections and traumas and limitations. Imagine a game as a spiritual experience, or self-transformation vehicle. That’s what I’d love to see.
Naam: A friend of mine is at a startup that makes software for kids with autism. They do have gamified experiences that help them understand things like, how do you read emotions? What is this person doing? What’s the right response in this situation? It seems quite effective. But the idea of just using it for ordinary adults or teenagers or whoever, anyone who wants to learn socialization skills and so on, is a good one.
Chang: Independent game developers in the art world have been doing this for a while. You have games like Papers, Please that teach you empathy for someone like a customs agent. They’re more fringe. But you could imagine these learnings and mechanics making their way into triple-A games.
GamesBeat: I read Ready Player Two, and one of the more interesting things I thought he got right in there was that streamers will be the ones posting their experiences of what it’s like to do something in a hyper-realistic virtual reality. They’ll publish these things on the equivalent of a YouTube channel, and you just step into these files and go live somebody else’s experience. You could completely understand what it’s like to be them, whether it’s a transgender person or somebody living in a country far away. That was a bit of a hopeful message in that these technologies can be so immersive as to communicate to you what it’s like to be someone else, and that could increase empathy in the world, because it’s so immersive.
Chang: Do you remember a 1995 movie called Strange Days? Ralph Fiennes had this SQUID device that recorded emotional experiences. It was the ultimate spectate mode, because you could feel the emotions and sensations. Even Cyberpunk 2077 had the notion of braindance, which built on that as well. It’s back to the notion of VR as an empathy engine. There will be good and there will be bad. You can imagine snuff videos and other horrifying things, but there could be beautiful things, like witnessing the birth of your baby, or falling in love. The whole range could be possible.
Naam: Maybe some of that has to do with education as well. People, when they are selecting what they’re going to go watch–of course there’s a lot of people who watch amazing documentaries and educate themselves and so on, but a lot more go to see action, adventure, comedy, romance, escapism. In schooling, maybe one of the things we need to have is, live a day in someone else’s shoes. I think that’s an amazing tool for VR or AR to help people learn. What’s it like to be of a different race, a different gender? What’s it like to live in Sri Lanka or Myanmar? To build that degree of empathy. You can combine that with things like real time speech translation and other things like that. I can imagine situations where you really increase the level of empathy.
But I’m no longer quite as sure that it will all happen by itself. We have to make the intentional choice to use technology in these ways and help drive that.
Chang: Somebody told me that TikTok is kind of doing that. I have a lot of problems with TikTok and its algorithmic content, but you get authentic slices of life, five to 30 seconds at a time, from all over the world. A kid in the streets in Myanmar, something like that. That possibly is happening in these lower-res formats, which is something I hadn’t expected.
GamesBeat: There’s a question for Ramez. What’s the distinction between psychedelics or drugs and technologies like virtual reality? Nexus is a drug, but it has the effect of something like taking AR glasses and putting them inside your head.
Naam: Tim should answer this one too. But they’re complementary technologies. Some thinking about psychedelic experience–the value of putting anything in your brain whatsoever, whether it’s the right media that you’re consuming, whether it’s a book, whether it’s Samuel Delany’s book Dhalgren (Delany is a master of science fiction)–that was like a 1,200-page psychedelic trip. It really was. Or it’s the right experience in nature or whatnot. Now, when you have AR and VR, they can do some amazing things, including psychedelic experiences. But they can probably be amplified or further amplify things that can be done with substances or meditation or other practices as well.
There’s no bright line, but there are some things that are easier to do in different ways. It’s easier to give someone a very specific visual experience than a screen than it is with any drug you’re going to put in your brain. It’s probably easier to induce other things with something, whether it’s brain stimulus or molecules that interact with your brain as well. The two together might be one plus one equals three.
Chang: You could start to play with people's perception and the feedback to different senses. You could start to mix them together, simulating things like synaesthesia in virtual domains. Could you start to simulate what death is like, as you start to lose your senses? I saw one art project in VR that was used to give people a sense of what it's like to be a paranoid schizophrenic, by playing with the senses, overlays of things, phantom voices. You can imagine the empathy that can create for different neural states.
GamesBeat: Tim, I noticed you're on the board of a non-profit called Reimagine Death. The topic comes up frequently in books, the concept of digital immortality. Do you think about that and the opportunities there?

Chang: Wired magazine had an article about somebody who created a me-bot for his dying father. It was an evolution of the Eliza experiment. But you can imagine how, if semantic and natural language processing gets better and better–the corpus of data you produce in all your emails, all your text messages, is probably enough training data to create a simulacrum that actually sounds like you. It knows your favorite phrases, words, sentences to use, response patterns. You can imagine your digital tombstone someday is a me-bot of you, which could increasingly answer interactions and questions, with better fidelity, from your great-grandkids.
GamesBeat: You just described what the founder of Replika did with text messages.
She had a friend who died in a car accident. She gathered every text message that he had ever written to friends and family and all that, and tried to reproduce a digital companion that’s now the friend of millions of Replika users.
Chang: It’s even a field now, called death tech. There’s going to be subscription business models based on preserving your legacy vault. As programming and machine learning gets better, I will re-create you with greater fidelity as time goes on, based on everything in your legacy vault. We’ll see some cool things. Going back to science fiction, the remake of Superman in Man of Steel had his father as this kind of bot that was coaching him when he went back to the crypt in his spaceship. There are good examples in sci-fi.
GamesBeat: Nearing the end here, is there anything you'd like to leave with our listeners here about what you think will happen in the future?

Naam: I'm super excited. I've sounded some notes of caution here, but I do think that in the big picture, over the long arc of human history, any time that we've seen new information technologies, there's been some bad stuff that results, but overall it empowers people. We're struggling with catching up with the consequences of filter bubbles and social media and how society deals with that. But I do think that the ability to connect is still a power for good. AR and VR are in their total infancy. They're just baby technologies right now when it comes to adoption and wearability and fidelity. As we get there, like every technology, there will be misuses, but there are massive opportunities for good.
Game developers have the potential to use these technologies to not only entertain, but entertain and enlighten, entertain and empathize. They can help make the people who play their games more fulfilled and better people as well.
Chang: A great example is Jenova Chen; his new game Sky has a lot of this, the theme of enlightenment. What I'd leave with the audience is, just like science fiction authors have a responsibility to create not just black mirrors, but white mirrors, it's the same with you. Create experiences that help people become better versions of themselves. Not just killing things, but leveling up and self-transformation. That would be exciting.
Really good science fiction to turn to for these–we're going to see a lot of cli-fi, climate fiction. Kim Stanley Robinson's The Ministry for the Future has been recommended to me many times. That's a great one. I actually think back to the notion of exploring business and politics. There will be new categories like de-fi, decentralized fiction, or financial fiction, that explore the worlds of cryptocurrency and the craziness that happens there as business models get turned upside down. There will be a lot of aspects there that can make their way into game themes.
GamesBeat: I’ve had interesting experiences thinking about things from way back, like the quantified self-movement that inspired a lot of these devices for measuring things like your heart rate and all that. I had a chance last year to wear one of those Dexcom glucose monitors , to learn and feel what it’s like to be a person with diabetes that always has to pay attention to how much sugar is in their blood. It was eye-opening. Just from eating pasta, seeing how much my sugar level would spike. Or while I was jogging, how much it would dip.
It was interesting to see not only how this technology has moved into a stage where it can automate an insulin pump for you, which really helps people who have the disease, but also how it can help people realize: What kind of mood am I in when I have all this sugar in my blood, versus when I'm going toward a low point? How does this affect my brain and my actions? Technology, to me, is on the cusp of delivering some really interesting capabilities for everyone. All this obsession with measuring everything, the quantified self, is at our fingertips. It's going to be an interesting next few years as those kinds of technologies develop.
"
|
3,916 | 2,021 |
"Biden should double down on Trump’s policy of promoting AI within government | VentureBeat"
|
"https://venturebeat.com/ai/biden-should-double-down-on-trumps-policy-of-promoting-ai-within-government"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Biden should double down on Trump’s policy of promoting AI within government Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
President Biden is signing a flurry of executive actions during his first few weeks in office, many of them overturning Trump policies. Trump’s recent executive order on promoting the use of artificial intelligence in government agencies, however, presents a rare bipartisan sentiment and promises to improve government policies and services across the board. The current administration should not only maintain this policy, it should make it a priority.
AI is fundamentally changing the way people engage with technology, and the advantages of AI — enhanced problem solving and pattern detection, autonomously operated machines, and so on — extend beyond the private sector. AI can help governments produce informed and effective policy, optimize processes, improve quality of services, and engage the public.
The pandemic has made this even clearer. Many government agencies rely on AI to monitor and treat COVID. The Pentagon is using AI to predict water, medicine, and supply shortages. AI is even helping the Department of Energy identify molecules to test in the lab as potential COVID treatments. The list goes on.
AI also has its uses outside of the pandemic. For example, Pittsburgh used AI to cut down on traffic, reducing travel times by 25% and cutting emissions. Chicago is even using AI to prevent crimes by predicting when and where they are likely to happen (no Precogs needed). Unfortunately, pre-COVID examples like these are scarce.
This is why the previous administration's executive order is important. It lays out a plan for establishing government-wide guidance (or standards) for the adoption of AI within federal agencies. This represents a break from traditional policy, which lumped AI in with other technologies, relying on old standards for a new, invariably different technology.
The Biden administration should prioritize this policy because standards set the tone for agency staff. Standards not only indicate that the use of certain tools is permitted or encouraged, but they also provide the roadmaps necessary for staff to feel comfortable adopting the technology — and to do so effectively. Creating a common set of standards across all agencies will improve information sharing between agencies, too. This will conceivably increase agency effectiveness and efficiency, as well as the quality of the standards.
Consider a government-wide AI standard for public documents. This standard might require agencies to make all public documents machine readable and include tags, allowing users to quickly and easily sort by topic, search for keywords, or aggregate data from related documents. This would open up large swaths of data for use by the public and private sectors. For example, business owners could quickly identify regulations that apply to their businesses without having to read the nearly 200,000 pages of regulations currently on the books. And that’s just one of countless possibilities for new standards.
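As a hedged illustration of what such a standard could unlock, here is a minimal Python sketch. The record schema, tags, and sample regulations are invented for this example; no agency publishes exactly this format.

```python
# Hypothetical machine-readable regulation records with topic tags.
# The schema and data are invented for illustration.
regulations = [
    {"id": "REG-0001", "title": "Commercial kitchen ventilation standards",
     "tags": ["food-service", "safety"]},
    {"id": "REG-0002", "title": "Quarterly payroll tax filing requirements",
     "tags": ["small-business", "tax"]},
    {"id": "REG-0003", "title": "Hazardous material transport manifests",
     "tags": ["logistics", "safety"]},
]

def find_by_tag(records, tag):
    """Return every regulation carrying the given topic tag."""
    return [r for r in records if tag in r["tags"]]

# A restaurant owner pulls only what applies to them, instead of
# reading through ~200,000 pages of regulations.
for reg in find_by_tag(regulations, "food-service"):
    print(reg["id"], "-", reg["title"])
```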
Another reason to prioritize this policy is that it promotes goals outlined in the Biden administration's executive actions. The goal of one such action is "to make evidence-based decisions guided by the best available science and data." Today, AI is often the best scientific tool and provides the best data.
This Biden action also instructs agencies to “expand open and secure access to Federal data,” including a mandate to make collected data available in machine-readable format. This is already one step toward common AI standards across agencies. The administration could even take this a step further by extending it to all documents.
In another executive order , the Biden administration states its goal to produce “a set of recommendations for improving and modernizing regulatory review.” With hundreds of thousands of regulatory pages on the books, one way to modernize review would be to begin with outdated or duplicative regulations, which agencies can identify using AI (a recommendation from the previous administration’s AI policy).
AI also presents new ways to evaluate the success and broader effects of existing regulations. Agencies could use this knowledge to inform a modernized regulatory review process and develop more effective regulations.
Expanding the use of AI in government is bipartisan policy — while people on either side of the aisle may prefer more or fewer policies in any specific area, everyone wants policies to be more informed and better constructed. However, there is a danger that the policy on AI standards for government gets lumped in with the Trump administration’s policy on AI regulations and standards for the private sector.
The private sector AI policy is not as bipartisan and has its critics.
In fact, we responded to the original plan back in 2019, arguing that instead of developing standards for the private sector, the administration should turn its focus to government-wide standards for federal agencies. This is just what the previous administration did with the new AI policy.
Developing government-wide standards specific to AI will promote more and better use of AI within government agencies, leading to higher-quality policies and services. Those in the new administration should not let the origin of this policy blind them to its benefits and bipartisan nature. The Biden administration can use this opportunity to prioritize an effort that both parties can agree on — an effort that will expand the scientific grounding of government policies. That’s an effort that will mean a more effective government, as Biden would say, “for all Americans.” Patrick McLaughlin is the director of policy analytics at the Mercatus Center at George Mason University, where he created and leads the RegData and QuantGov projects, deploying machine learning and other data-science tools to quantify governance indicators found in federal and state regulations and other policy documents.
Tyler Richards is the research coordinator of policy analytics at the Mercatus Center.
"
|
3,917 | 2,021 |
"AI Weekly: Biden calls for $37 billion to address chip shortage | VentureBeat"
|
"https://venturebeat.com/ai/ai-weekly-biden-calls-for-37-billion-to-address-chip-shortage"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: Biden calls for $37 billion to address chip shortage Share on Facebook Share on X Share on LinkedIn U.S. President Joe Biden holds a semiconductor during his remarks before signing an executive order on the economy in the State Dining Room of the White House on February 24, 2021 in Washington, DC.
Shortly after a meeting with members of Congress on Wednesday, President Joe Biden signed an executive order that launches a review of supply chain vulnerabilities in the United States. COVID-19 made evident gaps in the U.S. supply chain in medical equipment like face masks and ventilators, but in a ceremony carried live by TV news networks, Biden held up a chip, calling it the "21st century horseshoe nail." AI research has received military funding from the outset, and government organizations like DARPA continue to fund AI startups, but a global chip supply shortage caused by COVID-19 has hindered the progress of numerous industries. During his remarks, Biden acknowledged that the semiconductor chip shortage impacts products like cars, smartphones, and medical diagnostic equipment. Earlier this month, Ford said the shortage would reduce production by up to 20% in Q1 2021.
Smartphone production is also expected to decline as a result of the chip shortage, and earlier this month, business executives from AMD, Intel, Nvidia, and Qualcomm sent a letter to Biden urging support for the CHIPS for America Act and stating that a chip shortage could interrupt progress for emerging technology areas like AI, 5G, and quantum computing. CHIPS stands for Creating Helpful Incentives to Produce Semiconductors. That bill was introduced in Congress in summer 2020 and called for $22 billion in tax credits and research and development funding. The American Foundries Act, also introduced in Congress last summer, called for $25 billion.

As part of the executive order signing ceremony Wednesday, Biden pledged support for $37 billion over an unspecified period described as "short term" and promised to work with ally nations to address the chip bottleneck. The executive order will also review key minerals and materials, pharmaceuticals, and the kinds of batteries used in electric vehicles.
“We need to prevent the supply chain crisis from hitting in the first place. And in some cases, building resilience will mean increasing our production of certain types of elements here at home. In others, it’ll mean working more closely with our trusted friends and partners, nations that share our values, so that our supply chains can’t be used against us as leverage,” Biden said.
A 2019 U.S. Air Force report put the urgency of the matter in context.
That report finds that “90% of all high-volume, leading-edge [semiconductor] production will soon be based in Taiwan, China, and South Korea.” The Semiconductor Industry Association (SIA) finds that 12% of global semiconductor production takes place in the U.S. today.
Analysts who spoke to VentureBeat found a number of factors contributing to the current chip shortage.
Kevin Krewell is a principal analyst at Tirias Research. He attributes the chip shortage to an initial slump followed by an unexpected surge in demand, too few advanced semiconductor manufacturers, the difficulty of scaling more complex semiconductor processes, and the long lead time on building new semiconductor manufacturing facilities, or "fabs." Because Intel and Samsung have been slow to get advanced process nodes out in a timely fashion, more pressure has fallen on TSMC to make chips, but he expects shortages will be addressed as more capacity comes online and demand returns to more predictable levels.
“The $37 billion figure is a small start, but it is a start,” he said. Building a single semiconductor manufacturing facility can cost tens of billions of dollars.
Linley Group senior analyst Mike Demler said fourth-quarter growth in car sales caught auto manufacturers off guard and that high demand for consumer electronics during the pandemic rippled through other industries. He also said the U.S. semiconductor industry wants to use the shortage to increase domestic semiconductor-manufacturing capacity.
“The semiconductor industry has thrived because of the global supply chain. Greater investment in R&D could help restore US technological leadership in manufacturing technology, but it would take many years to shift the ecosystem,” Demler said.
IDC analyst Mario Morales said the chip shortage is real but that some businesses may be using it to distract from deeper underlying business problems or poor planning. For example, Ford may be reducing inventory due to a lack of chips, but Toyota has a stockpile.
“I think some of this is just not very good business continuity planning, and that some of this is a reaction to that. And others I think they’re using this as an excuse, because there is some underperformance from some of these vendors,” he said.
When discussing what caused the chip shortage, analysts VentureBeat interviewed talked primarily about COVID-19 and made virtually no mention of China, but you could potentially say the opposite about national security interests in the U.S., the other driver of interest in domestic chip production. The final report from the National Security Commission on AI is due out next week. That group was formed by Congress a few years ago and is made up of some of the most influential AI and business leaders in the United States today, like soon-to-be Amazon CEO Andy Jassy, Google Cloud AI chief Andrew Moore, and former Google CEO Eric Schmidt.
The report calls for the United States to remain “two generations ahead of China,” with $12 billion over the next five years for research, development, and infrastructure. It also supports creation of a national microelectronics research strategy like the kind espoused in the American Foundries Act. The 2021 National Defense Authorization Act created a committee to develop a national microelectronic research strategy.
The report calls for a 40% refundable tax credit as well. The CHIPS for America Act also calls for hefty tax credits for semiconductor manufacturers through 2027.
“The dependency of the United States on semiconductor imports, particularly from Taiwan, creates a strategic vulnerability for both its economy and military to adverse foreign government action, natural disaster, and other events that can disrupt the supply chains for electronics,” the draft final report reads. “If a potential adversary bests the United States in semiconductors, it could gain the upper hand in every domain of warfare.” The draft final report echoes calls from the National Security Commission on Artificial Intelligence (NSCAI) for more public-private partnerships around semiconductors.
In testimony before the House Budget committee about how AI will change the economy, NSCAI commissioner and Intelligence Advanced Research Projects Activity (IARPA) director Dr. Jason Matheny said, “It will be very difficult for China to match us if we play our cards right.” “We shouldn’t rest on our laurels, but if we pursue policies that strengthen our semiconductor industry while also placing the appropriate controls on the manufacturing equipment that China doesn’t have and that China currently doesn’t have the ability to produce itself and is probably a decade away from being able to produce itself, we’ll be in a very strong position,” he said.
A Bloomberg analysis found that Chinese spending on computer chip production equipment jumped 20% in 2020 compared to 2019.
Reuters has recorded Chinese chip imports above $300 billion for the past three years.
Advanced semiconductor manufacturing facilities can be more expensive than modern-day aircraft carriers, and fabs are only part of the equation. IDC's Morales agreed with Krewell that $37 billion is a start, but said that becoming a leader in manufacturing could take a decade of investment not just in semiconductor manufacturing plants but also in design, IP, and infrastructure.
“The goal should be to collaborate a lot more with other regions that I would say are more neutral,” Morales said. He added that, based on conversations with manufacturers, he expects an end to chip supply chain shortage issues by Q2 or Q3 2021.
We’ll have to wait a few months to see what the review ordered by the Biden administration prescribes to improve resilience when it comes to chip production, but it seems clear that $37 billion may only be the start.
For AI coverage, send news tips to Khari Johnson, Kyle Wiggers, and AI editor Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.
Thanks for reading,
Khari Johnson
Senior AI Staff Writer
"
|
3,918 | 2,021 |
"5 steps to creating a responsible AI Center of Excellence | VentureBeat"
|
"https://venturebeat.com/ai/5-steps-to-creating-a-responsible-ai-center-of-excellence"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest 5 steps to creating a responsible AI Center of Excellence Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
To practice trustworthy or responsible AI (AI that is truly fair, explainable, accountable, and robust), a number of organizations are creating in-house centers of excellence. These are groups of trustworthy AI stewards from across the business that can understand, anticipate, and mitigate any potential problems. The intent is not necessarily to create subject matter experts but rather a pool of ambassadors who act as point people.
Here, I’ll walk your through a set of best practices for establishing an effective center of excellence in your own organization. Any larger company should have such a function in place.
1. Deliberately connect groundswells
To form a Center of Excellence, notice groundswells of interest in AI and AI ethics in your organization and conjoin them into one space to share information. Consider creating a Slack channel or some other curated online community for the various cross-functional teams to share thoughts, ideas, and research on the subject. These groups could come from various geographies, various disciplines, or both. For example, your organization may have a number of minority groups with a vested interest in AI and ethics that could share their viewpoints with data scientists who are configuring tools to help mine for bias. Or perhaps you have a group of designers trying to infuse ethics into design thinking who could work directly with those in the organization who are vetting governance.
2. Flatten hierarchy
This group has more power and influence as a coalition of changemakers. There should be a rotating leadership model within an AI Center of Excellence; everyone's ideas count — everyone is welcome to share and to co-lead. A rule of engagement is that everyone has each other's back.
3. Source your force
Begin to source your AI ambassadors from this Center of Excellence — put out a call to arms. Your ambassadors will ultimately help to identify tactics for operationalizing your trustworthy AI principles including but not limited to: A) Explaining to developers what an AI lifecycle is.
The AI lifecycle includes a variety of roles, performed by people with different specialized skills and knowledge who collectively produce an AI service. Each role contributes in a unique way, using different tools. A key requirement for enabling AI governance is the ability to collect model facts throughout the AI lifecycle. This set of facts can be used to create a fact sheet for the model or service. (A fact sheet is a collection of relevant information about the creation and deployment of an AI model or service.) Facts could range from information about the purpose and criticality of the model to measured characteristics of the dataset, model, or service, to actions taken during the creation and deployment process of the model or service.
Here is an example of a fact sheet that represents a text sentiment classifier (an AI model that determines which emotions are being exhibited in text). Think of a fact sheet as the basis for what could be considered a "nutrition label" for AI. Much like you would pick up a box of cereal in a grocery store to check for sugar content, you might do the same when choosing a loan provider, given which AI it uses to determine the interest rate on your loan. (A minimal code sketch of such a fact sheet appears after this list.)
B) Introducing ethics into design thinking for data scientists, coders, and AI engineers. If your organization does not currently use design thinking, then this is an important foundation to introduce, and these exercises are critical to build into design processes. Questions to answer in this exercise include: How do we look beyond the primary purpose of our product to forecast its effects? Are there any tertiary effects that are beneficial or should be prevented? How does the product affect single users? How does it affect communities or organizations? What are tangible mechanisms to prevent negative outcomes? How will we prioritize the preventative implementations (mechanisms) in our sprints or roadmap? Can any of our implementations prevent other negative outcomes identified?
D) Advocating for dev teams to source separate “adversarial” teams to poke holes in assumptions made by coders, ultimately to determine unintended consequences of decisions (aka ‘Red Team vs Blue Team ‘ as described by Kathy Baxter of Salesforce).
E) Enforcing truly diverse and inclusive teams.
F) Teaching cognitive and hidden bias and its very real effect on data.
G) Identifying, building, and collaborating with an AI ethics board.
H) Introducing tools and AI engineering practices to help the organization mine for bias in data and promote explainability, accountability, and robustness.
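As referenced above in item A, here is a minimal, hypothetical sketch of what a lifecycle fact sheet might look like in code. The field names, the loan-model example, and the disparate-impact check are my own illustration under assumed conventions, not IBM's actual FactSheets schema.

```python
# Hypothetical sketch of a model "fact sheet" (nutrition label) plus one
# simple fairness check. Field names and numbers are invented.
from dataclasses import dataclass, field

@dataclass
class FactSheet:
    model_name: str
    purpose: str
    training_data: str
    facts: dict = field(default_factory=dict)  # filled in across the lifecycle

    def record(self, key: str, value) -> None:
        """Append a fact captured at some stage of the AI lifecycle."""
        self.facts[key] = value

def disparate_impact(favorable_rate_group_a: float,
                     favorable_rate_group_b: float) -> float:
    """Ratio of favorable-outcome rates; values far below 1.0 flag bias."""
    return favorable_rate_group_a / favorable_rate_group_b

sheet = FactSheet(
    model_name="loan-rate-classifier",
    purpose="Suggest interest rates for consumer loan applications",
    training_data="2015-2020 anonymized loan outcomes (hypothetical)",
)
sheet.record("accuracy", 0.91)
sheet.record("disparate_impact", disparate_impact(0.38, 0.52))  # about 0.73
print(sheet)
```

Collecting facts like these at each stage, rather than after deployment, is what makes the fact sheet useful as a governance artifact.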
These AI ambassadors should be excellent, compelling storytellers who can help build the narrative as to why people should care about ethical AI practices.
4. Begin teaching trustworthy AI training at scale
This should be a priority. Curate trustworthy AI learning modules for every individual in the workforce, customized in breadth and depth based on various archetypes. One good example I've heard of on this front is Alka Patel, head of AI ethics policy at the Joint Artificial Intelligence Center (JAIC). She has been leading an expansive program promoting AI and data literacy and, per this DoD blog, has incorporated AI ethics training into both the JAIC's DoD Workforce Education Strategy and a pilot education program for acquisition and product capability managers. Patel has also modified procurement processes to make sure they comply with responsible AI principles and has worked with acquisition partners on responsible AI strategy.
5. Work across uncommon stakeholders
Your AI ambassadors will work across silos to ensure that they bring new stakeholders to the table, including those whose work is dedicated to diversity and inclusivity, HR, data science, and legal counsel. These people may NOT be used to working together! How often are CDIOs invited to work alongside a team of data scientists? But that is exactly the goal here.
Granted, if you are a small shop, your force may be only a handful of people. There are certainly similar steps you can take to ensure you are a steward of trustworthy AI too. Ensuring that your team is as diverse and inclusive as possible is a great start. Have your design and dev team incorporate best practices into their day-to-day activities. Publish governance that details what standards your company adheres to with respect to trustworthy AI.
By adopting these best practices, you can help your organization establish a collective mindset that recognizes ethics as an enabler, not an inhibitor. Ethics is not an extra step or hurdle to overcome when adopting and scaling AI but a mission-critical requirement for organizations. You will also increase trustworthy-AI literacy across the organization.
As Francesca Rossi, IBM's AI and ethics leader, stated: "Overall, only a multi-dimensional and multi-stakeholder approach can truly address AI bias by defining a values-driven approach, where values such as fairness, transparency, and trust are the center of creation and decision-making around AI."

Phaedra Boinodiris, FRSA, is an executive consultant on the Trust in AI team at IBM and is currently pursuing her PhD in AI and ethics. She has focused on inclusion in technology since 1999. She is also a member of the Cognitive World Think Tank on enterprise AI.
"
|
3,919 | 2,021 |
"Exabeam joins cybersecurity ecosystem revolving around Snowflake | VentureBeat"
|
"https://venturebeat.com/security/exabeam-joins-cybersecurity-ecosystem-revolving-around-snowflake"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Analysis Exabeam joins cybersecurity ecosystem revolving around Snowflake Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Exabeam has become the latest vendor to join a security ecosystem that is starting to emerge around the Snowflake data services platform residing on the Amazon Web Services (AWS) cloud.
The provider of a security information and event management (SIEM) platform revealed this week that it will now work with customers that have made Snowflake their primary repository for storing and analyzing data.
That approach eliminates the need for customers to set up a separate data repository to analyze their security data, Exabeam senior security strategist Samantha Humphries said. “It’s the budget-wise choice,” she said. “The data is already there.” Other vendors in the nascent security ecosystem emerging around Snowflake include Hunters.ai, provider of a platform that employs machine learning algorithms to hunt for potential cybersecurity threats within an IT environment, and Lacework, which provides a platform for automating cloud security and compliance.
Snowflake is working to build alliances with security vendors that will deploy applications on top of its cloud data services, Snowflake head of cybersecurity strategy Omer Singer said. “We’re looking for a number of partners that will play different roles.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! As part of this alliance, Exabeam has also added a Cloud Connector for Snowflake to its software-as-a-service (SaaS) platform. Cybersecurity teams can also use this to monitor audit logs within Snowflake to detect anomalous account behaviors within the platform itself. Exabeam can provide continuous, real-time mapping of logs stored within Snowflake, along with surfacing the attributes of all activity and behavior associated with specific users and devices.
Historically, security analysts have needed to collect their own data. However, as organizations invest in data warehouses and associated analytics applications on cloud platforms, the need for a security team to build, deploy, and manage a separate data repository is declining. One of the best ways to maximize an investment in a data warehouse is to make it accessible to as many applications as possible. As the amount of data stored in Snowflake continues to grow, the forces of data gravity start to exert more influence over where applications should be deployed.
Snowflake makes it possible to use standard SQL to launch queries that might surface anomalies indicative of a data breach. Security analysts will be able to collaborate with database administrators and data science teams that use SQL as the lingua franca for interrogating data, Singer noted. Longer-term, Snowflake will also provide a platform to more easily access the data that would be needed to create an AI model to automate a security process, Singer added.
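As a sketch of what that looks like in practice: the login_events table, its columns, and the threshold below are hypothetical, though the snowflake-connector-python API shown is real. An analyst comfortable with SQL might hunt for brute-force login activity like this:

```python
# Sketch of a SQL-based anomaly hunt over login logs stored in Snowflake.
# The login_events table, columns, and threshold are invented for
# illustration; snowflake.connector and its connect()/cursor() API are real.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # placeholder credentials
    user="analyst",
    password="...",
    warehouse="SECURITY_WH",
    database="SECURITY_DB",
)

FAILED_LOGIN_HUNT = """
    SELECT user_name,
           COUNT(*) AS failed_attempts,
           COUNT(DISTINCT source_ip) AS distinct_ips
    FROM login_events
    WHERE success = FALSE
      AND event_time >= DATEADD(hour, -24, CURRENT_TIMESTAMP())
    GROUP BY user_name
    HAVING COUNT(*) > 50          -- arbitrary threshold for illustration
    ORDER BY failed_attempts DESC
"""

for user_name, failed, ips in conn.cursor().execute(FAILED_LOGIN_HUNT):
    print(f"{user_name}: {failed} failures from {ips} IPs in 24h")
```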
Most IT organizations are trying to navigate two competing agendas. As IT continues to evolve, the amount of data residing on a much wider range of platforms that needs to be secured is increasing exponentially. At the same time, cybersecurity teams, along with the rest of the organization, are under extreme pressure to reduce costs in the wake of the economic downturn brought on by the COVID-19 pandemic.
Leveraging platforms such as Snowflake to analyze data using standard SQL tools is one way to reduce costs while gaining access to a larger pool of data to analyze. The average SIEM platform running on-premises in an enterprise is usually limited to gigabytes of data. It’s not uncommon for cybersecurity teams to have to choose between different types of data to collect and analyze because they don’t have the capacity to store it all, Singer noted.
Being forced to make that choice runs counter to the best interests of cybersecurity, an issue Singer said is obviated by a Snowflake cloud platform that can make petabytes of data readily available to cybersecurity teams working from home or in the office.
It’s hard to say how large a cybersecurity ecosystem around Snowflake might become. There are plenty of options when it comes to cloud data services. However, the amount of time cybersecurity teams spend collecting data versus analyzing it should be sharply reduced in the months and years ahead.
"
|
3,920 | 2,021 |
"The DeanBeat: Our 360-degree view of the metaverse ecosystem | VentureBeat"
|
"https://venturebeat.com/games/the-deanbeat-our-360-degree-view-of-the-metaverse-ecosystem"
|
"Game Development View All Programming OS and Hosting Platforms Metaverse View All Virtual Environments and Technologies VR Headsets and Gadgets Virtual Reality Games Gaming Hardware View All Chipsets & Processing Units Headsets & Controllers Gaming PCs and Displays Consoles Gaming Business View All Game Publishing Game Monetization Mergers and Acquisitions Games Releases and Special Events Gaming Workplace Latest Games & Reviews View All PC/Console Games Mobile Games Gaming Events Game Culture The DeanBeat: Our 360-degree view of the metaverse ecosystem Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
I’m very proud that we were able to create an original event around the metaverse this week and that it received good attention in spite of crazy news like GameStop ‘s stock frenzy and Apple’s reaffirmation of its stance on privacy over targeted advertising.
Our GamesBeat Summit: Into the Metaverse and GamesBeat/Facebook: Driving Game Growth events drew more than 3,400 registered guests (not counting those who viewed livestreams), building a community around the metaverse and mobile games that we didn't really know was there. We had record engagement with this metaverse event, which confirms for me that we're all fed up with the Zoomverse. Thank you for your support; you affirmed that I'm not alone in this passion. While we may not be able to define the metaverse yet, it feels like we all agree something else should connect us amid the bleak reality of the pandemic.
When we started work on our metaverse conference last year, we weren't sure if we would have half a day of talks. We wound up with 29 talks and 68 speakers (not counting the 13 sessions and 33 speakers from our GamesBeat/Facebook: Driving Game Growth event on Tuesday). The result was a 360-degree view of the metaverse ecosystem, which will help us figure out what it is.
As I searched for speakers in this space, I was pleased to find so much more effort going into this across multiple industries related to gaming and entertainment. And some of the people thinking about it have been contemplating it for decades. Ian Livingstone, the cofounder of Hiro Capital (and years earlier, Games Workshop), has been thinking about this for so long that he named his venture firm after the lead character in Snow Crash, Hiro Protagonist. And if you could somehow find out the secret research and development funds of the largest companies in technology, games, entertainment, and other industries, you would find billions upon billions of dollars being invested in the metaverse.
While assembling this event, the only session I wasn't able to schedule in time was Brands and the Metaverse. But we can do that one in the future, as brands need some time to catch up. Yet I'm glad about what we got done and that our event didn't present a monolithic perspective of the metaverse, which is rapidly moving from a hypothetical sci-fi dream to something real.
Just what is the metaverse?
We found that it is hard to define the metaverse, the universe of virtual worlds that are all interconnected, like in novels such as Snow Crash and Ready Player One.
When I say the word, I mean an online universe that connects worlds, each of which is a place where you are enticed to live your life.
Matthew Ball says it will give us virtually infinite supplies of persistent content, live content, on-demand playback, interactivity, dynamic and personalized ads, and content distribution. It won't have a cap on how many people can be in it at a time. It will have a functioning economy. You'll watch movies with friends by your side. You'll play games. You'll shop, socialize, and come back every day.
Vicki Dobbs Beck of ILMxLab says the metaverse brings together all of the different arts of humanity in one place. During the conference, I discovered that no one had the same definition of the metaverse. I learned this in the town hall Q&A, when Hironao Kunimitsu of Gumi said it would take a decade to create the metaverse, while Jesse Schell of Schell Games, Tim Sweeney of Epic Games, and Cyberpunk creator Mike Pondsmith of R. Talsorian Games said that we have a version of the metaverse that already exists.
But nobody disputed how important the metaverse will be. I see it as the ultimate revenge of the nerds, the people who dreamed of making the Star Trek holodeck or the main street of the metaverse in Snow Crash.
How big a deal is this? “It’s like being at the beginning of motion pictures, or television, or the web,” Schell said.
The open or walled garden metaverse
Sweeney stood up for the open metaverse, saying it should be democratically controlled by everybody, not just the tech giants with their walled gardens. That would only hold up innovation, result in big "taxes" for consumers, and put the rights of users at the bottom of the list. Those are bold words for someone who is suing Apple for antitrust violations.
Jason Rubin, the vice president of play at Facebook , didn’t repeat the same words about Apple. But he said Facebook’s goal is to knock down barriers that limit access to games. If Facebook succeeds with its Instant Games and cloud games, it could find a way to bypass the app stores, just as Sweeney is trying to do.
These are sentiments that make me feel that Sweeney is far from being alone.
Ryan Gill and Toby Tremayne of Crucible proposed blockchain and the open web as a way to create agents that will represent us in the metaverse. Perhaps the open web could give developers a way around the app stores, or maybe Unit 2 Games’ Crayta technology will enable developers to build their dreams in the cloud.
Meanwhile, CCP Games CEO Hilmar Petursson reminded us that Eve Online, created back in 2003, already exists as a metaverse for 300,000 souls who are dedicated to it. While his audience is small by Roblox standards (Dave Baszucki's company has 36 million daily users), it is bigger than Petursson's native Iceland, and he has 17 years of learning about the metaverse, or an online haven where people are willing to live their lives for decades.
The creative leaders of the industry voiced their support for the metaverse as a new place to tell stories. Vicki Dobbs Beck of ILMxLab and Siobhan Reddy of Media Molecule talked about “storyliving,” or creating new experiences that emerge as we live our lives online. Those experiences could blend both emergent and narrated experiences, they said, while Baszucki at Roblox is putting his faith in user-generated games, which might be the only way to populate a metaverse with enough things to do.
Genvid Technologies CEO Jacob Navok described Rival Peak, an interactive reality show that has become a hit on YouTube and Facebook, where influencers summarize the week's events in a Survivor-like competition between AI characters. "Cloud-native" games like Rival Peak are the kind of modern entertainment that could be enabled by the metaverse.
Mark Long, CEO of Neon Media, is itching to get a big transmedia project underway that shows that entertainment properties can span different types of media and live in a kind of metaverse. And Hironao Kunimitsu of Gumi, whose company is working on a massively multiplayer online virtual reality game, told us how the metaverse isn't just an inspiration from Western science fiction. He got his inspiration from the Japanese anime series Sword Art Online.
We had alternative views of the metaverse, as expressed by Akash Nigam of Genies and Edward Saatchi of Fable Studio. They argued that existing smaller 2D efforts — like Instagram AI characters — might be the earliest and most accessible manifestation of the metaverse. And John Hanke of Niantic and Ronan Dunne of Verizon talked about an augmented reality alliance that will take the metaverse on the go.
The chicken and egg problem
I saw from the presentations that we have a huge chicken and egg problem. Dunne mentioned that Verizon has committed $60 billion to the 5G network on the assumption that people will use it. The infrastructure gets built because we have a long history of such capital investments proving reasonable.
But who’s going to write the first check for the metaverse? Sweeney doesn’t have that much money, despite the riches of Fortnite. Apple could write that check. Maybe Google or Facebook or Amazon. But then we know that Sweeney’s dream of openness would probably go up in smoke.
Does the metaverse require us to lay down that kind of money for the infrastructure? Probably. We’re talking about the next version of the internet, which would support shifting most of the world’s population into online living. It’s the place where gamers will be able to play every game they’ve ever wanted. If we’re not that ambitious, then I wouldn’t really say it is the metaverse we are trying to create. The tech giants have that kind of money, but Sweeney isn’t so sure we should trust them.
When you think about the scale of the problem, it's a big one. Fortnite has amassed more than 350 million users, and that gives Sweeney a big advantage. He hopes to evolve Fortnite over time into the metaverse. But Fortnite's world is built for 100 players to fight each other in a single shard at a time. That's the battle royale experience. A metaverse, by contrast, needs 100 million people to be in a shard at once. Can Epic bolt that capability onto a game that was never designed to accomplish it?

Dean Abramson and Sean Mann of RP1 pitched their "shardless" world architecture, which they hope will cram 100 million people into a single world without melting the polar ice caps. Abramson started working on the problem a decade ago, after designing the concurrent architecture of Full Tilt Poker. But, again, we have the chicken and egg problem: who will adopt a technology that hasn't been proven yet and toss out an architecture that has a lot of legacy users?

Fortunately, we have more resources dedicated to this purpose now than ever before. We're at a rare moment when financial, cultural, entertainment, and human interests are aligned. Gaming had historic growth in 2020, and the odds are good it will keep growing. Mobile insights and analytics firm App Annie estimates mobile games, already the biggest segment, will grow 20% to $120 billion in 2021. This growth means game companies will have the cash to invest in the metaverse. Roblox, which is banking on user-generated content for the metaverse, raised $520 million and will still go public soon. That gives Roblox a big war chest to build the metaverse that founder Dave Baszucki wants to see.
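As a toy illustration of why that jump is hard, and of the spatial partitioning trick that "shardless" designs generally lean on (this is my own sketch, not RP1's actual architecture): only players in the same or adjacent grid cells are simulated against each other, so per-player cost tracks local density rather than total world population.

```python
# Toy illustration of spatial partitioning ("interest management"):
# players only interact with others in nearby grid cells, so per-player
# cost tracks local density, not the total world population.
from collections import defaultdict

CELL_SIZE = 100.0  # world units per grid cell (arbitrary)

def cell_of(x: float, y: float) -> tuple:
    return (int(x // CELL_SIZE), int(y // CELL_SIZE))

def neighbors_of_interest(players: dict) -> dict:
    """Map each player to the others sharing their cell or an adjacent one."""
    grid = defaultdict(list)
    for name, (x, y) in players.items():
        grid[cell_of(x, y)].append(name)

    interest = {}
    for name, (x, y) in players.items():
        cx, cy = cell_of(x, y)
        nearby = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nearby.extend(grid[(cx + dx, cy + dy)])
        interest[name] = [other for other in nearby if other != name]
    return interest

players = {"ana": (10, 12), "bo": (95, 40), "cy": (900, 900)}
print(neighbors_of_interest(players))  # ana and bo see each other; cy sees no one
```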
The exchange rates and toll bridges between worlds
Roblox and Minecraft have hit critical mass as well. But if we were simply to connect those worlds, it would be hell to figure out the exchange rate between the currencies of the games. How do you translate the most valuable gun in Fortnite to something in Roblox?

Together Labs, creator of IMVU, has figured out a way to get users paid for the things they create with VCoin, which could be transferred between worlds or cashed out in fiat currency such as U.S. dollars. Folks like John Burris of Together Labs, John Linden at Mythical Games, and Arthur Madrid of The Sandbox are figuring out how best to handle the payments and economies of the metaverse, using blockchain. Like Petursson, they might be creating products for the techno-geeks of the world, a very small audience that could make them vastly profitable. They are learning so much, but they have to figure out how to make cryptocurrency and blockchain relevant to the masses.
One of the possible misconceptions of the metaverse, as depicted in Steven Spielberg’s Ready Player One movie, is that we will want to take a single avatar and move from one world to another world instantly. Petursson warned that you can wreck game economies if you cause the flow of labor to switch from one game to another. That is, if you can mine something cheaply in Minecraft, and then convert it to something valuable in Roblox, where it’s more expensive to obtain the same resource, then all of the labor force in Roblox will shift to Minecraft and then take their resources back to Roblox. Why would different companies allow that to happen to their worlds? That’s one reason that Frederic Descamps’ Manticore Games lets users create multiple worlds — all within the same company — where the users can instantly teleport from one experience to another.
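Petursson's warning boils down to simple arbitrage arithmetic. A toy example, with invented numbers, shows the incentive:

```python
# Toy arbitrage illustration: if a resource is cheap to obtain in world A
# and dear in world B, cross-world transfers pull labor toward world A.
# All numbers are invented.
cost_to_mine_in_minecraft = 2.0   # effort units per iron ingot
value_in_roblox = 5.0             # effort units the same ingot fetches

profit_per_unit = value_in_roblox - cost_to_mine_in_minecraft
print(f"Arbitrage profit per unit: {profit_per_unit} effort units")
# A positive margin means rational players farm in Minecraft and cash out
# in Roblox until prices converge, hollowing out Roblox's own economy.
```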
If we don’t make worlds interoperable, then we’ll have the Tower of Babel, no different from today’s game industry. To make game worlds interoperable, Sweeney noted that we would have to figure out a new programming model, built on something like Javascript that can cross over different platforms. And yes, that means that code created by someone at Nintendo would have to run in Sony’s game world. That way, you could take your avatar and your guns and shoot at characters in another world. We have the tech to do that through cross-platform tech — from blockchain to Javascript — but how would fast would it run? Who would debug it? Sweeney said that if all companies could look beyond their own interests — and consider enlightened self-interest — they could create a metaverse for the greater good that could lead to such a large economic boom that all companies will benefit.
Common standards are necessary, but perhaps governments would likely get into the picture as well, said futurist Cathy Hackl. After all, the U.S. government probably wouldn't want the metaverse to originate in China, or vice versa.
Living in science fiction
Above: Jensen Huang of Nvidia holds the world's largest graphics card.
I don’t want to suggest that this is too hard a problem for society to solve, or that we have no hope of creating an ambitious metaverse. As Nvidia CEO Jensen Huang said in an interview with me that “ we’re living in science fiction now.
” A passionate engineer, he believes the metaverse will come about soon because we have made so much progress in other technologies such as graphics and AI.
Remember how we used to talk about AI as a fantasy? Then, around eight years ago, it started working better with the advent of deep learning neural networks. Now we have 7,000 AI startups around the world that have raised billions of dollars, and some 76% of enterprises are prioritizing AI in their 2021 IT budgets. Technologies such as OpenAI's GPT-3 could enable some really smart AI characters, which means we could populate the metaverse with non-player characters, or NPCs, who are so intelligent we could talk to them for hours and never figure out that they are not really human beings. With Huang's chips driving these advances, we're on a path of accelerated innovation, and advances in AI will help accelerate other inventions, such as the metaverse. It's a snowball effect.
If, as Playable Worlds cofounder Raph Koster says, “the metaverse is inevitable,” then we should think about some of the downsides.
We also have to think about the consequences of the metaverse. It would be wonderful if it became our digital Disneyland. But if it succeeds in creating artificial companions for all of us, we might lose interest in the real world, and it might become the most addictive drug ever made. These are all problems we have precedent for, as science fiction writers like Neal Stephenson, William Gibson, and Ernest Cline have taught us with their dystopian visions of the metaverse.
It would be a shame to create paradise and leave a lot of people out. That’s why Stanley Pierre-Louis, the CEO of the Entertainment Software Association, reminded us that we should make these worlds diverse from the get-go, using the most diverse creative teams we can find. Otherwise, the metaverse creators could shoot themselves in the foot, creating an elitist colony instead of something that is accessible to everyone in the world with a smartphone. It is critical we get the metaverse right from the start, Pierre-Louis said, echoing Sweeney’s words even though the two were talking about different things.
Our NPCs, ourselves Richard Bartle, the game scholar at the University of Essex, offered some useful cautionary notes about creating the metaverse in the closing talk of our event. If we create artificial beings with sapience, or real intelligence and free will, then we have to consider whether we should treat them as our slaves, as we do in modern-day video games. If we think of these virtual beings as our property, then it’s OK if we turn the switch off on them or slaughter them for sport. But if we come to think of them as real people, or companions, then developers face a real ethical dilemma over how much power they should give us over these artificial people.
Rony Abovitz, the former CEO of Magic Leap, announced his new startup, Sun and Thunder, at our event. Over the next 15 years, he wants to create AI-driven synthetic beings. But rather than just create them, Abovitz wants to imbue them with enough intelligence that they can help his company develop more synthetic beings. It sounds crazy, but it just might make sense in our world of accelerated change.
Bartle wants us to remember that the metaverse should be the place that he has always dreamed of. It should be the place where we are free from the social pressures of society, where we are free from the limitations of our own “roll of the dice,” or our individual heritage. The metaverse should be the place where we can be the people we can’t be in real life. Where we can be our best selves. And rather than be a vendor who extracts the biggest toll on the metaverse, developers should help us reach that goal of being our best selves, Bartle said.
Beyond the Zoomverse I’ll reiterate why the metaverse is so important. We need this to happen now because we are stuck in the Zoomverse. The coronavirus has tainted our world, and the digital world provides the escape. For our mental well-being, we need something better than video calls. Something more human. Something that brings us together. Just as the world needed a vaccine and science brought it to us, our social needs are so great that we have to build something like the metaverse and, like the proliferation of smartphones throughout the world, bring it to everybody.
It will be a long road, and that is why we need inspiration. We need people to paint a vision for what life could be like. Sci-fi writers laid the groundwork. Now it’s up to the game industry, and whatever other industries want to come along for the ride, to build it and make it fun. And as I said in my opening speech , I hope one day, our GamesBeat community can hold an event inside the real metaverse.
Once again, I am so glad to see our speakers point out the momentous decisions we face around the building of the metaverse, which could either be humanity’s cage or its digital salvation. I appreciate the time you gave us, as apparently you might have all been better off making millions of dollars by buying GameStop stock instead.
I’ll weigh in next week with a roundup of our GamesBeat/Facebook: Driving Game Growth event.
But first, I need to sleep.
"
|
3,921 | 2,021 |
"Why savvy shoppers are in line to sign up for Clearcover's car insurance | VentureBeat"
|
"https://venturebeat.com/commerce/why-savvy-shoppers-are-in-line-to-sign-up-for-clearcovers-car-insurance"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Deals Why savvy shoppers are in line to sign up for Clearcover’s car insurance Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The insurance industry has seen its fair share of disruptors in 2020. From Lemonade’s affordable home and renters insurance offering to Fabric’s new approach to life insurance, savvy shoppers are taking advantage of new, more tech-forward alternatives daily. It’s no longer taboo to switch your insurance policy away from one of the traditional behemoths; in fact, many would say it’s the smarter approach.
And when it comes to car insurance, there’s one name sweeping the headlines: Clearcover.
Put simply, it’s showing the masses that yes, there are ways to save money while still maintaining the level of service and protection we all crave.
Clearcover is an online car insurance provider that saves customers money by eliminating the costs that drive up rates at bigger insurance carriers. Rather than maintaining expensive offices with loads of infrastructure, Clearcover runs lean. Using artificial intelligence, it connects drivers with the insurance coverage they want at the lowest possible rate. Then, it speeds up the claims process with nearly instant eligible claims payments in the event of an accident or loss.
Many drivers don’t realize they can change their auto insurance provider at any time. But with a visit to Clearcover’s website, you can enter some basic information and find out in a matter of minutes whether Clearcover can save you money compared to your current rates.
To be clear, this isn’t cut-rate insurance. Clearcover plans provide all the same coverage protections as major carriers, plus a few up-to-the-minute extras many of them don’t, like lower rates for ridesharing drivers and those with vehicles employing advanced safety or self-driving features.
Right now, Clearcover services are available in 12 states — and quickly expanding. And while it can’t guarantee everybody will see savings, there’s no cost to checking your price. It’s not uncommon for new Clearcover customers to save $500 or more annually, so it’s worth taking a few minutes out of your day.
Plus, perhaps best of all, Clearcover lets you do everything from signing up to filing claims on its award-winning mobile app. That means you can manage your policy with a few finger taps on your phone.
With a stellar 4.5 out of 5-star rating across more than 400 reviews on sites like Google, the Better Business Bureau, TrustPilot, and more, Clearcover is clearly a fan favorite for a reason.
Prices subject to change.
VentureBeat Deals is a partnership between VentureBeat and StackCommerce. This post does not constitute editorial endorsement. If you have any questions about the products you see here or previous purchases, please contact StackCommerce support here.
"
|
3,922 | 2,021 |
"Verusen raises $8 million to reconcile supply chain data using AI | VentureBeat"
|
"https://venturebeat.com/business/verusen-raises-8-million-to-reconcile-supply-chain-data-using-ai"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Verusen raises $8 million to reconcile supply chain data using AI Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Atlanta, Georgia-based Verusen, a startup leveraging AI to build a connected supply chain, today raised $8 million in a series A round co-led by Forte Ventures and Flyover Capital. The company says it will put the funds toward R&D as it expands the size of its workforce.
A recent PricewaterhouseCoopers report anticipated that companies would have to address the implications of their supply chains in regions affected by the coronavirus. For instance, they might have to secure future air transportation as supply and capacity become available or buy ahead to procure much-needed inventory and raw materials.
Verusen aims to address this using AI-based technology that automatically integrates with enterprise resource management systems and learns from experts who can fine-tune the system for automatic inventory naming, categorization, and deduplication. Verusen says its AI continues to predict and learn from real actions over time. Moreover, it offers suggestions to optimize inventory allocation and procurement that customers can choose to accept or decline.
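Verusen’s models are proprietary, but the core deduplication problem (the same part entered under different names across plants and systems) can be illustrated with ordinary fuzzy string matching from Python’s standard library. The part names and threshold below are invented:

```python
# Illustrative only: flag likely duplicate inventory records by normalized similarity.
from difflib import SequenceMatcher

inventory = [
    "BEARING, BALL 6204-2RS",
    "ball bearing 6204 2RS",
    "GASKET, FLANGE 4IN",
    "Flange gasket 4in",
]

def similarity(a, b):
    norm = lambda s: "".join(ch for ch in s.lower() if ch.isalnum())
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

for i in range(len(inventory)):
    for j in range(i + 1, len(inventory)):
        score = similarity(inventory[i], inventory[j])
        if score > 0.7:  # assumed threshold for illustration
            print(f"possible duplicate ({score:.2f}): {inventory[i]!r} ~ {inventory[j]!r}")
```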
Verusen claims it helps businesses save an average of $10 million within a three-month period, with clients including Georgia Pacific, Graphic Packaging, and AB InBev. “As supply chains start on their digital transformations, we help them better understand their disparate and incomplete data and connect it to trusted business outcomes from the very start,” founder and CEO Paul Noble said in a statement.
Gartner says that by 2023 at least 50% of large global companies will be using AI, advanced analytics, and internet of things technologies in supply chain operations. Meanwhile, McKinsey & Company estimates companies that “aggressively” digitize their supply chains can expect to boost annual interest, tax, depreciation, and amortization (EBITDA) growth by 3.2% and annual revenue growth by 2.3%.
“Verusen AI is purpose-built to deliver trusted material records and verify demand signals which influence one another to drive our proprietary trusted network optimization,” Noble continued. “This leads to unparalleled scalable inventory and procurement intelligence, helping our customers achieve their material truth.” BMW i Ventures, Glasswing Ventures, Zetta Venture Partners, Kubera VC, and Engage also participated in Verusen’s funding round announced today. It brings the company’s total raised to over $14 million, following seed and pre-seed rounds totaling $6.1 million.
"
|
3,923 | 2,021 |
"SAP's acquisition of Signavio goes to the core of its digital business strategy | VentureBeat"
|
"https://venturebeat.com/business/saps-acquisition-of-signavio-goes-to-the-core-of-its-digital-business-strategy"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages SAP’s acquisition of Signavio goes to the core of its digital business strategy Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
SAP’s acquisition of Signavio, a provider of tools for analyzing existing business processes, will fill a critical gap in the company’s digital business transformation strategy. The deal was announced yesterday, along with a managed Rise with SAP service through which the company will manage digital business transformation initiatives on behalf of customers. The Signavio acquisition gives SAP an intelligence platform that identifies inefficient business processes, which SAP can then make a case to modernize on behalf of a customer. Terms of the Signavio deal, expected to close in the first quarter, were not disclosed.
At the core of SAP’s revamped approach to digital business transformation is the SAP S/4 enterprise resource planning (ERP) software. The company regularly adds processes and updates to address use cases spanning multiple vertical industries. However, convincing organizations to embrace a business process defined for them by SAP requires the company to first prove existing business processes are inefficient.
Most senior business executives tend to assume that processes defined by senior management are closely followed. In reality, there is often a lot of drift from those prescribed processes because of simple inertia or the need to make exceptions, SAP GM for business process intelligence Rouven Morato said.
The tools from Signavio first create a model of those processes. The model can then be compared to the way processes are actually being executed, using data collected from the customer, Morato said. A free version of Signavio will be included with the Rise with SAP service, with an option to upgrade to a more expansive and expensive option.
Armed with that data, SAP can then make a better case for employing SAP S/4 to replace a custom business process, like invoice processing.
Under the terms of a Rise with SAP services engagement, the transition to the business process defined by SAP will be managed under a single contract, including all the integration work provided by third-party IT services providers. The providers will act as subcontractors to SAP. The goal is to reduce the level of friction organizations encounter when they decide to modernize a business process.
The business intelligence platform will also continue to play a significant role after the process is modernized, Morato added. There’s often a lot of resistance to new processes, in the form of organizational inertia. But the SAP platform will enable organizations to identify when employees are either relying on a legacy process that has been rendered obsolete or not correctly following proper procedures, Morato said.
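The conformance idea behind this kind of process intelligence can be sketched in a few lines: compare each observed event trace against the prescribed process and flag drift. Signavio’s actual algorithms are far richer; the process steps and cases below are invented for illustration:

```python
# Minimal conformance check: does each observed trace follow the prescribed process?
PRESCRIBED = ["receive_invoice", "approve", "post", "pay"]

observed_traces = {
    "case-001": ["receive_invoice", "approve", "post", "pay"],
    "case-002": ["receive_invoice", "post", "pay"],                        # skipped approval
    "case-003": ["receive_invoice", "approve", "approve", "post", "pay"],  # rework loop
}

def audit(trace, prescribed):
    if trace == prescribed:
        return "conformant"
    missing = [step for step in prescribed if step not in trace]
    if missing:
        return f"drift: missing {missing}"
    return "drift: extra or repeated steps"

for case, trace in observed_traces.items():
    print(case, "->", audit(trace, PRESCRIBED))
```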
SAP will also be using the data collected via the Signavio platform to further refine the processes embedded within S/4 to create a more symbiotic relationship between software and processes, he noted. That data will also be employed to train the machine learning algorithms SAP is embedding within its applications and databases. “We’ll be able to create business process profiles,” Morato said.
He said the decision to acquire Signavio partially stems from the fact that SAP has been using the software to help optimize its internal processes for more than six years.
While organizations are embracing digital business transformation initiatives at varying rates, most of them are moving away from batch-oriented applications that were typically updated overnight in favor of processes that occur in near real time. That shift results in an improved digital experience because all the data residing in multiple applications is better synchronized across a range of processes.
SAP is hardly the only provider of these kinds of enterprise applications. The challenge most organizations face today is that they currently rely on a wide range of applications from multiple vendors to drive their processes. It’s not clear to what degree organizations may be willing to abandon those applications, but SAP is trying to make it a lot more attractive.
"
|
3,924 | 2,021 |
"Researchers propose AI that creates 'controllable' videos | VentureBeat"
|
"https://venturebeat.com/business/researchers-propose-ai-that-creates-controllable-videos"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Researchers propose AI that creates ‘controllable’ videos Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Humans at an early age can identify objects and how each object can interact with its environment. For example, when watching videos of sports like tennis and football, spectators and sportscasters can understand and anticipate plays despite never being given a list of possible actions. We as humans develop this skill as we watch events unfold live and on the screen. Furthermore, we can reason about what happens if a player took a different action and how this might change the video.
In an effort to create an AI system that can develop some of these same reasoning skills, researchers at the University of Trento, the Institut Polytechnique de Paris, and Snap, Inc. propose in a new paper the task of playable video generation, where the goal is to learn a set of actions from real-world video clips and offer users the ability to generate new videos. The idea is that users provide an “action label” at every time step and can see its impact on the generated video, like a video game. The researchers believe this framework might pave the way for methods that can simulate real-world environments and provide a gaming-like experience.
In an experiment, the researchers architected a framework called Clustering for Action Decomposition and DiscoverY (CADDY) that discovers a set of actions after watching multiple videos and outputs “playable” videos. (Here’s a live demo.) CADDY uses the aforementioned action labels to encode the semantics of a given action, as well as a continuous component to capture how the action is performed.
The researchers claim that CADDY can generate “high-quality” videos while offering users the chance to choose which actions occur in those videos — akin to Facebook’s AI that extracts playable characters from real-world videos. For example, with CADDY, given a real-life video of a tennis player, users can select Left, Right, Forward, Backward, Hit the ball, or Stay to prompt the system to create videos capturing that action.
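CADDY itself is a learned generative model, but the playable interface it exposes (one discrete action label per time step conditioning the next frame) can be demonstrated with a toy, hand-coded stand-in. The action set below is simplified from the tennis example, and the generator is a moving dot rather than a neural network:

```python
# Toy playable-video loop: a user-chosen action label conditions each generated frame.
import numpy as np

ACTIONS = {"left": (0, -2), "right": (0, 2), "forward": (-2, 0), "backward": (2, 0), "stay": (0, 0)}

def next_frame(pos, action, size=32):
    dy, dx = ACTIONS[action]
    pos = (int(np.clip(pos[0] + dy, 0, size - 1)), int(np.clip(pos[1] + dx, 0, size - 1)))
    frame = np.zeros((size, size), dtype=np.uint8)
    frame[pos] = 255  # a single bright "player" pixel stands in for a rendered frame
    return frame, pos

pos, video = (16, 16), []
for action in ["right", "right", "forward", "stay", "left"]:  # the user's action labels
    frame, pos = next_frame(pos, action)
    video.append(frame)

print(f"generated {len(video)} frames; player ends at {pos}")
```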
“Our experiments show that we can learn a rich set of actions that offer the user a gaming-like experience to control the generated video. As future work, we plan to extend our method to multi-agent environments,” the researchers wrote. “CADDY automatically discovers the most significant actions to condition video generation and can produce playable video generation models in a variety of settings, from video games to real videos.” In the near term, the researchers’ work could lower the cost of corporate video production. Filming a short commercial runs $1,500 to $3,500 on the low end, a hefty expense for small-to-medium-size businesses. This leads some companies to pursue in-house solutions, but not all have the expertise required to execute on a vision. A tool like CADDY could eliminate the need for reshoots while opening up new creative possibilities.
"
|
3,925 | 2,021 |
"Physna raises $20 million for AI that analyzes and digitizes 3D objects | VentureBeat"
|
"https://venturebeat.com/business/physna-raises-20-million-for-ai-that-analyzes-and-digitizes-3d-objects"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Physna raises $20 million for AI that analyzes and digitizes 3D objects Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Physna, a Cincinnati, Ohio-based startup developing an AI-powered 3D modeling platform for industrial engineering, today announced that it raised $20 million. The company says the funds will be used to grow its team and increase development as new customers sign on.
It’s often difficult to search using the 3D model of a physical part, such as a CAD model or a 3D scan — especially when the part is a component of something larger (e.g., matching a screw to an assembly). Traditionally, solutions have been costly and time-consuming, impacting not only engineering procurement but also parts identification. It’s estimated that while over 70% of the economy is centered around physical goods, less than 1% of software is capable of handling 3D data. CEO Paul Powers and CTO Glenn Warner founded Physna in 2015, originally to protect product designs from intellectual property theft. But in 2016, the company pivoted, aiming to bridge the gap between the physical and digital worlds with deep learning technologies that codify 3D models into data understandable by software.
Physna breaks down the structure of 3D models, analyzes them, and determines how different models are related to each other. Customers can find models by searching with a 3D model, partial model, geometric measurements, or nothing but model data. Physna’s AI finds model matches by predicting descriptions, classifications, cost, and materials. The platform surfaces all duplicate and similar parts, even components within complex assemblies, and shows the exact location and quantity of components within assemblies.
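Physna’s matching technology is proprietary, but a classic alternative illustrates how raw geometry can be reduced to comparable fingerprints: the D2 shape distribution, a histogram of distances between randomly sampled surface points. The synthetic point clouds below stand in for scanned parts:

```python
# Illustrative geometric fingerprinting via D2 shape distributions (not Physna's method).
import numpy as np

rng = np.random.default_rng(0)

def d2_descriptor(points, n_pairs=5000, bins=32):
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    dists = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(dists / dists.max(), bins=bins, range=(0, 1))
    return hist / hist.sum()

def similarity(a, b):
    return 1.0 - 0.5 * np.abs(a - b).sum()  # 1.0 means identical distributions

cube = rng.random((2000, 3))                 # points filling a unit cube
cube2 = cube * 2.0 + 5.0                     # same shape, different scale and position
sphere = rng.normal(size=(2000, 3))
sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)  # points on a unit sphere

d_cube, d_cube2, d_sphere = map(d2_descriptor, (cube, cube2, sphere))
print("cube vs. scaled cube:", round(similarity(d_cube, d_cube2), 3))  # high
print("cube vs. sphere:    ", round(similarity(d_cube, d_sphere), 3))  # lower
```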
Physna says its technology can learn from existing models through metadata and information tagged by a team of engineers. Beyond making production predictions and estimating costs, materials, and manufacturability, the technology can ostensibly predict part performance based on historical data and uncover potential design flaws.
In addition to Physna’s enterprise product, the company provides Thangs, a search engine for physical objects (including assemblies) that shows how individual parts fit together. Physna claims that since launching in August 2020, hundreds of thousands of people have used Thangs to search with models directly rather than relying on text.
CEO Powers says that Physna counts among its customers the Department of Defense and “a host” of Fortune 100 companies in the aerospace, automotive, manufacturing, medical, and robotics industries. “Physna has enabled a quantum leap in technology by allowing software to truly understand physical 3D data. By merging the physical with the digital, we have unlocked massive and ever-growing opportunities in everything from geometric search to 3D machine learning and predictions,” he said. “Having both Sequoia and Drive endorse this next generation of search and machine learning helps Physna empower even more technical innovations for our customers and the market as a whole.” Sequoia Capital led the series B investment in Physna announced today, with participation from Drive Capital. It brings the company’s total raised to date to $29 million.
"
|
3,926 | 2,021 |
"InGen Dynamics to Continue to Diversify Application of A.I and Robotics Technologies | VentureBeat"
|
"https://venturebeat.com/business/ingen-dynamics-to-continue-to-diversify-application-of-a-i-and-robotics-technologies"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release InGen Dynamics to Continue to Diversify Application of A.I and Robotics Technologies Share on Facebook Share on X Share on LinkedIn PALO ALTO, Calif.–(BUSINESS WIRE)–January 31, 2021– InGen Dynamics, a Palo Alto, based Robotics and A.I firm, raised a recent round of funding bringing the total funding commitment to $4 Million at the most recent valuation of $45 Million.
It has always been the goal of InGen Dynamics to solve complicated real-world challenges by offering simple but highly functional solutions built on Robotics and A.I technologies. This is evident from the company’s mission statement: to improve humans’ quality of life by creating cost-effective, intuitive, and practical A.I and robotics-based solutions. To a great extent, the company has shown a high level of excellence in the industry, to the point that its products have attracted attention and recognition from international companies and groups such as IEEE, RoboBusiness, Disney, and Boston Consulting Group. InGen’s products have also been featured in Forbes, Fortune, Mashable, Discovery, BCG, and PopSci.
Originally, the company focused on creating A.I and robotics products for homes and social use. However, InGen has continued to broaden its offerings by diversifying its targeted industries to include the security, education, and healthcare industries, among others. One of InGen’s best-known products is Dynamix™, a set of reusable components designed using robotics and artificial intelligence technologies. With sufficient funding, the company expects to continue developing Dynamix™ products and platforms and expanding their adoption.
The diversification of products and services offered by InGen can be traced to its founder and CEO, Arshad Hisham. Arshad is an inventor, chief designer, engineer, and serial entrepreneur whose vision has helped shape the company. According to Arshad, InGen is primed to raise the funding necessary to fuel the future growth of A.I and Robotics technologies to solve the ever-changing problems faced by contemporary society.
In what is seen as a significant step towards achieving this mission, InGen believes it is on track to hit a $100 million valuation in 2021. The Palo Alto-based A.I and Robotics firm said this after a round of funding led by Altrium Capital, which continued the fundraising momentum with investors from the previous round, closed in 2019.
View source version on businesswire.com: https://www.businesswire.com/news/home/20210131005050/en/ Contact Person: Arshad Hisham Email: [email protected] Phone: 001 650 353 5782 Address: 2345 Yale St, Palo Alto, CA 94306, USA Website: www.getaido.com
"
|
3,927 | 2,021 |
"How Modern Health uses AWS to secure participant data and scale its enterprise mental health platform | VentureBeat"
|
"https://venturebeat.com/business/how-modern-health-uses-aws-to-secure-participant-data-and-scale-its-enterprise-mental-health-platform"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How Modern Health uses AWS to secure participant data and scale its enterprise mental health platform Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Enterprise mental health care platform Modern Health has been architected in AWS since its launch in 2017, and now it is using the cloud platform to handle more complex and higher volumes of sensitive data. Modern Health partners with companies like Lyft, Postmates, and Udemy to provide benefits to their employees: Its entire user base consists of clients’ employees. The company has accordingly strategized to account for traffic across different time zones, integrate new product features for wellness, and posture its security to comply with data privacy laws for personal health records.
When new users join, Modern Health asks them to select their areas of concern; it then uses a simple triage model to recommend a digital care plan based on self-guided meditation programs, group therapy activities, and 1:1 personal counseling with licensed therapists. Modern Health collects data related to their mental health and treatment plan starting from this initial intake. According to Jonathan Lloyd, Modern Health’s engineering lead, the platform’s security has been tailored to protect users’ personally identifiable information and health records from the start. Lloyd said AWS has remained central to this strategy.
“When we think about technology decisions, a lot of them exist at a point in time … [but] Amazon is a decision we’ve made since the beginning. … [Amazon] continued to deliver features that match the needs that we have in the health care space, as the platform has grown as our user base expanded,” he said in an interview with VentureBeat.
Lloyd’s IT team uses several AWS services, such as the key management service that creates unique keys for protecting individual clients’ specific data. Lloyd said his team also applies added layers of security with PGP encryption to verify that any data exchanged is encrypted between the sender and receiver, and the team performs annual penetration testing to measure how protected the data is from cyberattacks. Legal requirements like the HIPAA Security Rule make these processes particularly critical.
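As a rough illustration of the per-client key pattern Lloyd describes, here is a generic envelope-encryption sketch, not Modern Health's code, using boto3's KMS client with AES-GCM. The key ARN is a placeholder, and configured AWS credentials plus the third-party cryptography package are assumed:

```python
# Generic envelope encryption: KMS wraps a per-client data key; AES-GCM encrypts locally.
import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")
CLIENT_KEY_ARN = "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE"  # hypothetical per-client key

def encrypt_record(plaintext: bytes) -> dict:
    key = kms.generate_data_key(KeyId=CLIENT_KEY_ARN, KeySpec="AES_256")
    nonce = os.urandom(12)
    ciphertext = AESGCM(key["Plaintext"]).encrypt(nonce, plaintext, None)
    # Only the KMS-wrapped key is stored alongside the ciphertext.
    return {"wrapped_key": key["CiphertextBlob"], "nonce": nonce, "ciphertext": ciphertext}

def decrypt_record(record: dict) -> bytes:
    key = kms.decrypt(CiphertextBlob=record["wrapped_key"])["Plaintext"]
    return AESGCM(key).decrypt(record["nonce"], record["ciphertext"], None)
```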
The production web app consumes user data and pulls it through a de-identifying data lake in AWS. Within the data lake, ETL processes built on Step Functions and Simple Storage Service (Amazon S3) extract, track, and sanitize the data via loading processes that match the expected validation shape. This stage integrates alerting and monitoring to help ensure the data pipelines are stable.
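Lloyd didn't specify the transform itself, but a common de-identification approach in pipelines like this replaces direct identifiers with salted one-way hashes so downstream analytics can still join on a stable pseudonym. A minimal sketch, with invented field names:

```python
# Hypothetical de-identification step: hash direct identifiers, pass other fields through.
import hashlib
import os

SALT = os.environ.get("DEID_SALT", "rotate-me").encode()  # assumed secret salt
DIRECT_IDENTIFIERS = {"name", "email", "phone"}

def pseudonymize(value):
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:16]

def deidentify(record):
    return {k: pseudonymize(v) if k in DIRECT_IDENTIFIERS else v for k, v in record.items()}

raw = {"name": "Ada User", "email": "ada@example.com", "care_plan": "group_therapy"}
print(deidentify(raw))
```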
The resulting data is stored in a Postgres database on AWS and used within internal BI tooling. Modern Health also sends some unspecified form of this data to enterprises; according to Lloyd, the data concerns how their employees are benefiting from Modern Health’s services.
Lloyd reported that since March 2020, Modern Health doubled its number of clients and tripled its own internal headcount. According to materials the company sent VentureBeat, 30% of Eventbrite’s employees use the platform, and engagement rates have increased across companies. Modern Health also introduced integrations like therapist-led group sessions called Circles.
“Investments in things like load and stress testing [became] really strong investments. So toward the end of last year, we dedicated resources to really doing that extensive load testing to confirm that the platform would support the scale that we knew was coming,” said Lloyd.
These stress tests attempt to mirror and replicate traffic based on existing data. And in scaling its data capabilities to account for this growth in usage, Modern Health also adjusts for global demand surrounding when and how employees use Modern Health, including time differences and which languages live therapists speak.
Modern Health is fairly new: The company was founded in 2017 and accelerated with Y Combinator in 2018, but it has already made significant headlines, including a 2020 lawsuit from a cofounder alleging wrongful termination and company bribes to customers. Next week, Modern Health will announce its acquisition of Kip, another San Francisco-based digital mental health platform, which focuses on consumers rather than enterprises and their employees.
In the long term, Lloyd said Modern Health wants to expand its infrastructure, particularly by bringing on more engineers across the board to develop new platform features for monitoring and securing data.
"
|
3,928 | 2,021 |
"Google Cloud announces VM Manager, a suite of tools to automate infrastructure management | VentureBeat"
|
"https://venturebeat.com/business/google-cloud-announces-vm-manager-a-suite-of-tools-to-automate-infrastructure-management"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google Cloud announces VM Manager, a suite of tools to automate infrastructure management Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
As enterprises become more reliant on cloud-native tools, Google’s Compute Engine has become a popular choice for helping to maintain those online resources at large scales. Now Google Cloud is rolling out a new suite of infrastructure management tools to automate and simplify that work.
Dubbed VM Manager, the new toolset will include a single dashboard designed to offer greater visibility into computing projects while enabling better tracking of data. In a blog post by Google Cloud product manager Ravi Kiran Chintalapudi and product marketing manager Senanu Aggor, the company said the new service would bring greater ease and security to managing virtual machine fleets.
“Enterprises are accelerating their digital transformation — moving more and more workloads to the cloud,” they wrote. “Customers tell us they need simplified cloud-native tools to operate and manage their cloud resources, similar to their familiar on-premises infrastructure management tools.” The goal is to reduce the time teams have to spend monitoring compute infrastructure to free them to focus more on business issues.
Among the automated features included will be patch management, to make sure all systems are updated to limit vulnerabilities; configuration management, to validate that settings are consistent to limit errors and security issues; and better inventory management, so enterprises have more real-time insight into their data.
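VM Manager's actual API isn't reproduced here, but the core of configuration management (diffing each machine's observed state against a desired baseline and reporting drift) fits in a few lines. The settings and fleet below are invented:

```python
# Toy config-drift check in the spirit of the announced configuration management feature.
DESIRED = {"os_patch_level": "2021-01", "ssh_password_auth": "disabled", "agent": "running"}

fleet = {
    "web-1": {"os_patch_level": "2021-01", "ssh_password_auth": "disabled", "agent": "running"},
    "web-2": {"os_patch_level": "2020-11", "ssh_password_auth": "enabled", "agent": "running"},
}

def drift(actual, desired):
    """Map each non-compliant setting to its (actual, desired) pair."""
    return {k: (actual.get(k), v) for k, v in desired.items() if actual.get(k) != v}

for vm, state in fleet.items():
    issues = drift(state, DESIRED)
    print(vm, "->", issues or "in compliance")
```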
"
|
3,929 | 2,021 |
"Former Accenture Digital Group CEO, Mike Sutcliff, joins mce Advisory Board | VentureBeat"
|
"https://venturebeat.com/business/former-accenture-digital-group-ceo-mike-sutcliff-joins-mce-advisory-board"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release Former Accenture Digital Group CEO, Mike Sutcliff, joins mce Advisory Board Share on Facebook Share on X Share on LinkedIn Mike will also undertake an active role as a senior advisor to mce’s Digital Transformation Readiness (DTR) initiative for global mobile operators TEL-AVIV, Israel–(BUSINESS WIRE)–January 28, 2021– mce Systems Ltd.
(“ mce ” or the “ Company “) announces that Mr. Mike Sutcliff, a former group CEO of Accenture Digital has joined mce as an Advisory Board member.
[Photo: Mr. Mike Sutcliff, former Accenture Digital Group CEO and Advisory Board member at mce Systems Ltd. Photo courtesy of Accenture.] Mike has spent his career at Accenture creating practices and building businesses. His most recent experience was launching Accenture Digital and growing it to $20+ billion in revenues. The Accenture Digital business included the Accenture Interactive, Applied Intelligence, and Industry X.0 business units alongside all digital delivery technology teams. Mike is also an Operating Partner with Advent International Private Equity and an active angel investor across the technology, payments, healthcare, and life sciences markets. He also serves as a board director at Encora, which helps companies develop digitally enabled products and services.
Yuval Blumental, mce Co-Founder & CEO, stated: “Mike has built multibillion-dollar empires around digital transformation business advisory. He has done so for many years at Accenture and earned his place as a global thought leader in this space. We are honoured to have him join the team and help us set mce as a frontrunner in digital transformation readiness for operators worldwide.” Mr. Sutcliff commented: “I am excited to be working with MCE as a senior advisor to the leadership team as they bring innovative software products to the market capable of creating better customer experiences with connected devices. I believe that many physical products will become connected products and see a bright future for MCE as they develop the next generation of tools to manage those products with software updates and enhancements over time.” About mce Systems: mce Systems is a software solution and integration provider specializing in digital services solutions for mobile operators. mce enables device lifecycle management, device value optimization, cost reduction, and the generation of new business for operators worldwide, delivering omnichannel capability across web, call-center, retail, on-device, and reverse/forward logistics channels. Read more at www.mce.systems Follow us on LinkedIn at https://www.linkedin.com/company/mce-systems Visit our Facebook page at https://www.facebook.com/mceSystems Follow mce on Twitter @mce_systems View source version on businesswire.com: https://www.businesswire.com/news/home/20210128005902/en/ Media: Alona Stein ReBlonde for mce [email protected]
"
|
3,930 | 2,021 |
"Cloud data analytics service Phocas raises $34 million to grow AI, global footprint | VentureBeat"
|
"https://venturebeat.com/business/cloud-data-analytics-service-phocas-raises-34-million-to-grow-ai-global-footprint"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cloud data analytics service Phocas raises $34 million to grow AI, global footprint Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Phocas Software’s cloud data analytics tools might be best known in Australia, where they’re used by Thermo Fisher Scientific, Fiskars Royal Doulton, and Burson Automotive to streamline employee access to key financial data, but the company is planning to become more aggressive globally in 2021. Today, Phocas announced that it raised $34 million to bolster its AI capabilities and reach new customers across the world, while expanding its data tools to reach new verticals.
With the new capital, Phocas plans to “supercharge” its outreach to American and UK companies while increasing its tools’ use of AI to extract value from data. Developed to enable businesses to run operations without cross-referencing spreadsheets, Phocas’ tools enable employees to easily track KPIs and similar metrics specific to their roles, using industry-customized data reports that can be accessed from corporate or home offices.
Phocas’ expansion is significant to technical decision makers because it reflects growing competition in and segmentation of the enterprise data technology arena. The company’s software-as-a-service (SaaS) data analytics tools have historically targeted medium-size enterprises rather than small or large businesses, notably including medical, scientific, manufacturing, and automotive companies in Australia, the United States, and the United Kingdom. Over 1,900 “mid-market” customers already use Phocas, including American Metal Supply and Dixie Plywood in the U.S. and various retail, hospitality, and equipment vendors in the UK; these types of businesses will continue to be Phocas’ focus, but it will also use its new funding to expand to unspecified new industries.
The company’s core business intelligence tools are largely made for sales, purchasing, finance, and executive teams, enabling daily rather than monthly tracking of data across an enterprise. With support for over 20 global ERP partners — notably including Epicor — Phocas’ cloud-based tool can connect to enterprise ERP, CRM, and AP/AR systems, as well as other data sources.
Australia’s Ellerston Capital led the funding round with a $27 million equity investment, while earlier investor OneVentures added $7 million in equity financing, increasing its stake after four years to accelerate Phocas’ place in the SaaS market. The company continues to be led by cofounder and CEO Myles Glashier.
"
|
3,931 | 2,021 |
"Classiq aims to advance software development for quantum computers | VentureBeat"
|
"https://venturebeat.com/business/classiq-aims-to-advance-software-development-for-quantum-computers"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Analysis Classiq aims to advance software development for quantum computers Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Startups providing the tools to build software that will run on quantum computers are enjoying attention from investors.
Classiq, which provides a modeling tool for building algorithms for quantum computers, revealed this week that it has raised $10.5 million.
The round was led by Team8 and Wing Venture Capital, with additional participation from Entrée Capital, OurCrowd, and IN Venture, the corporate venture arm of Sumitomo in Israel. Previously, Classiq had raised $4 million in a seed round from Entrée Capital.
Algorithms for quantum computers have thus far been built using low-level tools that are specific to each platform. But this approach is painstakingly slow and results in algorithms that can only run on one quantum computing platform, Classiq cofounder and CEO Nir Minerbi said.
“It’s like programming was back in the 1950s,” Minerbi said. “Developers are working at the equivalent of the gate level.” Classiq has developed a modeling tool that enables developers to build algorithms for quantum computers at a much higher level of abstraction. That capability not only increases the rate at which those algorithms can be built, Minerbi said, it also enables algorithms to be employed on different quantum computing platforms.
The tools Classiq provides are roughly equivalent to the chip design tools for conventional computing systems provided by companies like Cadence Design Systems, Minerbi noted.
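To make that gate-level-versus-model contrast concrete, here is a minimal Python sketch. The first half uses Qiskit, a real open-source quantum SDK, to place individual gates by hand; the second half is a purely hypothetical higher-level description (invented for illustration, not Classiq's actual interface) of the kind a synthesis engine could compile into a platform-specific circuit.

```python
# Gate-level programming today (real Qiskit API): every gate is placed by hand.
from qiskit import QuantumCircuit

bell = QuantumCircuit(2)
bell.h(0)        # Hadamard puts qubit 0 into superposition
bell.cx(0, 1)    # CNOT entangles qubit 0 with qubit 1
bell.measure_all()

# Higher-level modeling (hypothetical, for illustration only): the developer
# declares intent, and a synthesis engine emits a circuit per target platform.
model = {
    "function": "entangled_pair",
    "qubits": 2,
    "target_backend": "any",  # synthesis chooses the gate set per platform
}
```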
Quantum computers running experimental applications today are based on quantum circuits that make the qubit the atomic unit of computing. Traditional computing systems are based on bits that can be set to 0 or 1. A qubit can represent 0 and 1 at the same time, which should theoretically increase raw compute horsepower to the point where more complex chemistry problems could be solved to advance climate change research, or where encryption schemes widely employed to ensure cybersecurity could be broken. Experts also expect that quantum computers will advance AI by making it possible to train more complex models much more quickly.
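A short worked illustration of that claim, using numpy: a qubit's state is a pair of amplitudes rather than a single bit, and the number of amplitudes doubles with each added qubit.

```python
import numpy as np

# A single qubit in equal superposition: amplitudes for |0> and |1>.
qubit = np.array([1.0, 1.0]) / np.sqrt(2)
print(np.abs(qubit) ** 2)  # measurement probabilities: [0.5 0.5]

# An n-qubit register holds 2**n amplitudes, which is where the theoretical
# horsepower comes from.
for n in (1, 10, 50):
    print(f"{n} qubits -> {2 ** n:,} amplitudes")
```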
The challenge is that qubits are fragile and difficult to keep stable, and each hardware platform approaches the problem differently. However, Minerbi said people in his field expect that by 2023 systems with more than 1,000 qubits will have been created across various hardware platforms.
The list of companies with quantum computing initiatives based on one or more subsets of those qubits is extensive. It includes: Alphabet, IBM, Honeywell, Amazon, Microsoft, D-Wave Systems, Alibaba, Nokia, Intel, Airbus, Hewlett-Packard Enterprise (HPE), Toshiba, Mitsubishi, SK Telecom, NEC, Raytheon, Lockheed Martin, Rigetti Computing, Biogen, Volkswagen, Silicon Quantum Computing, IonQ, Huawei, Amgen, and Zapata.
The Chinese government is also known to be funding quantum computing research, as is the U.S. via groups like the National Security Agency (NSA), National Aeronautics and Space Administration (NASA), and Los Alamos National Laboratory. In most cases, there is a fair amount of collaboration between these U.S. entities. For example, Alphabet’s Google subsidiary created its Quantum AI Laboratory in collaboration with NASA, using quantum computers provided by D-Wave Systems.
If organizations want to build applications that can take advantage of qubits that might be stable enough to support applications by 2023, they would need to start those efforts this year, Minerbi said. As such, Classiq expects demand for tools used to build quantum algorithms to steadily increase through the rest of this year and into the next.
Given the cost of building quantum computers, they will for the most part be made available as another type of infrastructure-as-a-service (IaaS) platform. As quantum computing moves past the experimentation phase, the number of companies providing tools to build and manage software on these platforms will also grow. It may be a while before quantum computing applications are employed in production environments, but the tools for building those applications are already starting to find their way into the hands of researchers.
Brands face a ‘year of change’ with consumer privacy laws
https://venturebeat.com/business/brands-face-a-year-of-change-with-consumer-privacy-laws
Over the last year, advertisers have seen their entire landscape shift as a result of changes to consumer privacy laws all over the world. In the United States, the California Consumer Privacy Act (CCPA), along with its amendment, the California Privacy Rights Act (CPRA), which voters approved last November, allows consumers to know exactly what personal information is being collected and gives them the opportunity to opt out of having their information sold. Similarly, Apple’s updates to its IDFA will present consumers with a pop-up window warning them that an app is tracking their data, along with the option to opt out of tracking. In France, the CNIL’s latest guidelines, from October 2020, made it compulsory that refusing cookies be as simple as accepting them on the cookie banner.
These changes have left brands and advertisers to adapt — and to do so quickly — with virtually no roadmap. As brands and advertisers are forced to change their strategies to comply with new consumer privacy laws, they are faced with the reality that the laws are impacting every facet of their industry. In order to navigate these challenges, brands and advertisers must take into account key factors to evolve their strategies alongside ever-changing consumer privacy laws.
Being compliant is not always easy

Navigating the new consumer privacy laws, and the challenges they bring to advertising, has proven to be an uphill battle for most brands. Advertisers have been tasked with understanding and complying with a patchwork of laws pertaining to consumer data, which has made it difficult to adjust their strategies.
A data-collection approach that satisfies one regulation may still fall short of another.
For example, under GDPR, advertisers are required to obtain “informed consent,” including presenting the available choices in a similar manner so as not to influence the user’s decision, while under CCPA, consumer action centers on opting out. Advertisers must approach each of those situations differently, because the consumer is being asked two different things. How to find a universally accepted strategy, or to adjust strategy by region in an efficient and cost-effective way, is something advertisers have yet to figure out.
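One way to picture the difference is as two default states for the same decision. The sketch below is a hypothetical simplification (the regime rules and function names are invented for illustration, not legal guidance):

```python
from typing import Optional

# Hypothetical sketch of regime-aware consent handling.
REGIMES = {
    "GDPR": {"default_allowed": False},  # processing requires prior opt-in
    "CCPA": {"default_allowed": True},   # processing allowed until opt-out
}

def may_personalize(regime: str, user_choice: Optional[str]) -> bool:
    """Decide whether ad personalization may run for this user."""
    if user_choice == "opt_in":
        return True
    if user_choice == "opt_out":
        return False
    return REGIMES[regime]["default_allowed"]  # no recorded choice yet

assert may_personalize("GDPR", None) is False  # consent required up front
assert may_personalize("CCPA", None) is True   # allowed until the user opts out
```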
Many questions around compliance revolve around regions. For example, the CCPA is only enforceable in California. If a resident of California is on vacation in another state or country, do advertisers and brands still need to follow CCPA regulations? Advertisers are also struggling with the concept of consumers using multiple devices. With the IDFA changes, does one app need to ask consumers on all devices for permission to have their information collected? There are no answers yet to these questions, because there is no country-wide or universal regulation. With so many variables still unknown, advertisers are struggling to create a cohesive plan that works effectively.
In order to survive as consumer privacy laws evolve, advertisers must evolve with them. To make the evolution as seamless as possible, there are two key factors advertisers should consider: creating a new data-collection strategy and making transparency a priority.
Reevaluate your data collection strategy

Moving into an age where consumer privacy is at the forefront of everyone’s mind and “privacy by design” has become a popular topic, advertisers and brands need to take a hard look at themselves, and more specifically at the type of data they are collecting. Is every piece of data collected from consumers useful? Does the data play an instrumental role in the success of the brand? When a brand collects data for media and marketing purposes, is it using and storing that data in a way that fits all applicable regulations? As regulations mount, collecting user data will not be the primary way advertisers gather information moving forward. Advertisers and brands need to think outside the box to both gather the information they need to be effective and remain compliant with new consumer privacy standards.
Some advertisers may see the most success by deepening relationships with consumers.
Consumers are inundated with emails and messages offering them the next opportunity, so building genuine relationships and connections may be more valuable in the long term. To maximize those relationships, CRM data is crucial.
Brands that are able to dig deeper into how consumers are feeling and empathize with those emotions have a better chance of creating long-lasting customers. Offering meaningful messages and showing that they are trustworthy can help advertisers find the right way to reach consumers with very little data.
Transparency is key

Many of the changes coming to consumer privacy laws give the consumer more control over whether, and what kind of, information is collected. The changes will require that brands obtain more explicit consent from users for marketing and media purposes and that they offer an easier way to refuse data collection. As a result, it is imperative that brands move into 2021 with a sense of transparency. Consumers want to know “what is in it for me?” Informing them that data collection allows for more personalized offers and unique shopping experiences can be a huge advantage for advertisers.
Letting consumers know exactly what information of theirs will be collected, where it is going, and how it is being used may make them more open to the idea of having their information collected.
Now is the time to adapt

While there are still many unknowns surrounding the new consumer privacy laws and how the different martech tools will evolve and adapt, one thing is certain: Advertisers and brands have no choice but to adapt and progressively change the way they plan, orchestrate, and measure performance of their media and marketing campaigns.
In order to continue to experience success, advertisers need to evolve their strategies along with the changing consumer privacy regulations and new tech limitations.
Guillaume Tollet is a Consulting Associate Director with fifty-five.
For over 13 years, he has supported companies in their strategic, digital, and data transformation.
AI Weekly: Announcing our ‘AI and the future of health care’ special issue
https://venturebeat.com/business/ai-weekly-announcing-our-ai-and-the-future-of-health-care-special-issue
Artificial intelligence and health care both deal heavily with issues of complexity, efficacy, and societal impact. All of that is multiplied when the two intersect. As health care providers and vendors work to use AI and data to improve patient care, health outcomes, medical research, and more, they face what are now standard AI challenges. Data is difficult and messy. Machine learning models struggle with bias and accuracy. And ethical challenges abound. But there’s a heightened need to solve these problems when they’re couched within the daily life-and-death context of health care.
Then, in the midst of the AI’s growth in health care, the pandemic hit, challenging old ways of doing things and pushing systems to their breaking points. In our upcoming special issue, “ AI and the future of health care ,” we examine how providers and vendors are tackling the challenges of this extraordinary time.
The biggest hurdle has to do with data. Health care produces massive amounts of data, from electronic health records (EHR) to imaging to information on hospital bed capacity. There’s enormous promise in using that data to create AI models that can improve care and even help cure diseases, but there are barriers to that progress. Privacy concerns top the list, but worldwide health care data also needs standardization. There are still too many errors in this data, and the medical community must address persisting biases before they become even more entrenched.
When humans rely on AI to help them make clinical decisions like injury or disease diagnoses, they also have to be aware of their own biases. Because bias exists in the data AI models are built upon, practitioners have to be careful not to fall into the trap of automation bias, relying too much on model output to make decisions. It’s a delicate balance with profound impacts on human health and life.
The pandemic has also challenged the practical day-to-day functions of health care systems. As COVID-19 cases threaten to overwhelm hospitals and patients and doctors risk infection during in-person visits, providers are figuring out how to deliver patient care remotely. With more doctors shifting to telemedicine, chatbots and other tools are helping relieve some of the burden and allowing patients to access care from the safety of their own homes.
For particularly vulnerable populations, like senior citizens, remote care may be necessary, especially if they’re in locked-down residential facilities or can’t easily get to their doctor. The technologies involved in monitoring such patients include wearables that track vitals and even special wireless tech that offers no-touch, personalized biometric tracking.
These are sea changes in health care, and because of the pandemic, they’re coming faster than anyone expected. But a certain optimism persists — a sense that despite unprecedented challenges to the medical field, careful and responsible use of AI can enable permanent, positive changes in the health care system. The astonishing speed with which researchers developed a working COVID-19 vaccine offers ample evidence of the way necessity spurs medical innovation. The best of the technologies, tools, and techniques that health care providers are employing now could soon become standard and lead to more democratized, less expensive, and overall better health care.
You can get this special issue delivered straight to your inbox next week by signing up here.
AI needs an open labeling platform
https://venturebeat.com/business/ai-needs-an-open-labeling-platform
These days it’s hard to find a public company that isn’t talking up how artificial intelligence is transforming its business. From the obvious (Tesla using AI to improve Autopilot performance) to the less obvious (Levi’s using AI to drive better product decisions), everyone wants in on AI.
To get there, however, organizations are going to need to get a lot smarter about data. To even get close to serious AI, you need supervised learning, which in turn depends on labeled data.
Raw data must be painstakingly labeled before it can be used to power supervised learning models. This budget line item is big enough for C-suite attention. Executives who have spent the last 10 years stockpiling data and now need to turn that data into revenue face three choices:

1. DIY and build your own bespoke data labeling system.
Be ready and budget for major investments in people, technology, and time to create a robust, production-grade system at scale that you will maintain in perpetuity. Sound straightforward? After all, that’s what Google and Facebook did. The same holds true for Pinterest, Uber, and other unicorns. But those aren’t good comps for you. Unlike you, they had battalions of PhDs and IT budgets the size of a small country’s GDP to build and maintain these complex labeling systems. Can your organization afford this ongoing investment, even if you have the talent and time to build a from-scratch production system at scale in the first place? If you’re the CIO, that’s sure to be a top MBO.
2. Outsource.
There is nothing wrong with professional services partners, but you will still have to develop your own internal tooling. This choice takes your business into risky territory. Many providers of these solutions mingle third-party data with your own proprietary data to make N sample sizes much larger, theoretically resulting in better models. Do you have confidence in the audit trail of your own data to keep it proprietary throughout the entire lifecycle of your persistent data labeling requirements? Are the processes you develop as competitive differentiators in your AI journey repeatable and reliable — even if your provider goes out of business? Your decade of hoarded IP — data — could possibly help enrich a competitor who is also building its systems with your partners. Scale.ai is the largest of these service companies, serving primarily the autonomous vehicle industry.
3. Use a training data platform (TDP).
Relatively new to the market, these are solutions that provide a unified platform to aggregate all of the work of collecting, labeling, and feeding data into supervised learning models, or that help build the models themselves. This approach can help organizations of any size to standardize workflows in the same way that Salesforce and Hubspot have for managing customer relationships. Some of these platforms automate complex tasks using integrated machine learning algorithms, making the work easier still. Best of all, a TDP solution frees up expensive headcount, like data scientists, to spend time building the actual structures they were hired to create — not to build and maintain complex and brittle bespoke systems. The purer TDP players include Labelbox , Alegion, and Superb.ai.
Above: Labelbox is an example of a TDP platform that supports labeling of text and images, among other data types.
Why you need a training data platform

The first thing any organization on an AI journey needs to understand is that data labeling is one of the most expensive and time-consuming parts of developing a supervised machine learning system. Data labeling does not stop when a machine learning system has matured to production use. It persists and usually grows. Regardless of whether organizations outsource their labeling or do it all in-house, they need a TDP to manage the work.
A TDP is designed to facilitate the entire data labeling process. The idea is to produce better data, faster, thereby enabling organizations to create performant AI models and applications as quickly as possible. There are a few companies in the space using the term today, but few are true TDPs.
Two things ought to be table stakes: enterprise-readiness and an intuitive interface. If it’s not enterprise-ready, IT departments will reject it. If it’s not intuitive, users will route around IT and find something that’s easier to use. Any system that handles sensitive, business-critical information needs enterprise-grade security and scalability or it will be a non-starter. But so is anything that feels like an old-school enterprise product. We’re at least a decade into the consumerization of IT. Anything that isn’t as simple to use as Instagram just won’t get used. Remember Siebel’s famous salesforce automation shelfware? Salesforce stole that business out from under their noses with an easy user experience and cloud delivery.
Beyond those basics, there are three big requirements: annotate, manage, and iterate. If a system you are considering does not satisfy all three of these requirements, then you’re not choosing a true TDP. Here are the must-haves on your list of considerations:

Annotate.
A TDP must provide tools for intelligently automating annotation.
As much labeling as possible should be done automatically. A good TDP should be able to work with a limited amount of professionally-labeled data. For example, it would start with tumors circled by radiologists in X-rays before pre-labeling the tumors itself. The task of humans then is to correct anything that was mislabeled. The machine assigns a confidence output — for example, it might be 80% confident that a given label is correct. The highest priority for humans should be checking and correcting the labels in which the machines have the least confidence. As such, organizations should look to automate annotation and invest in professional services to ensure the accuracy and integrity of the labeled data. Much of the work around annotation can easily be done without human help.
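In code, that triage step is essentially a sort on confidence. Here is a minimal sketch, assuming a model that emits (item, label, confidence) tuples; the names and threshold are illustrative only:

```python
# Confidence-based review routing: the least certain labels go to humans first.
predictions = [
    ("scan_001", "tumor", 0.97),
    ("scan_002", "no_tumor", 0.62),
    ("scan_003", "tumor", 0.81),
    ("scan_004", "no_tumor", 0.55),
]

REVIEW_THRESHOLD = 0.80

needs_review = sorted(
    (p for p in predictions if p[2] < REVIEW_THRESHOLD),
    key=lambda p: p[2],          # lowest confidence first
)
auto_accepted = [p for p in predictions if p[2] >= REVIEW_THRESHOLD]

print("human queue:", needs_review)   # scan_004 first, then scan_002
print("auto-labeled:", auto_accepted)
```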
Manage.
A TDP should serve as the central system of record for data training projects. It’s where data scientists and other team members collaborate. Workflows can be created and tasks can be assigned either through integrations with traditional project management tools or within the platform itself.
It’s also where datasets can be surfaced again for later projects. For example, each year in the United States, roughly 30% of all homes are quoted for home insurance. In order to predict and price risk, insurers depend on data, such as the age of the home’s roof, the presence of a pool or trampoline, or the distance of a tree to the home. To assist this process, companies now leverage computer vision to provide insurance companies with continual analysis via satellite imagery. A company should be able to use a TDP to reuse existing datasets when classifying homes in a new market. For example, if a company enters the UK market, it should be able to re-use existing training data from the US and simply update it to adjust for local differences such as building materials. These iteration cycles allow companies to provide highly accurate data while adapting quickly to keep up with the continuous changes being made to homes across the US and beyond.
That means your TDP needs to provide APIs for integration with other software, whether that’s project management applications, tools for harvesting and processing data, or SDKs that let organizations customize their tools and extend the TDP to meet their needs.
Iterate.
A true TDP knows that annotated data is never static. Instead, it’s constantly changing, ever iterating as more data joins the dataset and the models provide feedback on efficacy of the data. Indeed, the key to accurate data is iteration. Test the model. Improve the model. Test again. And again and again. A tractor’s smart sprayer might apply herbicide to one kind of weed 50% of the time, but as more images of the weed are added to the training data, future iterations of the sprayer’s computer vision model may boost that to 90% or higher. As other weeds are added to the training data, meanwhile, the sprayer can recognize those unwanted plants. This can be a time-consuming process, and it generally requires humans in the loop, even if much of the process is automated. You have to do iterations, but the idea is to get your models as good as they can be as quickly as possible. The purpose of a TDP is to accelerate those iterations and to make each iteration better than the last, saving time and money.
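The loop itself can be sketched in a few lines. Below is a toy version on synthetic data using scikit-learn; each round folds in a new batch of "annotated" examples and retrains, standing in for the test-improve-test cycle described above. The data generator and batch sizes are made up for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def labeled_batch(n):
    """Stand-in for a batch of examples coming back from annotation."""
    X = rng.normal(size=(n, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    return X, y

X_test, y_test = labeled_batch(500)
X_train, y_train = labeled_batch(50)

for round_ in range(3):
    model = LogisticRegression().fit(X_train, y_train)
    acc = model.score(X_test, y_test)
    print(f"round {round_}: {len(y_train)} examples, accuracy {acc:.3f}")
    X_new, y_new = labeled_batch(200)          # next annotation batch arrives
    X_train = np.vstack([X_train, X_new])
    y_train = np.concatenate([y_train, y_new])
```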
The future

Just as the shift in the 18th century to standardization and interchangeable parts ignited the Industrial Revolution, so, too, will a standard framework for defining TDPs begin to take AI to new levels. It is still early days, but it’s clear that labeled data — managed through a true TDP — can reliably turn raw data (your company’s precious IP) into a competitive advantage in almost any industry.
But C-suite executives need to understand the need for investing to tap the potential riches of AI. They have three choices today, and whichever decision they make, it will be expensive, whether it’s to build, outsource, or buy. As is often the case with key business infrastructure, there can be enormous hidden costs to building or outsourcing, especially when entering a new way of doing business. A true TDP “de-risks” that expensive decision while maintaining your company’s competitive moat, your IP.
(Disclosure: I work for AWS, but the views expressed here are mine.) Matt Asay is a Principal at Amazon Web Services. He was formerly Head of Developer Ecosystem for Adobe and held roles at MongoDB, Nodeable (acquired by Appcelerator), mobile HTML5 startup Strobe (acquired by Facebook), and Canonical. He is an emeritus board member of the Open Source Initiative (OSI) and a member of the Cognitive World Think Tank on enterprise AI.
5 ways to finally fix data privacy in America
https://venturebeat.com/business/5-ways-to-finally-fix-data-privacy-in-america
As a new administration enters the White House, we have the chance to finally fix privacy in America. Short of passing a national privacy law (which the majority of Americans want), we need action on data privacy. We need changes enacted swiftly and without delay. Both consumers and businesses deserve consistency and clarity.
As business leaders, we must put our customers first. We should be customer-centric in our thinking and unwavering in our support for greater controls that deflate the “us vs. them” mentality pervasive in privacy. Managing privacy shouldn’t be complicated and confusing. It should be as simple and straightforward as reasonably possible.
In that spirit, here are five ideas to make privacy work for consumers and businesses. A couple of them could likely even be mandated by Executive Order. One thing is sure: American companies are creative and innovative. We can protect an individual’s right to privacy while still delivering services and products that consumers want — at a profitable price. What are we waiting for?

1. Make privacy opt-in

It’s a sad state of affairs when companies have to trick you into agreeing to share your personal information to use their services or gain access to a website. Nearly everywhere you look, you have to opt out of data sharing. Flip that on its head: privacy is an opt-in experience; you have to “opt in” to protect your privacy because you have no default assumption of data privacy.
And you have to declare your privacy preferences with every single company. You have to tell each company that you want data privacy by telling them how to use your data. In essence, you’re opting into privacy by opting out of their data sharing. This needs to change. Data privacy should be the default. The onus is then on companies to show the benefits of data sharing so that consumers can actively choose to opt in to share their personal information.
2. Require plain English privacy policies

Privacy policies are dense documents thick with legalese. They’re so hard to understand that few people actually read them and so long that it would take 76 working days to read the policies you encounter in a single year. Incredible.
How in the world are we putting data privacy on the shoulders of consumers when it’s the companies that are getting the most benefit from invading our data privacy? They should be honest and transparent; we shouldn’t need lawyers to understand what we’re consenting to.
Companies should have privacy policies that are written plainly in easy-to-understand language. Businesses can put these policies into centralized privacy hubs. These hubs show users how their data is collected, stored, and used, as well as one-click privacy controls to manage their consent. Plain and simple language, with easy navigation in one location — that’s the answer.
3. Mandate privacy labels

Apple is absolutely on the right track here. The company’s requirement for app developers to clearly define how apps use data is a watershed moment for privacy in America. However, when one company acts alone, it doesn’t create a shared environment of trust. Even if other companies follow suit, we will only have a patchwork of privacy labels as dense as our current system. Instead, we should mandate privacy labels just like we do nutrition labels on foods.
Every company should explain its data usage with privacy labels that are consistent in content and conspicuous in placement — as in, the labels have the same layout and are easily located. When you flip over a product in the supermarket, you know what you’re going to find and where. Privacy should be the same: You should know what you’re signing up for in a consistent way across services.
4. Give data an expiration date

What if we simply required companies to allow each of us to set personal limits on data storage and usage? We could refine our data privacy settings in a more granular way, controlling our data destiny by deciding what data specific companies can use and for how long. Google has already started this in some products; all companies should follow suit.
If all data had an expiration date, it would prevent algorithms from using that data after the consumer has requested its deletion. Think about it: Even if you ask a company to stop using your data, it likely lives on in black-box algorithms. If data had an expiration date, it would rebalance the power away from the algorithms and towards humans.
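Mechanically, this is not hard to imagine: stamp every record with a user-chosen expiry and filter expired rows out of every read path, so downstream models never see them. A hypothetical sketch (the schema and names are invented for illustration):

```python
from datetime import datetime, timedelta, timezone

def now():
    return datetime.now(timezone.utc)

records = [
    {"user": "a", "email": "a@example.com", "expires": now() + timedelta(days=90)},
    {"user": "b", "email": "b@example.com", "expires": now() - timedelta(days=1)},
]

def readable(rows):
    """Only unexpired records are visible to queries or training jobs."""
    t = now()
    return [r for r in rows if r["expires"] > t]

print([r["user"] for r in readable(records)])  # ['a'] -- user b's data has aged out
```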
5. Make protecting data cheaper than abusing it

Data needs to be protected, plain and simple. When the FTC settled with Flo, the fertility app accused of misleading consumers about data usage, it highlighted what we’d known all along: Many companies, especially health and fitness trackers, know more about us than we know about ourselves.
And yet we have no idea how well businesses are protecting our personal data. Europe’s privacy law, the GDPR, requires data protection as a default — and the law makes non-compliance costly. Fines range from 10 million euros or 2% of worldwide annual revenue, whichever is higher, up to 20 million euros or 4% of revenue for the most serious violations. Those fines have also increased 40% year-over-year. While the U.S. fined Facebook $5 billion for abusing customer data in 2019 (a record fine), we need a consistent penalty framework that makes privacy protection less costly than privacy violation.
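Worked out, the GDPR's upper tier, the greater of 20 million euros or 4% of worldwide annual revenue, scales with company size:

```python
# GDPR upper-tier cap: the greater of a flat amount or a share of revenue.
def gdpr_max_fine(annual_revenue_eur: float) -> float:
    return max(20_000_000.0, 0.04 * annual_revenue_eur)

for revenue in (100e6, 2e9, 50e9):
    print(f"revenue EUR {revenue:,.0f} -> max fine EUR {gdpr_max_fine(revenue):,.0f}")
# EUR 100M -> 20M (the flat floor dominates); EUR 2B -> 80M; EUR 50B -> 2B
```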
Companies should rightly be penalized when they violate our trust. We must align the disincentives with the externalities caused by privacy abuses. When it’s less expensive to pay fines than implement sound privacy practices, we have a serious problem.
To fix privacy in America, we have to shift the burden of privacy management from consumers to companies. Privacy is a human right and should be a de facto facet of the internet — not something that we have to fight for at every turn of our online journeys. Privacy protection should be a mandatory part of doing business in America — not an optional afterthought.
Harry Maugans is the CEO of Privacy Bee.
His vision for the future of privacy is a world in which consumers have total transparency and control over their data footprints. He’s contributed to HackerNoon, AllBusiness, IB Times and ReadWrite.
3 ways 2021 will be digitally different
https://venturebeat.com/business/3-ways-2021-will-be-digitally-different
2020 was a huge struggle for individuals and businesses, but it did also bring progress, as it transformed work-life balance into work-life blended. It also gave companies a reality check on their digital aspirations. Digitally-ready companies shifted, optimized, expanded, or pivoted, but many companies that were not digitally-ready struggled to survive.
2021 may be different, or it may be more of the same. How you and your business respond to adversity in 2021 will again determine whether you thrive, survive, or shutter. Here are three trends to be aware of as you tackle that challenge:

Trend 1: The fuzzy concept of ‘digital transformation’ will become crystal clear

Many companies, including my own, have been talking about digital transformation for years, but many individuals and businesses have struggled to identify what that actually means for their business and what the steps are to digitally transform.
In 2021, if you don’t know what it means, then you are at risk of going out of business. 2020 saw many companies in the retail and hospitality space forced to move their business online much quicker than they intended. Online ordering at restaurants went from a “nice-to-have” to a requirement. Free shipping and free delivery went from a “promotion” to an expectation. If you didn’t transform, you were going out of business.
Some industries may not have been hit as hard by changes in buyer behavior in 2020, but trust me, it’s coming. You may have done layoffs, or cut spending and felt okay about slowed growth in 2020, but in 2021, that won’t be acceptable. Your competitors will be making it easier to research online, provide access to information online, communicate online, and ultimately to buy online. If you don’t keep up, you may go out of business.
Trade shows and conferences are a thing of the past. Schmoozing at sporting events and fancy dinners is a thing of the past. Face-to-face meetings are a thing of the past. If you aren’t ready with suitable replacements for these long-standing ways of doing business, you will not survive.
2021 will be the year that digital transformation finally means something to every business.
Trend 2: Content creation will spread across the enterprise

Marketing teams have always been responsible for creating content. Whether it’s blogs, thought leadership articles, or webinars, content has always been king.
For departments outside of marketing, content has not always been as important. Externally facing teams like sales often relied more on meetings and conversations, even if they had access to content. Internal teams like finance were more likely to send emails or do live training sessions than provide real consumable and educational content.
In 2021, all teams will be going digital, which means all teams will create content. Sales will be creating videos and doing podcasts. Finance will be creating intranet sites and portals. These will not be managed by marketing or by IT; they will be managed by finance and sales teams themselves.
In the movie Ratatouille, there is a quote: “Not everyone can become a great artist, but a great artist can come from anywhere.” In 2021, “Not everyone can be great at creating content, but great content can come from anywhere.”

Trend 3: Data privacy will be everyone’s concern, but still your concern

2019 was the year of the General Data Protection Regulation (GDPR). It was painful for businesses, but most consumers didn’t really understand it in 2019. In 2020, consumers started to become more educated about data privacy, and for that reason, 2021 could result in a data revolution.
For any individual, there are probably thousands of companies that have information about them. It’s not just Facebook that knows your name, location, relatives, friends, and the places you visit. Many other companies are tracking everything you do, even if it’s just anonymous activity.
Local and federal government agencies are exploring how to limit data sharing and increase data privacy. Companies are giving consumers more control over their data than ever before. But the business of data is also growing wildly. ZoomInfo’s billion-dollar IPO is a great example of this.
Just because consumers have more control over their data doesn’t mean it’s more secure or private. It just means that the responsibility is more distributed.
With consumers becoming more educated about who has their data, companies will be forced to be more open and honest about how they are using that data. Companies will no longer be sheepish and secretive about their strategy for using customer data, and instead will begin actively showing consumers how the data they have acquired is actually improving the consumer’s experience with their brand.
Individuals will become more comfortable sharing data with brands when they realize the value it adds to their experience. Transparency will be an important theme in the data privacy and security conversation.
The above trends are only a glimpse of how 2021 will be different. Due to the pandemic and the increase in digitalization, companies will find new ways to solve complex problems. No matter how companies approach these trends, one thing is sure: we are now living truly digital lives.
Joe Henriques has been a strategist in the field of marketing technology for 15 years. Currently President of Jahia for North America, he engages with leading global organizations on digital transformation and its effect on their business. He previously spent more than 11 years at Sitecore in various strategic sales and partner positions.
3 tips for enterprises as Apple’s iOS14 privacy features roll out
https://venturebeat.com/business/3-tips-for-enterprises-as-apples-ios14-privacy-features-roll-out
Enterprises and app developers, brace yourselves — the iOS 14 upgrade will soon roll out a new data consent window that will appear in all apps that collect and share data with outside parties for advertising purposes. The rollout will have a widespread impact on businesses and will affect the number of iOS devices available for personalized advertising.
Many consumers will view the new consent features as a positive step toward better privacy protection, and they are.
For developers and enterprises, each consumer’s decision to consent to or refuse “Tracking” will shape the business models of the App Store economy and the wider internet for years to come.
The new consent screens give consumers more control in shaping the Future of the Internet, which will ultimately be a net positive. Clarity, transparency, and consumer control are good for iPhone users — and the internet at large. But there are still steps that developers — and enterprises — can take to ensure that they not only comply with Apple’s new rules, but find success in the next era of the privacy-first internet.
Here are three strategic recommendations that can help developers and enterprises adapt to the new privacy normal:

1. Let your users know WHY you need their data — and what benefit they derive from opting into data sharing

While the language included in Apple’s mandatory AppTrackingTransparency (ATT) notification cannot be changed, developers can add a message that appears ahead of the ATT consent. This message can include any language the developer chooses (so long as it is accurate and not misleading) and should be utilized as a way to build trust with the user. After all, if the user trusts an app, they’ll be more likely to consent to data sharing.
When possible, use plain, concise language that will clearly articulate what kind of data is being collected, what it is being used for, and (most importantly) the value exchange – why the user benefits from sharing that data. Perhaps certain app functions are improved by data sharing, or the app is funded through data-sharing, and users would need to pay for downloads if the app can no longer collect data. Regardless of the reason, this primer message is the best opportunity to make your case to your user.
To see if different language affected opt-in rates, Foursquare tested out several versions of our primer messages on our own app users. While it’s still early days, our results showed that a straightforward explanation of the value exchange (“Support City Guide. Your data allows us to provide this app for free to you.”) yielded the highest number of opt-ins. We shouldn’t be surprised that consumers respect when businesses are transparent with them.
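Comparing primer variants like this is a standard funnel analysis. A minimal sketch of the arithmetic, with made-up counts rather than Foursquare's actual numbers:

```python
# Opt-in rates per primer-message variant; all counts are illustrative.
variants = {
    "value_exchange": {"shown": 1000, "opted_in": 412},
    "generic_ask":    {"shown": 1000, "opted_in": 287},
    "no_primer":      {"shown": 1000, "opted_in": 201},
}

for name, v in sorted(variants.items(), key=lambda kv: -kv[1]["opted_in"]):
    rate = v["opted_in"] / v["shown"]
    print(f"{name:15s} opt-in rate {rate:.1%}")
```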
2. Shift to an ID-agnostic strategy

As mobile advertising IDs (MAIDs — also known as IDFAs) are phased out, enterprises and developers need to embrace a pluralistic future and an interim period of complexity around identity. The Future of the Internet will involve multiple types of identifiers, and it will take time for each company to find the solution that works best for both the business and its users. During this period, developers must be nimble and willing to keep an ID-agnostic approach until they’ve experimented with several different forms of ID, and until we see how the whole market shakes out.
For many, email addresses will emerge as the best form of identity because user consent is clearly established. When users willingly provide their emails while downloading an app or setting up a profile, they authenticate the relationship between themselves and the service. There are other industry solutions being rolled out to further protect consumer privacy that have emails as their foundation, so establishing a logged-in user base today may allow you to leverage those solutions as they gain prominence and adoption.
3. Plan for the short and long term to avoid product interruptions

The future is likely going to look more contextual and probabilistic, and less deterministic. This may sound daunting to many enterprises that have been doing marketing the same way for a long time. Enterprises must plan for a future in which scale is in shorter supply and accessing device-level identity may be more challenging. Apple’s changes are not the final chapter in this story. As the next step, expect Android to follow with changes to the availability of Google advertising IDs (AAIDs) in late 2021 or early 2022.
To adapt for the long-term, double down today on investments on data science, or find partners who are already doing so. For example, some enterprises are experimenting with cohort-based ad delivery and measurement. Plan to keep adding scale and incorporating new types of data — such as transaction data — that will help fill in the gaps left by the loss of MAIDs. It’s also important to have a holistic strategy across first-, second-, and third-party data. When you leverage second- and third-party data, being strategic means vetting your partners to be sure they are adhering to the same privacy principles as your company because your reputations will be linked.
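As a rough illustration of the cohort idea: group users by a coarse interest profile, drop any group too small to be meaningfully anonymous, and deliver ads to the group rather than the device. Everything below is invented for illustration:

```python
from collections import defaultdict

# Users are grouped by a coarse interest profile, never by device ID.
users = [
    {"id": 1, "interests": frozenset({"travel", "food"})},
    {"id": 2, "interests": frozenset({"travel", "food"})},
    {"id": 3, "interests": frozenset({"gaming"})},
]

cohorts = defaultdict(list)
for u in users:
    cohorts[u["interests"]].append(u["id"])

MIN_COHORT_SIZE = 2  # drop groups too small to preserve anonymity
targetable = {k: v for k, v in cohorts.items() if len(v) >= MIN_COHORT_SIZE}
print(targetable)  # only the travel+food cohort is large enough to target
```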
Exactly what the Future of the Internet will look like is still a mystery, but there’s no reason for developers or enterprises to move forward blindly. By taking the above steps and, perhaps most importantly, committing to being flexible, you won’t just be “riding out” the impending changes but will actually be adapting both your business and the ecosystem to a more sustainable — and privacy-sensitive — place.
Tyler Finn is the Director of Data Strategy at Foursquare , where he focuses on the future of privacy and identity. Prior to the merger with Foursquare, he led global privacy and policy initiatives for Factual. Earlier in his career, he worked on public policy in the unmanned aerial vehicles space.
What will it take to make good on the promise of the metaverse?
https://venturebeat.com/ai/what-will-it-take-to-make-good-on-the-promise-of-the-metaverse
The “How to Build the Metaverse” panel during GamesBeat’s two-day all-digital metaverse event brought together speakers who are tackling the challenges in bringing the metaverse to life.
Wanda Meloni, CEO & principal analyst at M2 Insights, was joined on stage by John Linden, CEO of Mythical Games, Jacob Navok, CEO of Genvid, and Chris Swan, publishing director at Unit 2 Games — together representing all parts of the spectrum, from creation to cloud and experience.
“To me, the metaverse is this ideology that we need to get to as an industry that opens up accessibility, makes gaming more ubiquitous, and makes a lot of these gaming experiences flow together,” said Linden. “What’s great about this panel is we’re all working on different pieces of technology that contribute to that overall ideology.” Navok called the metaverse the next format of the internet: Mass scale, persistent connection, a space where people connect — though instead of looking at standard HTML pages, they’ll be part of a 3D-rendered universe.
“It takes years and decades to get to the level of network and compatibility formats that are going to be required to enable this, and we’re experimenting with some of the first experiences of what happens when you have millions of people in the exact same piece of content,” he said.
Genvid is doing that with Rival Peak, a show it just started airing on Facebook using its technology.
The show, a game-like reality series with AI characters, has attracted millions of interactions in just the brief period of time it’s been online, Navok added.
Right now, Unit 2 is in early adopter territory, Swan said, where the pioneering users are the ones patient enough to physically move in and out of experiences that aren’t connected, because the infrastructure isn’t there yet.
“But the pandemic has made people find a lot more perseverance to try and stick to these technologies and learn more about it,” he pointed out. “Can I go to an online festival or do this thing that I wouldn’t otherwise have put myself out of my comfort box to try?” Of course, there are still technology challenges in the effort to develop the metaverse — and one of the major stumbling blocks is open standards, Navok said. The internet evolved from work done by a consortium of university and government organizations with open standards, effectively, so the universal markup language they created enabled users to talk to each other without the barriers of mismatched standards.
“This is what needs to start happening when it comes to the metaverse,” said Navok. “We need to be able to have the interchange formats between different graphic types. The Unity engine needs to talk to the Unreal engine needs to talk to the various other engines people use in order to have that seamless experience.” It will also require networks that are capable of dealing with lots of people connected in real time. Ironically the pandemic is helping us see what that’s like for the first time — everyone is on video calls, and the same kind of persistent connection a video call requires is the kind of connection that multiplayer gaming requires, and the kind of connection that is needed in the metaverse.
Another problem is rights: moving between worlds with different IPs can be tricky, and transferring an avatar between them might end up being a sticky legal issue.
For Linden, the biggest issue is ubiquity of access — being able to access what you’re looking for on any device you use. There are a lot of pieces behind that, especially the walled gardens of major properties which have a number of rules and regulations around what you can and can’t do. Even if you’re able to bring an experience to a platform in the first place, you still need to work around the platform’s restrictions, and everyone has different rules.
“If you can pull up any game on any platform, that’s a big first step,” Linden said. “Then once you have that, you can move to a little bit more of the interoperability. Now your wallets can move. Your access to items can move. But until we have that, we’re pretty gated right now before we can even start the concept.” Swan added, “It’s this interoperability that’s going to be a tricky one to crack, whether that starts with big official partnerships, where a couple of big entities get together and do a deal here, another deal there, until a standard falls out — or whether we can be smarter than that and start the other way round from the bottom up.” So how do you make the experience seamless for a player who wants to move between two completely different game engines? Does the player have to have downloaded both? Can assets generated in one game transfer to the other and be read? Does your computer have to go and patch it before you jump into that portal? Navok explained that these are real usability issues which, left unsolved, would undermine the seamless experience metaverse engineers are imagining.
“The conclusion is something I’ve been thinking about for a while now,” said Navok. “The same way in which we have a W3C that sets web standards, we’re going to need an open metaverse foundation to set interoperability between games and different policies. If this panel ends up being a call for that, we’ve accomplished our initial mission.”
"
|
3,939 | 2,021 |
"What algorithm auditing startups need to succeed | VentureBeat"
|
"https://venturebeat.com/ai/what-algorithm-auditing-startups-need-to-succeed"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Analysis What algorithm auditing startups need to succeed Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
To provide clarity and avert potential harms, algorithms that impact human lives would ideally be reviewed by an independent body before they’re deployed, just as environmental impact reports must be approved before a construction project can begin. While no such legal requirement for AI exists in the U.S., a number of startups have been created to fill an algorithm auditing and risk assessment void.
A third party that is trusted by the public and potential clientele could increase trust in AI systems overall. As AI startups in aviation and autonomous driving have argued, regulation could enable innovation and help businesses, governments, and individuals safely adopt AI.
In recent years, we have seen proposals for numerous laws that support algorithm audits by an external company, and last year dozens of influential members of the AI community from academia, industry, and civil society recommended external algorithm audits as one way to put AI principles into action.
Like consulting firms that help businesses scale AI deployments, offer data monitoring services, and sort unstructured data, algorithm auditing startups fill a niche in the growing AI industry. But recent events surrounding HireVue seem to illustrate how these companies differ from other AI startups.
HireVue is currently used by more than 700 companies, including Delta, Hilton, and Unilever, for prebuilt and custom assessment of job applicants based on a resume, video interview, or their performance when playing psychometric games.
Two weeks ago, HireVue announced that it would no longer use facial analysis to determine whether a person is fit for a job. You may ask yourself: How could recognizing characteristics in a person’s face have ever been considered a scientifically verifiable way to conclude that they’re qualified for a job? Well, HireVue never really proved out those results, but the claim raised a lot of questions.
A HireVue executive said in 2019 that 10% to 30% of competency scores could be tied to facial analysis. But reporting at that time called the company’s claim “profoundly disturbing.” Before the Utah-based company decided to ditch facial analysis, ethics leader Suresh Venkatasubramanian resigned from a HireVue advisory board.
And the Electronic Privacy Information Center filed a complaint with the Federal Trade Commission (FTC) alleging HireVue engaged in unfair and deceptive trade practices in violation of the FTC Act.
The complaint specifically cites studies that have found facial recognition systems may identify emotion differently based on a person’s race. The complaint also pointed to a documented history of facial recognition systems misidentifying women with dark skin, people who do not conform to a binary gender identity, and Asian Americans.
Facial analysis may not identify individuals — like facial recognition technology would — but as Partnership on AI put it, facial analysis can classify characteristics with “more complex cultural, social, and political implications,” like age, race, or gender.
Despite these concerns, in a press release announcing the results of their audit, HireVue states: “The audit concluded that ‘[HireVue] assessments work as advertised with regard to fairness and bias issues.'” The audit was carried out by O’Neil Risk Consulting and Algorithmic Auditing (ORCAA), which was created by data scientist Cathy O’Neil. O’Neil is also author of the book Weapons of Math Destruction, which takes a critical look at algorithms’ impact on society.
The audit report contains no analysis of AI system training data or code, but rather conversations about the kinds of harm HireVue’s AI could cause in conducting prebuilt assessments of early career job applicants across eight measurements of competency.
The ORCAA audit posed questions to teams within the company and external stakeholders, including people asked to take a test using HireVue software and businesses that pay for the company’s services.
After you sign a legal agreement, you can read the eight-page audit document for yourself. It states that by the time ORCAA conducted the audit, HireVue had already decided to begin phasing out facial analysis.
The audit also conveys a concern among stakeholders that visual analysis makes people generally uncomfortable. And a stakeholder interview participant voiced concern that HireVue facial analysis may work differently for people wearing head or face coverings and disproportionately flag their application for human review. Last fall, VentureBeat reported that people with dark skin taking the state bar exam with remote proctoring software expressed similar concerns.
Brookings Institution fellow Alex Engler’s work focuses on issues of AI governance. In an op-ed at Fast Company this week, Engler wrote that he believes HireVue mischaracterized the audit results to engage in a form of ethics washing and described the company as more interested in “favorable press than legitimate introspection.” He also characterized algorithm auditing startups as a “burgeoning but troubled industry” and called for governmental oversight or regulation to keep audits honest.
HireVue CEO Kevin Parker told VentureBeat the company began to phase out facial analysis use about a year ago. He said HireVue arrived at that decision following negative news coverage and an internal assessment that concluded “the benefit of including it wasn’t enough to justify the concern it was causing.”
“Alex Engle is right: algorithmic auditing companies like mine are at risk of becoming corrupt. We need more leverage to do things right, with open methodology and results. Where would we get such leverage? Lawsuits, regulatory enforcement, or both.” — Cathy O’Neil (@mathbabedotorg), January 26, 2021
Parker disputes Engler’s assertion that HireVue mischaracterized audit results and said he’s proud of the outcome. But one thing Engler, HireVue, and ORCAA agree on is the need for industrywide changes.
“Having a standard that says ‘Here’s what we mean when we say algorithmic audit’ and what it covers and what it says intent is would be very helpful, and we’re eager to participate in that and see those standards come out. Whether it’s regulatory or industry, I think it’s all going to be helpful,” Parker said.
So what kind of government regulation, industry standards, or internal business policy is needed for algorithm auditing startups to succeed? And how can they maintain independence and avoid becoming co-opted like some AI ethics research and diversity in tech initiatives have in recent years? To find out, VentureBeat spoke with representatives from bnh.ai, Parity, and ORCAA, startups offering algorithm audits to business and government clients.
Require businesses to carry out algorithm audits
One solution endorsed by people working at each of the three companies was to enact regulation requiring algorithm audits, particularly for algorithms informing decisions that significantly impact people’s lives.
“I think the final answer is federal regulation, and we’ve seen this in the banking industry,” bnh.ai chief scientist and George Washington University visiting professor Patrick Hall said. The Federal Reserve’s SR 11-7 guidance on model risk management currently mandates audits of statistical and machine learning models, which Hall sees as a step in the right direction. The National Institute of Standards and Technology (NIST) tests facial recognition systems trained by private companies, but that is a voluntary process.
ORCAA chief strategist Jacob Appel said an algorithm audit is currently defined as whatever a selected algorithm auditor is offering. He suggests companies be required to disclose algorithm audit reports the same way publicly traded businesses are obligated to share financial statements. For businesses to undertake a rigorous audit when there is no legal obligation for them to do so is commendable, but Appel said this voluntary practice reflects a lack of oversight in the current regulatory environment.
“If there are complaints or criticisms about how HireVue’s audit results were released, I think it’s helpful to see connection with the lack of legal standards and regulatory requirements as contributing to those outcomes,” he said. “These early examples may help highlight or underline the need for an environment where there are legal and regulatory requirements that give some more momentum to the auditors.” There are growing signs that external algorithm audits may become a standard. Lawmakers in some parts of the United States have proposed legislation that would effectively create markets for algorithm auditing startups. In New York City, lawmakers have proposed mandating an annual test for hiring software that uses AI. Last fall, California voters rejected Prop 25, which would have required counties to replace cash bail systems with an algorithmic assessment. The related Senate Bill 36 requires external review of pretrial risk assessment algorithms by an independent third party. In 2019, federal lawmakers introduced the Algorithmic Accountability Act to require companies to survey and fix algorithms that result in discriminatory or unfair treatment.
However, any regulatory requirement will have to consider how to measure fairness and the influence of AI provided by a third party since few AI systems are built entirely in-house.
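To give a flavor of the quantitative slice of such an audit, here is a minimal sketch in Python of the "four-fifths" (adverse impact) selection-rate comparison long used in U.S. hiring contexts. Whether any particular auditor applies this exact metric is an assumption for illustration; as the practitioners below stress, a real audit goes well beyond a single number.

import numpy as np

def adverse_impact_ratio(selected, group):
    """Compare selection rates across groups (the 'four-fifths rule').

    selected: boolean array, True where the algorithm selected a candidate.
    group:    array of group labels, one per candidate.

    Returns each group's selection rate divided by the highest group's
    rate; values below 0.8 are a common rough red flag for disparate
    impact in hiring contexts.
    """
    rates = {g: selected[group == g].mean() for g in np.unique(group)}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

selected = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=bool)
group = np.array(["a", "a", "a", "b", "b", "b", "b", "b"])
print(adverse_impact_ratio(selected, group))  # group "b" is selected at ~0.6 of group "a"'s rate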
Rumman Chowdhury is CEO of Parity, a company she created a few months ago after leaving her position as a global lead for responsible AI at Accenture. She believes such regulation should take into consideration the fact that use cases can range greatly from industry to industry. She also believes legislation should address intellectual property claims from AI startups that do not want to share training data or code, a concern such startups often raise in legal proceedings.
“I think the challenge here is balancing transparency with the very real and tangible need for companies to protect their IP and what they’re building,” she said. “It’s unfair to say companies should have to share all their data and their models because they do have IP that they’re building, and you could be auditing a startup.”
Maintain independence and grow public trust
To avoid co-opting the algorithm auditing startup space, Chowdhury said it will be essential to establish common professional standards through groups like the IEEE or government regulation. Any enforcement or standards could also include a government mandate that auditors receive some form of training or certification, she said.
Appel suggested that another way to enhance public trustworthiness and broaden the community of stakeholders impacted by technology is to mandate a public comment period for algorithms. Such periods are commonly invoked ahead of law or policy proposals or civic efforts like proposed building projects.
Other governments have begun implementing measures to increase public trust in algorithms. The cities of Amsterdam and Helsinki created algorithm registries in late 2020 to give local residents the name of the person and city department in charge of deploying a particular algorithm and provide feedback.
Define audits and algorithms
A language model with billions of parameters is different from a simpler algorithmic decision-making system made with no qualitative model. Definitions of algorithms may be necessary to help define what an audit should contain, as well as to help companies understand what an audit should accomplish.
“I do think regulation and standards do need to be quite clear on what is expected of an audit, what it should accomplish so that companies can say ‘This is what an audit cannot do and this is what it can do.’ It helps to manage expectations I think,” Chowdhury said.
A culture change for humans working with machines
Last month, a cadre of AI researchers called for a culture change in computer vision and NLP communities.
A paper they published considers the implications of a culture shift for data scientists within companies. The researchers’ suggestions include improvements in data documentation practices and audit trails through documentation, procedures, and processes.
Chowdhury also suggested people in the AI industry seek to learn from structural problems other industries have already faced.
Examples of this include the recently launched AI Incidents database, which borrows an approach used in aviation and computer security. Created by the Partnership on AI, the database is a collaborative effort to document instances in which AI systems fail. Others have suggested that the AI industry incentivize finding bias in networks the way the security industry does with bug bounties.
“I think it’s really interesting to look at things like bug bounties and incident reporting databases because it enables companies to be very public about the flaws in their systems in a way where we’re all working on fixing them instead of pointing fingers at them because it has been wrong,” she said. “I think the way to make that successful is an audit that can’t happen after the fact — it would have to happen before something is released.”
Don’t consider an audit a cure-all
As ORCAA’s audit of a HireVue use case shows, an audit’s disclosure can be limited and does not necessarily ensure AI systems are free from bias.
Chowdhury said a disconnect she commonly encounters with clients is an expectation that an audit will only consider code or data analysis. She said audits can also focus on specific use cases, like collecting input from marginalized communities, risk management, or critical examination of company culture.
“I do think there is an idealistic idea of what an audit is going to accomplish. An audit’s just a report. It’s not going to fix everything, and it’s not going to even identify all the problems,” she said.
Bnh.ai managing director Andrew Burt said clients tend to view audits as a panacea rather than part of a continuing process to monitor how algorithms perform in practice.
“One-time audits are helpful but only to a point, due to the way that AI is implemented in practice. The underlying data changes, the models themselves can change, and the same models are frequently used for secondary purposes, all of which require periodic review,” Burt said.
Consider risk beyond what’s legal
Audits to ensure compliance with government regulation may not be sufficient to catch potentially costly risks. An audit might keep a company out of court, but that’s not always the same thing as keeping up with evolving ethical standards or managing the risk unethical or irresponsible actions pose to a company’s bottom line.
“I think there should be some aspect of algorithmic audit that is not just about compliance, and it’s about ethical and responsible use, which by the way is an aspect of risk management, like reputational risk is a consideration. You can absolutely do something legal that everyone thinks is terrible,” Chowdhury said. “There’s an aspect of algorithmic audit that should include what is the impact on society as it relates to the reputational impact on your company, and that has nothing to do with the law actually. It’s actually what else above and beyond the law?”
Final thoughts
In today’s environment for algorithm auditing startups, Chowdhury said she worries companies savvy enough to understand the policy implications of inaction may attempt to co-opt the auditing process and steal the narrative. She’s also concerned that startups pressured to grow revenue may cosign less than robust audits.
“As much as I would love to believe everyone is a good actor, everyone is not a good actor, and there’s certainly grift to be done by essentially offering ethics washing to companies under the guise of algorithmic auditing,” she said. “Because it’s a bit of a Wild West territory when it comes to what it means to do an audit, it’s anyone’s game. And unfortunately, when it’s anyone’s game and the other actor is not incentivized to perform to the highest standard, we’re going to go down to the lowest denominator is my fear.” Top Biden administration officials from the FTC, Department of Justice, and White House Office of Science and Technology have all signaled plans to increase regulation of AI, and a Democratic Congress could tackle a range of tech policy issues.
Internal audit frameworks and risk assessments are also options. The OECD and Data & Society are currently developing risk assessment classification tools businesses can use to identify whether an algorithm should be considered high or low risk.
But algorithm auditing startups are different from other AI startups in that they need to seek approval from an independent arbiter and to some degree the general public. To ensure their success, people behind algorithm auditing startups, like those I spoke with, increasingly suggest stronger industrywide regulation and standards.
"
|
3,940 | 2,021 |
"Pinecone emerges from stealth with $10 million to accelerate AI workloads | VentureBeat"
|
"https://venturebeat.com/ai/pinecone-emerges-from-stealth-with-10-million-to-accelerate-ai-workloads"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Pinecone emerges from stealth with $10 million to accelerate AI workloads Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Pinecone Systems, a company developing a database platform for AI workloads, today emerged from stealth with $10 million in funding. A company spokesperson told VentureBeat the funds will be put toward customer acquisition and product R&D as Pinecone’s platform launches for self-onboarding.
AI models take data, whether images, sentences, or user behavior, and convert them into complex collections of numbers called vectors. Using a machine learning model is often a question of finding out which vectors are nearest or most similar to others, but existing ways of doing this can be slow and inaccurate.
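To make that operation concrete, here is a minimal brute-force sketch of vector similarity search in Python with NumPy. It illustrates the general technique only; it is not Pinecone's API, and the cosine-similarity scoring and array names are assumptions for the example.

import numpy as np

def top_k_similar(query, index_vectors, k=5):
    """Return indices of the k stored vectors most similar to the query.

    Uses cosine similarity (dot product of L2-normalized vectors).
    This brute-force scan is O(n * d) per query; at large scale,
    vector databases typically switch to approximate nearest-neighbor
    indexes to keep latency low.
    """
    q = query / np.linalg.norm(query)
    m = index_vectors / np.linalg.norm(index_vectors, axis=1, keepdims=True)
    scores = m @ q                    # cosine similarity of every row vs. the query
    return np.argsort(-scores)[:k]    # indices of the highest-scoring vectors

index = np.random.rand(10_000, 128)   # 10,000 embeddings of dimension 128
query = np.random.rand(128)
print(top_k_similar(query, index, k=3))

At the billion-vector scale Pinecone targets, a linear scan like this becomes impractical, which is why dedicated vector databases typically build approximate nearest-neighbor indexes instead.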
Pinecone aims to make AI queries faster, more accurate, cheaper, and easier to use with fully managed serverless vector database technology. Founded by developers from Facebook and Google — along with CEO Edo Liberty, who ran Yahoo’s Scalable Machine Learning Platforms group and later Amazon AI Labs — Pinecone’s database supports deployments of personalization, semantic text search, image retrieval, data fusion, deduplication, recommendation, anomaly detection, and other real-time applications.
Developers can use Pinecone to configure and launch services with custom machine learning models, transformations, and rankers. They’re also able to add or update data in batches or streams; data is turned into vectors and the index is updated in real time. Moreover, they can monitor and track deployments for operational health and support queries to find similar or top-ranking items.
Liberty claims Pinecone, which connects to third-party data sources, can scale to billions of high-dimensional vectors while keeping latencies below 50 milliseconds for queries, updates, and embeddings with horizontal container distribution. Pinecone ostensibly has a real-time indexing speed 30 times higher than open source libraries.
“One of the largest retailers in the world reports using Pinecone to serve real-time shopping recommendations based on their own deep learning models. They saw an immediate 18.5% lift in revenue per recommendation compared with their previous solution,” a Pinecone spokesperson told VentureBeat. “Pinecone’s database is 100% serverless and API-driven, which means customers always have the computing resources they need, when they need them, without having to worry about infrastructure or maintenance.” Wing Venture Capital led the seed investment in Sunnyvale, California-based Pinecone that was announced today.
“The modern enterprise is built on data and powered by AI,” Peter Wagner, a founding partner at Wing Venture Capital and the newest member of Pinecone’s board of directors, said in a statement. “The data cloud has emerged as its foundation with the ascendance of Snowflake. Pinecone is poised to unleash data teams and their ML-based applications in a similar fashion.”
"
|
3,941 | 2,021 |
"Here's where AI will advance in 2021 | VentureBeat"
|
"https://venturebeat.com/ai/heres-where-ai-will-advance-in-2021"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Here’s where AI will advance in 2021 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Artificial intelligence continues to advance at a rapid pace. Even in 2020, a year that did not lack compelling news, AI advances commanded mainstream attention on multiple occasions. OpenAI’s GPT-3, in particular, showed new and surprising ways we may soon be seeing AI penetrate daily life. Such rapid progress makes prediction about the future of AI somewhat difficult, but some areas do seem ripe for breakthroughs. Here are a few areas in AI that we feel particularly optimistic about in 2021.
Transformers
Two of 2020’s biggest AI achievements quietly shared the same underlying AI structure. Both OpenAI’s GPT-3 and DeepMind’s AlphaFold are based on a sequence processing model called the Transformer.
Although Transformer structures have been around since 2017, GPT-3 and AlphaFold demonstrated the Transformer’s remarkable ability to learn more deeply and quickly than the previous generation of sequence models, and to perform well on problems outside of natural language processing.
Unlike prior sequence modeling structures such as recurrent neural networks and LSTMs, Transformers depart from the paradigm of processing data sequentially. They process the whole input sequence at once, using a mechanism called attention to learn what parts of the input are relevant in relation to other parts. This allows Transformers to easily relate distant parts of the input sequence, a task that recurrent models have famously struggled with.
It also allows significant parts of the training to be done in parallel, better leveraging the massively parallel hardware that has become available in recent years and greatly reducing training time. Researchers will undoubtedly be looking for new places to apply this promising structure in 2021, and there’s good reason to expect positive results. In fact, in 2021 OpenAI has already modified GPT-3 to generate images from text descriptions.
The Transformer looks ready to dominate 2021.
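For readers who want to see the attention mechanism described above in code, here is a stripped-down, single-head scaled dot-product self-attention sketch in Python with NumPy. Production Transformers add learned query, key, and value projections, multiple heads, and masking; this version is illustrative only.

import numpy as np

def self_attention(x):
    """Single-head scaled dot-product self-attention over one sequence.

    x: array of shape (seq_len, d), one embedding per token. Every
    position attends to every other position at once, which is what
    lets Transformers relate distant parts of the input directly and
    process the whole sequence in parallel.
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                             # pairwise relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ x                                        # weighted mix of positions

tokens = np.random.rand(6, 8)        # toy sequence: 6 tokens, dimension 8
print(self_attention(tokens).shape)  # (6, 8)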
Graph neural networks
Many domains have data that naturally lend themselves to graph structures: computer networks, social networks, molecules/proteins, and transportation routes are just a few examples.
Graph neural networks (GNNs) enable the application of deep learning to graph-structured data, and we expect GNNs to become an increasingly important AI method in the future. More specifically, in 2021, we expect that methodological advances in a few key areas will drive broader adoption of GNNs.
Dynamic graphs are the first area of importance. While most GNN research to date has assumed a static, unchanging graph, the scenarios above necessarily involve changes over time: For example, in social networks, members join (new nodes) and friendships change (different edges). In 2020, we saw some efforts to model time-evolving graphs as a series of snapshots, but 2021 will extend this nascent research direction with a focus on approaches that model a dynamic graph as a continuous time series.
Such continuous modeling should enable GNNs to discover and learn from temporal structure in graphs in addition to the usual topological structure.
Improvements to the message-passing paradigm will be another enabling advancement. A common method of implementing graph neural networks, message passing is a means of aggregating information about nodes by “passing” information along the edges that connect neighbors. Although intuitive, message passing struggles to capture effects that require information to propagate across long distances on a graph. Next year, we expect breakthroughs to move beyond this paradigm, such as by iteratively learning which information propagation pathways are the most relevant or even learning an entirely novel causal graph on a relational dataset.
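As a rough sketch of the message-passing idea just described, the following Python snippet runs one round of neighbor aggregation on a tiny graph. The mean aggregation, ReLU, and single weight matrix are illustrative assumptions rather than the design of any particular GNN library.

import numpy as np

def message_passing_step(adj, features, weight):
    """One round of message passing on a graph.

    adj:      (n, n) adjacency matrix; adj[i, j] = 1 if node j neighbors node i.
    features: (n, d) feature vector per node.
    weight:   (d, d_out) shared transform applied after aggregation.

    Each node averages its neighbors' features (the "messages" passed
    along edges), then applies a shared linear transform and a ReLU.
    Information travels one hop per step, which is why long-range
    effects are hard for plain message passing to capture.
    """
    degree = adj.sum(axis=1, keepdims=True).clip(min=1)   # avoid divide-by-zero
    messages = (adj @ features) / degree                  # mean over neighbors
    return np.maximum(messages @ weight, 0.0)

adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
features = np.random.rand(3, 4)
weight = np.random.rand(4, 4)
print(message_passing_step(adj, features, weight).shape)  # (3, 4)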
Applications
Many of last year’s top stories highlighted nascent advances in practical applications of AI, and 2021 looks poised to capitalize on these advances. Applications that depend on natural language understanding, in particular, are likely to see advances as access to the GPT-3 API becomes more available. The API allows users to access GPT-3’s abilities without requiring them to train their own AI, an otherwise expensive endeavor. With Microsoft’s purchase of the GPT-3 license, we may also see the technology appear in Microsoft products.
Other application areas also appear likely to benefit substantially from AI technology in 2021. AI and machine learning (ML) have rapidly expanded into the cybersecurity space, and 2021 shows potential to push that trajectory even steeper. As highlighted by the SolarWinds breach, companies are coming to terms with impending threats from cybercriminals and nation-state actors and the constantly evolving configurations of malware and ransomware. In 2021, we expect an aggressive push of advanced behavioral analytics AI for augmenting network defense systems. AI and behavioral analytics are critical to help identify new threats, including variants of earlier threats.
We also expect an uptick in applications defaulting to running machine learning models on edge devices in 2021. Devices like Google’s Coral, which features an onboard tensor processing unit (TPU), are bound to become more widespread with advancements in processing power and quantization technologies. Edge AI eliminates the need to send data to the cloud for inference, saving bandwidth and reducing execution time, both of which are critical in fields such as health care. Edge computing may also open new applications in other areas that require privacy, security, or low latency, and in regions of the world that lack access to high-speed internet.
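To illustrate the quantization idea mentioned above, here is a minimal sketch of 8-bit affine weight quantization in Python with NumPy. Real edge toolchains handle calibration, per-channel scales, and quantized arithmetic far more carefully, so treat this scheme, and the names in it, as illustrative assumptions.

import numpy as np

def quantize_int8(w):
    """Map float weights onto int8 with an affine (scale + zero point) scheme."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 or 1.0              # guard against constant weights
    zero_point = np.round(-lo / scale) - 128.0    # so that w == lo maps to -128
    q = np.clip(np.round(w / scale + zero_point), -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the int8 representation."""
    return (q.astype(np.float32) - zero_point) * scale

weights = np.random.randn(4, 4).astype(np.float32)
q, scale, zp = quantize_int8(weights)
print(np.abs(weights - dequantize(q, scale, zp)).max())  # small reconstruction error

Storing weights as int8 rather than float32 cuts model size roughly fourfold, which is one reason quantization pairs naturally with constrained edge hardware.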
The bottom line
AI technology continues to proliferate in practical domains, and advances in Transformer structures and GNNs are likely to spur advances in domains that haven’t yet readily lent themselves to existing AI techniques and algorithms. We’ve highlighted here several areas that seem ready for advancement this year, but there will undoubtedly be surprises as the year unfolds. Predictions are hard, especially about the future, as the saying goes, but right or wrong, 2021 looks to be an exciting year for the field of AI.
Ben Wiener is a data scientist at Vectra AI and has a PhD in physics and a variety of skills in related topics including computer modeling, optimization, machine learning, and robotics.
Daniel Hannah is a data scientist and researcher with more than 8 years of experience turning messy data into actionable insights. At Vectra AI, he works at the interface of artificial intelligence and network security. Previously, he applied machine learning approaches to anomaly detection as a fellow at Insight Data Science.
Allan Ogwang is a data scientist at Vectra AI with a strong math background and experience in econometrics, statistical modeling, and machine learning.
Christopher Thissen is a data scientist at Vectra AI, where he uses machine learning to detect malicious cyber behaviors. Before joining Vectra, Chris led several DARPA-funded machine learning research projects at Boston Fusion Corporation.
"
|
3,942 | 2,021 |
"Google's updated Voice Access leverages AI to detect in-app icons | VentureBeat"
|
"https://venturebeat.com/ai/googles-updated-voice-access-leverages-ai-to-detect-in-app-icons"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google’s updated Voice Access leverages AI to detect in-app icons Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Google today launched an updated version of Voice Access, its service that enables users to control Android devices using voice commands. It leverages a machine learning model to automatically detect icons on the screen based on UI screenshots, enabling it to determine whether elements like images and icons have accessibility labels, or labels provided to Android’s accessibility services.
Accessibility labels allow Android’s accessibility services to refer to exactly one on-screen element at a time, letting users know when they’ve cycled through the UI. Unfortunately, some elements lack labels, a challenge the new version of Voice Access aims to address.
A vision-based object detection model called IconNet in the new Voice Access (version 5.0) can detect 31 different icon types, soon to be extended to more than 70 types. As Google explains in a blog post, IconNet is based on the novel CenterNet architecture, which extracts app icons from input images and then predicts their locations and sizes. Using Voice Access, users can refer to icons detected by IconNet by their names, e.g., “Tap ‘menu’.” To train IconNet, Google engineers collected and labeled more than 700,000 app screenshots, streamlining the process by using heuristics, auxiliary models, and data augmentation techniques to identify rarer icons and enrich existing screenshots with infrequent icons. “IconNet is optimized to run on-device for mobile environments, with a compact size and fast inference time to enable a seamless user experience,” Google Research software engineers Gilles Baechler and Srinivas Sunkara wrote in their blog post.
Google says that in the future, it plans to expand the range of elements supported by IconNet to generic images, text, and buttons. It also plans to extend IconNet to differentiate between similar-looking icons by identifying their functionality. Meanwhile, on the developer side, Google hopes to increase the number of apps with valid content descriptions by improving tools to suggest content descriptions for different elements when building applications.
Above: IconNet analyzes the pixels of the screen and identifies the centers of icons by generating heatmaps, which provide precise information about the position and type of each icon present on the screen.
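As a rough illustration of how centers can be read out of such heatmaps, the sketch below (Python with NumPy) finds local peaks above a confidence threshold, a decoding step in the spirit of CenterNet-style detectors. IconNet's exact procedure is not public, so the 3x3 pooling and threshold here are assumptions.

import numpy as np

def decode_centers(heatmap, threshold=0.5):
    """Return (row, col) coordinates of confident local peaks in a heatmap.

    A pixel counts as an icon center if it is a strict local maximum in
    its 3x3 neighborhood and its confidence exceeds the threshold.
    """
    h, w = heatmap.shape
    padded = np.pad(heatmap, 1, constant_values=-np.inf)
    # Maximum over the 8 neighbors of every pixel.
    neighbors = np.stack([
        padded[r:r + h, c:c + w]
        for r in range(3) for c in range(3)
        if not (r == 1 and c == 1)
    ]).max(axis=0)
    peaks = (heatmap > neighbors) & (heatmap > threshold)
    return [(int(r), int(c)) for r, c in zip(*np.nonzero(peaks))]

heatmap = np.zeros((8, 8))
heatmap[2, 3] = 0.9   # two synthetic icon centers
heatmap[6, 6] = 0.7
print(decode_centers(heatmap))  # [(2, 3), (6, 6)]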
“A significant challenge in the development of an on-device UI element detector for Voice Access is that it must be able to run on a wide variety of phones with a range of performance capabilities, while preserving the user’s privacy,” the authors wrote. “We are constantly working on improving IconNet.” Voice Access, which launched in beta in 2016, dovetails with Google’s other mobile accessibility efforts. The company is continuing to develop Lookout, an accessibility-focused app that can identify packaged foods using computer vision, scan documents to make it easier to review letters and mail, and more. There’s also Project Euphonia, which aims to help people with speech impairments communicate more easily; Live Relay, which uses on-device speech recognition and text-to-speech to let phones listen and speak on a person’s behalf; and Project Diva, which helps people give the Google Assistant commands without using their voice.
"
|
3,943 | 2,021 |
"Forget user experience. AI must focus on 'citizen experience' | VentureBeat"
|
"https://venturebeat.com/ai/forget-user-experience-ai-must-focus-on-citizen-experience"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Forget user experience. AI must focus on ‘citizen experience’ Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Tech giants and their AI-powered digital platforms and solutions can affect the destinies of world leaders, nation states, multinational corporations, global stock markets, and individuals alike.
The creators of major digital platforms as well as the designers and developers of ubiquitous AI systems treat individuals as mere users, customers, or data points, oftentimes completely ignoring the individual’s role and rights as a citizen.
As a result, the individual users and customers are removed from the societal context with appalling consequences. The individual can unconsciously become a misinformation-spreading user. A misinformed customer can turn into a violent insurgent. Or she can be treated unfairly by a biased AI system while applying for a job or updating her insurance policy.
Why citizen experience design?
Now, as the societal impact of AI solutions is becoming obvious, their effects, as well as their design and development principles, need to be considered from the point of view of citizens and society.
Already today, we’re seeing amazing work done in uncovering the effects of biased AI systems and their impact on various fields from healthcare to scientific research and from criminal justice to financial services. Simultaneously, we’ve been witnessing positive developments around data rights and practices.
But the regulation of tech companies, such as GDPR or data governance initiatives, isn’t enough. Similarly, the emerging field of algorithmic auditing doesn’t yet have sufficient means to directly affect AI development and its practices. Neither are the current AI ethics boards changing the course of AI development on a larger scale fast enough.
The most effective and sustainable impact on the field of AI will only be achieved by ensuring that the design and development of AI solutions is concretely guided by citizen-centric values and principles.
Previously, “citizen experience” has been seen as belonging solely to the field of public service. But no more. Today, we need a thoughtful citizen-centric approach that belongs to the general toolbox of every AI designer and developer.
Concretely, we need AI companies, data scientists, and designers that think and act citizen-first. We need citizen experience experts that bring the societal understanding into the core of product thinking, design, and development.
How to start thinking about citizen experience design for AI
So how can we create a basis for a sustainable practice of citizen experience design that truly aims to create AI solutions that take into account the individual as a citizen, belonging to a wider fabric of society? First, citizen experience design needs to be a multidisciplinary effort, bringing together social sciences, data science, and design.
Data literacy and algorithm literacy are required for citizen experience design, i.e. concretely understanding the pros and cons of different data, and being able to assess the applications as well as potential effects of different algorithms. And this literacy can only be achieved by a multidisciplinary approach.
Second, citizen experience design should help designers and developers to think of individuals as user-citizens and consumer-citizens.
Citizen experience design should provide concrete tools for considering individuals as real people living in a real world, thus allowing a company to assess its product decisions in a wider societal context. Such tools would enable deeper user experience and customer experience design and data science practices.
Third, citizen experience design should affect all the elements of product design and development, from use cases and goal setting to applied metrics and user interface design, and from data pipelines and selection of AI technologies to user research and analytics.
And fourth, the principles of citizen experience design must be created together with citizens.
The co-creative practice will surface new insights that bring the citizen concretely into the center of things as an active force.
The founding principles of citizen experience design
It all starts with this: AI practitioners acknowledge the individual’s status as a full-fledged citizen and treat and respect her accordingly. AI solutions are never considered in the vacuum of a single product or platform.
Here are some concrete suggestions for further iteration: AI systems should be designed and developed to guard the rights of the citizen.
Algorithms are created in a responsible and transparent way. The AI system doesn’t treat citizens unfairly or endanger their immunity or integrity based on who they are.
A citizen’s data is handled and processed safely and responsibly.
Personal data is not collected unnecessarily or used without explicit consent. Likewise, the citizen has to be made aware when she is interacting with, or being affected by, an AI system. An AI system should never try to fool or manipulate the citizen, for example by presenting itself as a human being or by optimizing a recommendation system for unhealthy addictive behavior.
When the citizen perspective is taken into account from the start, personal control of data must be thought of as a fundamental feature of any digital product. As the citizen’s rights to privacy are at the center of things, unnecessary AI-powered surveillance systems are out of the question.
AI systems should not promote behavior that violates any existing laws or the rights of other citizens.
AI systems must respect the existing legislation and good manners. In short, AI designers and developers, or their AI solutions, do not decide independently what’s good, fair, or acceptable or what is lawful.
Citizen experience design empowers practitioners to proactively consider their solutions in the societal context through continuous dialogue with experts from different fields. AI solutions — recognized as socio-technological systems that are seamlessly intertwined with society — are continuously monitored, assessed, audited and iterated to mitigate potential problems or conflicts of interest early on.
AI systems should allow people to educate themselves about the use and effects of AI.
When individuals are treated as full-fledged citizens, they also have to be held accountable for their use of AI solutions. For this, new citizenship skills are needed, including adequate data literacy, algorithmic literacy and digital media literacy. This requires effort both from citizens and AI practitioners. For example, the citizen could observe her data trails or exposure to algorithmic systems in an accessible manner.
Such educational transparency helps people to understand the motives and incentives of AI systems and their creators, building trust between citizens and AI developers. A citizen-centric and societally aware design informs citizens and empowers behavior and safety mechanisms that, for example, make harmful information operations easier to detect, mitigate, and even prevent.
A founding principle
Ideally, citizen experience design for AI should be a founding principle that concretely guides the design and development of AI solutions, not something that is used to assess or iterate the system retrospectively.
When looking at the big picture, it’s clear that citizen experience design will create new opportunities for AI innovations because the existing products as well as future solutions can’t ignore the individual’s multifaceted role as a citizen.
The core principles of citizen experience design must be created and iterated together. Let’s start today.
Jarno M. Koponen is Head of AI and Personalization at Finnish media house Yle. He creates smart human-centered products and personalized experiences by combining UX design and AI.
"
|
3,944 | 2,021 |
"Avaya expands its alliance with Google for AI for contact centers | VentureBeat"
|
"https://venturebeat.com/ai/avaya-expands-its-alliance-with-google-for-ai-for-contact-centers"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Avaya expands its alliance with Google for AI for contact centers Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Avaya has extended the capabilities of its contact center platforms to include an enhanced version of Google Cloud Dialogflow CX. This can be employed to create virtual agents infused with AI capabilities that verbally interact with customers.
Residing on the Contact Center AI (CCAI) cloud service provided by Google, the conversational AI capabilities Avaya offers are enabled using an instance of the service dubbed Avaya AI Virtual Agent Enhanced.
In collaboration with Google, the company has optimized that offering for its enterprise customers to provide, for example, barge-in and live agent handoff capabilities, Avaya VP Eric Rossman said.
Earlier this week, Google also announced the general availability of its Dialogflow service within the Google CCAI platform.
While Avaya has a long-standing alliance with Google, the CCAI service is only one of several AI platforms Avaya has integrated into its contact center platforms, Rossman said. In some cases, those services are complementary to each other. In other cases, the end customer prefers one AI service to another, Rossman said. But he added that in all cases, organizations are trying to move beyond the simple bots that are now widely employed across websites.
He said that regardless of the AI platform selected, Avaya is dedicating engineering resources to optimizing those platforms and building its own AI models to automate a wide range of processes. Avaya machine learning algorithms, for example, can be applied to Google Cloud CCAI to determine the next best action for an agent. Google Cloud Insights, combined with Avaya AI, uses natural language to identify call patterns and generate sentiment analysis.
Avaya AI Virtual Agent Enhanced is being embedded within the Avaya OneCloud CCaaS and OneCloud CPaaS offerings. The latter is a platform-as-a-service (PaaS) environment for building applications on top of the core contact center-as-a-service (CCaaS). Those offerings can be deployed on a public cloud, a private cloud, or across a hybrid cloud as IT organizations see fit. Overall, Avaya claims that more than 16 million agents currently access its contact center platforms.
Interest in AI-enabled virtual agents that could be employed to augment customer service spiked in the wake of the COVID-19 pandemic, Rossman said. With more people working from home, the number of service and support calls made to organizations increased dramatically, he added. At the same time, most customer service representatives were also working from home. Virtual agents enabled by AI provide a means to offload many of those calls. “The supply of agents was limited,” Rossman noted.
Of course, the use cases for a virtual agent with speech capabilities need to be carefully considered, Rossman said. He said one of the things that distinguishes Avaya is that it offers a professional services team to work with the end customers on where and how to employ virtual agents.
As AI continues to evolve, organizations will need to make a classic “build versus buy” decision. Google, IBM, Microsoft, and Amazon Web Services (AWS) are all making available AI services that can be consumed via an application programming interface (API). Alternatively, some organizations will decide to invest in building their own AI models to automate a specific task. In the case of virtual agents, Avaya is trying to strike a balance between the two approaches, depending on the use case.
Naturally, not every end customer will want to engage with a virtual agent any more than they did with an interactive voice response (IVR) system. However, for every customer who prefers to speak to a human, there is another who would just as soon have their issue resolved without having to wait for a customer service representative. In many cases, an interaction with a virtual agent may lead to engagement with a human representative who has been informed of the issue. The younger the customer, the more willing they tend to be to rely on a virtual agent, but there are never any absolutes when it comes to customer service.
"
|
3,945 | 2,021 |
"AI holds the key to even better AI | VentureBeat"
|
"https://venturebeat.com/ai/ai-holds-the-key-to-even-better-ai"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest AI holds the key to even better AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
For all the talk about how artificial intelligence technology is transforming entire industries, the reality is that most businesses struggle to obtain real value from AI. Some 65% of organizations that have invested in AI in recent years haven’t yet seen any tangible gains from those investments, according to a 2019 survey conducted by MIT Sloan Management Review and the Boston Consulting Group. And a quarter of businesses implementing AI projects see at least 50% of those projects fail, with “lack of skilled staff” and “unrealistic expectations” among the top reasons for failure, per research from IDC.
A major factor behind these struggles is the high algorithmic complexity of deep learning models. Algorithmic complexity refers to the computational complexity of building and running these models in production. Faced with prolonged development cycles, high computing costs, unsatisfying inference performance, and other challenges, developers often find themselves stuck in the development stage of AI adoption, attempting to perfect deep learning models through manual trial-and-error, and nowhere near the production stage. Alternatively, data scientists rely on facsimiles of other models, which ultimately prove to be poor fits for their unique business problems.
If human-developed algorithms inevitably run up against barriers of cost, time, manpower, and business fit, how can the AI industry break those barriers? The answer lies in algorithms that are designed by algorithms – a phenomenon that has been confined to academia to date but which will open up groundbreaking applications across industries when it is commercialized in the coming years.
This new approach will enable data scientists to focus on what they do best – interpreting and extracting insights from data. Automating complex processes in the AI lifecycle will also make the benefits of AI more accessible, meaning it will be easier for organizations that lack large tech budgets and development staff to tap into the technology’s true transformative power.
More of an art than a science
Because the task of creating effective deep learning models has become too much of a challenge for humans to tackle alone, organizations clearly need a more efficient approach.
With data scientists regularly bogged down by deep learning’s algorithmic complexity, development teams have struggled to design solutions and have been forced to manually tweak and optimize models – an inefficient process that often comes at the expense of a product’s performance or quality. Moreover, manually designing such models prolongs a product’s time-to-market exponentially.
Does that mean that the only solution is fully autonomous deep learning models that build themselves? Not necessarily.
Consider automotive technology. The popular dichotomy between fully autonomous and fully manual driving is far too simplistic. Indeed, this black-and-white framing obscures a great deal of the progress that automakers have made in introducing greater levels of autonomous technology. That’s why automotive industry insiders speak of different levels of autonomy – ranging from Level 1 (which includes driver assistance technology) to Level 5 (fully self-driving cars, which remain a far-off prospect). It is plausible that our cars can become much more advanced without needing to achieve full autonomy in the process.
The AI world can (and should) develop a similar mindset. AI practitioners require technologies that automate cumbersome processes involved in designing a deep learning model. Similar to how Advanced Driver Assistance Systems (ADAS) such as automatic braking and adaptive cruise control are paving the way toward greater autonomy in the automotive industry, the AI industry needs its own technology to do the same. And it’s AI that holds the key to help us get there.
AI building better AI
Encouragingly, AI is already being leveraged to simplify other tech-related tasks, like writing and reviewing code (which itself is built by AI). The next phase of the deep learning revolution will involve similar complementary tools. Over the next five years, expect to see such capabilities slowly become available commercially to the public.
So far, research on how to develop these superior AI capabilities has remained constrained to advanced academic institutes and, unsurprisingly, the largest names in tech. Google’s pioneering work on neural architecture search (NAS) is a key example. Described by Google CEO Sundar Pichai as a way for “neural nets to design neural nets,” NAS — an approach that began attracting notice in 2017 — involves algorithms searching among thousands of available models, a process that culminates in an algorithm suited to the particular problem at hand.
For now, NAS is a new technology that hasn’t been widely introduced commercially. Since its inception, researchers have been able to shorten runtimes and decrease the amount of compute resources needed to run NAS algorithms. But these algorithms are still not generalizable among different problems and datasets — let alone ready for commercial use — because the architecture search space must be manually tweaked for every new problem, an approach that is far from scalable.
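At its core, NAS is a search loop: sample a candidate architecture from a defined space, score it, and keep the best. A toy random-search sketch of that loop follows; the search space, the scoring stub, and the trial budget are illustrative stand-ins, while production NAS systems use learned controllers or gradient-based methods over vastly larger spaces:

```python
# Toy illustration of the NAS loop: sample architectures from a search
# space, score each candidate, keep the best. Real NAS replaces the
# random sampler and the scoring stub with far more sophisticated parts.
import random

SEARCH_SPACE = {
    "num_layers": [1, 2, 3, 4],
    "hidden_units": [32, 64, 128, 256],
    "activation": ["relu", "tanh"],
}

def sample_architecture() -> dict:
    return {key: random.choice(values) for key, values in SEARCH_SPACE.items()}

def evaluate(architecture: dict) -> float:
    # Stub: a real implementation would build the model described by
    # `architecture`, train it briefly, and return validation accuracy.
    return random.random()

def random_search(trials: int = 20):
    best_architecture, best_score = None, float("-inf")
    for _ in range(trials):
        candidate = sample_architecture()
        score = evaluate(candidate)
        if score > best_score:
            best_architecture, best_score = candidate, score
    return best_architecture, best_score

print(random_search())
```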
Most research in the field has been carried out by tech giants like Google and Facebook, as well as academic institutes like Stanford, where researchers have hailed emerging autonomous methods as a “promising avenue” for driving AI progress.
But with innovative AI developers building on the work that’s already been done in this field, the exclusivity of technology like NAS is set to give way to greater accessibility as the concept becomes more scalable and affordable in the coming years. The result? AI that builds AI, thus unleashing its true potential to solve the world’s most complex problems.
As the world looks toward 2021, this is an area ripe for innovation – and that innovation will only beget further innovation.
Yonatan Geifman is CEO and co-founder at Deci.
"
|
3,946 | 2,021 |
"LongTailPro can supercharge all of your SEO efforts for only $40 | VentureBeat"
|
"https://venturebeat.com/uncategorized/longtailpro-can-supercharge-all-of-your-seo-efforts-for-only-40"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages LongTailPro can supercharge all of your SEO efforts for only $40 Share on Facebook Share on X Share on LinkedIn Long tail keywords are usually phrases of three words or more — and they’re a lot more common in the search world than you might think. In fact, as much as 70 percent of all Google search traffic is a long tail keyword search.
Over 90 percent of long tail keywords get 10 or fewer searches per month, which means if you can find the right path to identifying some of the prime seed words in those long tails, you can rank highly for literally hundreds of terms, all based on one word.
LongTailPro is a nifty keyword suggestion tool that can help a business or manager find those skeleton key words that can unlock a bounty of new web traffic for your product. It’s all part of an all-in-one package to help find your potential audience online and turbo-charge an organization’s SEO efforts.
Once you look under the hood, it becomes clear that LongTailPro isn’t really one SEO tool — it’s actually five SEO tools. First, there’s a tried and true Keyword Research Tool to help find the best keywords for your projects. It’ll help users find head terms that can extend into up to 400 long-tail keywords, group and organize those keywords, and find keywords that worked for competitors. Meanwhile, the Rank Tracker keeps users updated daily on the progress of their chosen keywords with advanced metrics that can help fine tune all your SEO campaigns.
There’s also a Site Audit feature to manage your site health, find SEO mistakes, offer optimization ideas, track site speed, find internal links, and more. The SERP Analysis Tool expands your strategy options to find up to 200 manual keywords, analyze competitor terms, and analyze their backlinks. And speaking of backlinks, the Backlink Analysis goes even deeper into that area, including tracking on all the domains and page level metrics of all your backlinks, fixing broken site links, and spotting new backlink opportunities as soon as they appear.
For a limited time, a LongTailPro subscription that’s good for finding up to 10,000 keywords (a nearly $600 value) is now also part of our second Cyber Week sale. By using the code CYBER20, shoppers can take an extra 20 percent off the total price and get their LongTailPro access for just $39.99.
VentureBeat Deals is a partnership between VentureBeat and StackCommerce. This post does not constitute editorial endorsement. If you have any questions about the products you see here or previous purchases, please contact StackCommerce support here.
Prices subject to change.
"
|
3,947 | 2,021 |
"Increase your chances for business success with insights from winning entrepreneurs | VentureBeat"
|
"https://venturebeat.com/uncategorized/increase-your-chances-for-business-success-with-insights-from-winning-entrepreneurs"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Increase your chances for business success with insights from winning entrepreneurs Share on Facebook Share on X Share on LinkedIn Undoubtedly you have heard the staggering statistics before. About 1 in 5 start-up businesses fail within the first year. Only about 25 percent of them make it past the 15-year mark. They, of course, don’t set out to fail. It seems, however, that often the dreams are bigger than the reality, and sometimes they just, as the saying goes, bite off more than they can chew. Perhaps with a little more guidance, they could have gone farther on the road to success. It’s too late for them, but not for you.
Starter Story Premium could be just the guide you are looking for to give you the right tools and advice in starting your own business.
Most often, businesses fail not because of what we do as new entrepreneurs, but because of what we don’t do. If we don’t have a clear vision as to where we want to take our business, there is no path to follow. If we try to be all things to all people, we set ourselves up for disappointment. If we lack a business plan or a marketing plan, we will lose out on what is needed to succeed, potentially missing the mark on both short- and long-term objectives. All this to say that there is a lot more that goes into ensuring your business thrives than just a great idea.
Fortunately, several entrepreneurs have gone before you, run successful businesses, and are more than happy to share their tips and tricks. This Starter Story Premium Plan gives you unlimited access to over 3,500 founder case studies, which you can scrutinize, dissect, and analyze at your leisure. Winning entrepreneurs will give you insights as to how they got started, how they grew, and how they run their businesses today. With a comprehensive database of more than 200 marketing tactics, you can pick the ones that suit your budget and style and start growing your business. And if you don’t know exactly what you want your business to be, although you do know you want to go down the entrepreneur path, Starter Story offers over 5,000 business ideas and data points.
This review sums it up nicely: “This is an awesome database of startup ideas, founder interviews, and a whole lot more. Definitely worth a purchase if you have hopes of starting your own business and need inspiration!” Normally valued at over $1,600, a lifetime subscription to Starter Story can be yours for only $99.99.
That’s a small price to pay for a virtual mentor right inside your computer.
VentureBeat Deals is a partnership between VentureBeat and StackCommerce. This post does not constitute editorial endorsement. If you have any questions about the products you see here or previous purchases, please contact StackCommerce support here.
Prices subject to change.
"
|
3,948 | 2,021 |
"Disctopia actually pays artists for their work, and a one-year subscription is only $40 | VentureBeat"
|
"https://venturebeat.com/uncategorized/disctopia-actually-pays-artists-for-their-work-and-a-one-year-subscription-is-only-40"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Disctopia actually pays artists for their work, and a one-year subscription is only $40 Share on Facebook Share on X Share on LinkedIn Did you know that while you’re pumping away to your favorite song on your trusty Peloton bike, the company is paying the artist of that song just 3 cents? And if that sounds like literal chump change, then think about this — that 3 cents is actually on the high end of what an artist will make for themselves when someone streams their work online.
For streaming services like Tidal, Apple Music, and Spotify, artists get less than 1 cent per song. YouTube only pays less than a sixth of 1 cent back to an artist.
Bruce Springsteen and Taylor Swift might not much care about their sixth of 1 cent in streaming royalties, but for non-mega-stars, from musicians to podcasters, the streaming landscape is a complete ripoff. Disctopia is a platform dedicated to treating artists fairly, a place where creative types can upload their work and allow it to be consumed by happy listeners while cutting in the original creator for a significantly more fair share of their labors.
With a one-year subscription to Disctopia, artists have access to unlimited uploads, downloads, and storage of their work, all while retaining 100 percent of the revenue from any sales through the platform. Disctopia doesn’t charge any distribution or commission fees, so musicians and podcasters can monetize their work themselves, with music, podcast, and even merch hosting with all the money going back to the creator. There’s even an easy-to-follow dashboard to track your sales and document all your earnings, right through Disctopia.
Disctopia is already a robust content hub with nifty customization options, including the ability to create private content, as well as exclusive podcast episodes, then manage and earn from that content. Even your downloaded music and podcasts from other services will play in Disctopia’s content-friendly app and player.
That positive user and artist experience extends to the presentation as well, with no ads or ad-based user tracking of any kind. And Disctopia is designed to integrate perfectly with all the most popular music and podcast distribution platforms around, including Apple Podcasts, Spotify, Pandora, Amazon Music, Deezer, Google Podcasts, and more.
Disctopia is committed to 100 percent transparency with its users, and those creators have responded overwhelmingly to its model, giving Disctopia a resounding 4.9 stars out of 5 among reviews in both the Google Play and Apple App stores.
A one-year Creative plan to help artists start creating their own unlimited offering of music and podcasts usually costs $119, but with the current deal, users can enter the code CYBER20 to get an extra 20 percent off the already discounted price. With that added savings, a 12-month Disctopia Creative Plan subscription is just $39.99.
VentureBeat Deals is a partnership between VentureBeat and StackCommerce. This post does not constitute editorial endorsement. If you have any questions about the products you see here or previous purchases, please contact StackCommerce support here.
Prices subject to change.
"
|
3,949 | 2,021 |
"AI Weekly: AI prosecutors and pong-playing neurons closed out 2021 | VentureBeat"
|
"https://venturebeat.com/uncategorized/ai-weekly-ai-prosecutors-and-pong-playing-neurons-closed-out-2021"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: AI prosecutors and pong-playing neurons closed out 2021 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
In the week that drew 2021 to a close, the tech news cycle died down, as it typically does. Even an industry as fast-paced as AI needs a reprieve sometimes — especially as a new COVID-19 variant upends plans and major conferences.
But that isn’t to say late December wasn’t eventful.
One of the most talked-about stories came from the South China Morning Post (SCMP), which described an “AI prosecutor” developed by Chinese researchers that can reportedly identify crimes and press charges “with 97% accuracy.” The system — which was trained on 1,000 “traits” sourced from 17,000 real-life cases of crimes from 2015 to 2020, like gambling, reckless driving, theft, and fraud — recommends sentences given a brief text description. It’s already been piloted in the Shanghai Pudong People’s Procuratorate, China’s largest district prosecution office, according to SCMP.
It isn’t surprising that a country like China — which, like parts of the U.S., has embraced predictive crime technologies — is pursuing a black-box stand-in for human judges. But the implications are nonetheless worrisome for those who might be subjected to the AI prosecutor’s judgment, given how inequitable algorithms in the justice system have historically been shown to be.
Published last December, a study from researchers at Harvard and the University of Massachusetts found that the Public Safety Assessment (PSA), a risk-gauging tool that judges can opt to use when deciding whether a defendant should be released before a trial, tends to recommend sentencing that’s too severe. Moreover, the PSA is likely to impose a cash bond on male arrestees versus female arrestees, according to the researchers — a potential sign of gender bias.
The U.S. justice system has a history of adopting AI tools that are later found to exhibit bias against defendants belonging to certain demographic groups. Perhaps the most infamous of these is Northpointe’s Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), which is designed to predict a person’s likelihood of becoming a recidivist. A ProPublica report found that COMPAS was far more likely to incorrectly judge black defendants to be at higher risk of recidivism than white defendants, while at the same time flagging white defendants as low risk more often than black defendants.
With new research showing that even training predictive policing tools in a way meant to lessen bias has little effect, it’s become clear — if it wasn’t before — that deploying these systems responsibly today is infeasible. That’s perhaps why some early adopters of predictive policing tools, like the police departments of Pittsburgh and Los Angeles, have announced they will no longer use them.
But with less scrupulous law enforcement, courtrooms, and municipalities plowing ahead, regulation driven by public pressure is perhaps the best bet for reining in and setting standards for the technology. Cities including Santa Cruz and Oakland have outright banned predictive policing tools, as has New Orleans. And the nonprofit group Fair Trials is calling on the European Union to include a prohibition on predictive crime tools in its proposed AI regulatory framework.
“We do not condone the use [of tools like the PSA],” Ben Winters, the creator of a report from the Electronic Privacy Information Center that called pretrial risk assessment tools a strike against individual liberties, said in a recent statement. “But we would absolutely say that where they are being used, they should be regulated pretty heavily.”
A new approach to AI
It’s unclear whether even the most sophisticated AI systems understand the world the way that humans do. That’s another argument in favor of regulating predictive policing, but one company, Cycorp — which was profiled by Business Insider this week — is seeking to codify general human knowledge so that AI might make use of it.
Cycorp’s prototype software, which has been in development for nearly 30 years, isn’t programmed in the traditional sense. Cycorp can make inferences that an author might expect a human reader to make. Or it can pretend to be a confused sixth-grader, tasking users with helping it to learn sixth-grade math.
Is there a path to AI with human-level intelligence? That’s the million-dollar question. Experts like Yann LeCun, vice president and chief AI scientist at Facebook, and Yoshua Bengio, renowned computer science professor and artificial neural networks expert, don’t believe it’s within reach, but others beg to differ.
One promising direction is neuro-symbolic reasoning, which merges learning and logic to make algorithms “smarter.” The thought is that neuro-symbolic reasoning could help incorporate common sense reasoning and domain knowledge into algorithms to, for example, identify objects in a picture.
New paradigms could be on the horizon, like “synthetic brains” made from living cells. Earlier this month, researchers at Cortical Labs created a network of neurons in a dish that learned to play Pong faster than an AI system. The neurons weren’t as skilled at Pong as the system, but they took only five minutes to master the mechanics versus the AI’s 90 minutes.
Pong hardly mirrors the complexity of the real world. But in tandem with forward-looking hardware like neuromorphic chips and photonics, as well as novel scaling techniques and architectures, the future looks bright for more capable, potentially human-like AI. Regulation will catch up, with any luck. We’ve seen a preview of the consequences — including wrongful arrests, sexist job recruitment, and erroneous grades — if it doesn’t.
For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.
Thanks for reading,
Kyle Wiggers
AI Staff Writer
"
|
3,950 | 2,022 |
"A look back at recent AI trends -- and what 2022 might hold | VentureBeat"
|
"https://venturebeat.com/uncategorized/a-look-back-at-recent-ai-trends-and-what-2022-might-hold"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages A look back at recent AI trends — and what 2022 might hold Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
2021 was an eventful year for AI. With the advent of new techniques, robust systems that can understand the relationships not only between words but words and photos, videos, and audio became possible. At the same time, policymakers — growing increasingly wary of AI’s potential harm — proposed rules aimed at mitigating the worst of AI’s effects, including discrimination.
Meanwhile, AI research labs — while signaling their adherence to “responsible AI” — rushed to commercialize their work, either under pressure from corporate parents or investors. But in a bright spot, organizations ranging from the U.S. National Institute of Standards and Technology (NIST) to the United Nations released guidelines laying the groundwork for more explainable AI, emphasizing the need to move away from “black-box” systems in favor of those whose reasoning is transparent.
As for what 2022 might hold, the renewed focus on data engineering — designing the datasets used to train, test, and benchmark AI systems — that emerged in 2021 seems poised to remain strong. Innovations in AI accelerator hardware are another shoo-in for the year to come, as is a climb in the uptake of AI in the enterprise.
Looking back at 2021
Multimodal models
In January, OpenAI released DALL-E and CLIP, two multimodal models that the research lab claims are “a step toward systems with [a] deeper understanding of the world.” DALL-E, its name inspired by Salvador Dalí, was trained to generate images from simple text descriptions, while CLIP (for “Contrastive Language-Image Pre-training”) was taught to associate visual concepts with language.
DALL-E and CLIP turned out to be the first in a series of increasingly capable multimodal models in 2021. Beyond reach a few years ago, multimodal models are now being deployed in production environments, improving everything from hate speech detection to search relevancy.
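Part of why CLIP spread so quickly is how small its inference surface is. Here is a minimal zero-shot classification sketch using the openly released weights through the Hugging Face transformers library; the checkpoint name is the published one, while the image path and candidate labels are illustrative:

```python
# Minimal sketch: zero-shot image classification with released CLIP
# weights via Hugging Face transformers.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")  # any local image
labels = ["a photo of a dog", "a photo of a cat", "a photo of a landscape"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
# Higher image-text similarity logits mean a better caption match.
probs = outputs.logits_per_image.softmax(dim=1)
print(dict(zip(labels, probs[0].tolist())))
```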
Google in June introduced MUM, a multimodal model trained on a dataset of documents from the web that can transfer knowledge between different languages. MUM, which doesn’t need to be explicitly taught how to complete a task, is able to answer questions in 75 languages, including “I want to hike to Mount Fuji next fall — what should I do to prepare?” while realizing that “prepare” could encompass things like fitness as well as weather.
Not to be outdone, Nvidia recently released GauGAN2, the successor to its GauGAN model, which lets users create lifelike landscape images that don’t actually exist. Combining techniques like segmentation mapping, inpainting, and text-to-image generation, GauGAN2 can create photorealistic art from a mix of words and drawings.
Large language models
Large language models (LLMs) came into their own in 2021, too, as interest in AI for workloads like generating marketing copy, processing documents, translation, conversation, and other text tasks grew. Previously the domain of well-resourced organizations like OpenAI, Cohere, and AI21 Labs, LLMs were suddenly within reach of startups to commercialize, thanks partially to the work of volunteer efforts like EleutherAI. Corporations like DeepMind still muscled their way to the top of benchmarks, but a growing cohort of companies — among them CoreWeave, NLP Cloud, and Neuro — began serving models with features akin to OpenAI’s GPT-3 to customers.
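EleutherAI's openly released GPT-Neo checkpoints illustrate how low the barrier to serving a GPT-3-style model has become. A minimal generation sketch, assuming the transformers library and the hosted 1.3B-parameter checkpoint (the prompt and sampling settings are arbitrary):

```python
# Minimal sketch: GPT-3-style text generation with an open checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")
prompt = "Write a one-sentence product description for a smart thermostat:"
result = generator(prompt, max_length=60, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```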
Motivated in equal parts by sovereignty and competition, large international organizations took it upon themselves to put massive computational resources toward LLM training. Former OpenAI policy director Jack Clark, in an issue of his Import AI newsletter, said that these models are a part of a general trend of “different nations asserting their own AI capacity [and] capability via training frontier models like GPT-3.” Naver, the company behind the South Korean search engine of the same name, created a Korean-language equivalent to GPT-3 called HyperCLOVA.
For their part, Huawei and Baidu developed PanGu-Alpha (stylized PanGu-α) and PCL-BAIDU Wenxin (Ernie 3.0 Titan), respectively, which were trained on terabytes of Chinese-language ebooks, encyclopedias, and social media.
Pressure to commercialize
In November, Google’s parent company, Alphabet, established a subsidiary focused on AI-powered drug discovery called Isomorphic Labs, helmed by DeepMind cofounder Demis Hassabis. The launch of Isomorphic underlined the increasing pressure in 2021 on corporate-backed labs to pursue research with commercial, as opposed to purely theoretical, applications.
For example, while DeepMind remains engaged in prestigious projects like systems that can beat champions at StarCraft II and Go, the lab has turned its attention to more practical domains in recent years, like weather forecasting, materials modeling, atomic energy computation, app recommendations, and datacenter cooling optimization.
Similarly, OpenAI — which started as a nonprofit in 2016 but transitioned to a “capped-profit” in 2019 — made GPT-3 generally available through its paid API in late November following the launch of Codex, its AI-powered programming assistant, in August.
The emphasis on commercialization in 2021 is at least partially attributable to the academic “brain drain” in AI throughout the last decade. One paper found that ties to corporations — either funding or affiliation — in AI research doubled to 79% from 2008 and 2009 to 2018 and 2019. And from 2006 to 2014, the proportion of AI publications with a corporate-affiliated author increased from about 0% to 40%, reflecting the growing movement of researchers from institutions to enterprises.
The academic process isn’t without flaws of its own. There’s a concentration of compute power at elite universities, for example. But it’s been shown that commercial projects unsurprisingly tend to underemphasize values such as beneficence, justice, and inclusion.
Increased funding — and compute
Dovetailing with the commercialization trend, investors poured more money into AI startups than at any point in history. According to a November 2021 report from CB Insights, AI startups around the world raised $50 billion across more than 2,000 deals — surpassing 2020 levels by 55%.
Cybersecurity and processor companies led the wave of newly minted unicorns (companies with valuations over $1 billion), with finance, insurance, retail, and consumer packaged goods following close behind. Health care AI continued to have the largest deal share, which isn’t surprising considering that the AI in health care market is projected to grow from $6.9 billion to $67.4 billion by 2027.
Driving the funding, in part, is the rising cost of state-of-the-art AI systems. DeepMind reportedly set aside $35 million to train an AI system to learn Go; OpenAI is estimated to have spent $4.6 million to $12 million training GPT-3. Meanwhile, companies developing autonomous vehicle technologies have spun off, merged, agreed to be acquired, or raised hundreds of millions in venture capital to cover operating and R&D costs.
While relatively few startups are developing their own massive, costly AI models, running models can be just as expensive. One estimate pegs the price of running GPT-3 on a single Amazon Web Services instance at a minimum of $87,000 per year. APIs can be cheaper than self-hosted options, but not dramatically so. A hobbyist site powered by OpenAI’s GPT-3 API was forced to consider shutting down after estimating that it would have to pay a minimum of $4,000 monthly.
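The arithmetic behind such estimates is straightforward. A back-of-the-envelope sketch, in which the per-token price and traffic figures are assumptions for illustration rather than published pricing:

```python
# Back-of-the-envelope cost model for a metered text-generation API.
# All three constants are assumptions for illustration only.
PRICE_PER_1K_TOKENS = 0.06   # assumed USD price for a large model
TOKENS_PER_REQUEST = 1_000   # assumed prompt + completion size
REQUESTS_PER_DAY = 2_000     # assumed traffic

monthly_cost = (
    PRICE_PER_1K_TOKENS * (TOKENS_PER_REQUEST / 1_000) * REQUESTS_PER_DAY * 30
)
print(f"Estimated monthly API bill: ${monthly_cost:,.2f}")  # $3,600.00
```

At those assumed rates, even a modest hobby project lands in the thousands of dollars per month, consistent with the scale of the estimates above.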
Regulation and guidelines
In light of the accelerating commercialization of AI, policymakers have responded with rules to rein in — and make more transparent — AI systems. Corporate disregard for ethics forced regulators’ hands in some cases. After firing high-profile ethicists Timnit Gebru and Margaret Mitchell, Google tried to reassure employees that it remained committed to its AI ethics principles while at the same time attempting to limit internal research that showed its technologies in a bad light. Reports have described Meta’s (formerly Facebook’s) AI ethics team, too, as largely toothless and ineffective.
In April, the European Union proposed regulations to govern the use of AI across the bloc’s 27 member states. They impose bans on the use of biometric identification systems in public, like facial recognition (with some exceptions). And they prohibit AI in social credit scoring, the infliction of harm (such as in weapons), and subliminal behavior manipulation.
Following suit, the U.N.’s Educational, Scientific, and Cultural Organization (UNESCO) in November approved a series of recommendations for ethics that aim to recognize that AI can “be of great service” while raising “fundamental … concerns.” UNESCO’s 193 member countries, including Russia and China, agreed to conduct AI impact assessments and place “strong enforcement mechanisms and remedial actions” to protect human rights.
While the policy is nonbinding, China’s support is significant because of the country’s stance on the use of AI surveillance technologies. According to the New York Times, the Chinese government has piloted the use of predictive technology to sweep a person’s transaction data, location history, and social connections to determine whether they’re violent. Chinese companies such as Dahua and Huawei have developed facial recognition technologies, including several designed to target Uighurs, an ethnic minority widely persecuted in China’s Xinjiang province.
Spurred by vendors like Clearview AI, bans on technologies like facial recognition also picked up steam across the U.S. in 2021 — at least at the local level. California lawmakers passed a law that will require warehouses to disclose the algorithms that they use to track workers. And NYC recently banned employers from using AI hiring tools unless a bias audit can show that they won’t discriminate.
Elsewhere, the U.K.’s Centre for Data Ethics and Innovation (CDEI) recommended this year that public sector organizations using algorithms be mandated to publish information about how the algorithms are being applied, including the level of supervision. Even China has tightened its oversight of the algorithms that companies use to drive certain parts of their business.
Rulemaking in AI for defense remains murkier territory. For some companies, like Oculus cofounder Palmer Luckey’s Anduril and Peter Thiel’s Palantir, military AI contracts have become a top revenue source. While the U.S., France, the U.K., and others have developed autonomous defense technologies, countries like Belgium and Germany have expressed concerns about the implications.
Staking out its position, the U.S. Department of Defense published a whitepaper in December — circulated among the National Oceanic and Atmospheric Administration, the Department of Transportation, ethics groups at the Department of Justice, the General Services Administration, and the Internal Revenue Service — outlining “responsible … guidelines” that establish processes intended to “avoid unintended consequences” in AI systems. NATO also this year released an AI strategy listing the organization’s principles for “responsible use [of] AI,” as the U.S. National Institute of Standards and Technology began working with academia and the private sector to develop AI standards.
“[R]egulators are unlikely to step completely aside” anytime soon, analysts at Deloitte wrote in a recent report examining trends in the AI industry. “It’s a nearly foregone conclusion that more regulations over AI will be enacted in the very near term. Though it’s not clear exactly what those regulations will look like, it is likely that they will materially affect AI’s use.”
Predictions for 2022
Milestone moments
Looking ahead to 2022, technical progress is likely to accelerate in the multimodal and language domains. Funding, too, could climb exponentially as investors’ appetites grow for commercialized AI in call center analytics, personalization, and cloud usage optimization.
“This past year, we saw AI capable of generating its own code to construct increasingly complex AI systems. We will continue to see growth in both AI that can write its own code in different programming languages, as well as AI that allows people to simply speak their instructions,” Yoav Schlesinger of Salesforce’s ethical AI practice said in a recent blog post. “These speech-to-code engines will generate images, video, and code using natural commands without worrying about syntax, formatting, or symbols. Say ‘I’d like an image of a purple giraffe with orange spots, wings, and wheels instead of legs’ and watch what the AI generates.”
Os Keyes, an AI ethics researcher at the University of Washington, believes that the pandemic has brought attention to the broader implications of AI on working conditions and inequality. That includes the conditions underpinning much of AI’s development, Keyes says, which often depend on low-paid, low-skilled, and crowdsourced work. For example, a growing body of research points to the many problems with datasets and benchmarks in machine learning, including sociocultural and institutional biases — in addition to missteps introduced by human annotators.
“I think there’s a real opportunity here to push for changes in how we conceive of automation and the deployment of technology in daily life, as we’re pushing for changes in how that life is financed,” Keyes told VentureBeat via email.
At the same time, Keyes cautions that the pandemic and its effects “[have] been a godsend” to companies that see new opportunities to “monetize the rot.” Keyes points to the spread of facial recognition for social distancing and opportunities to exploit organizations’ desires to be lean, efficient, and low in headcount, like workplace monitoring software.
“There are a ton of places where half-baked tools — which describes both the software and its developers — are being dangled in front of finance folk[s]. Algorithms, after all, don’t ask for pension contributions,” Keyes added. “I worry that without sustained attention, we’ll flub the opportunities for regulation, ethical standards, and reimagining technology that this crisis moment has catalyzed. It’s all too easy for those who already have money to ‘ethics wash’ practices, and to a degree, we can see that already happening with the nonsensical NIST work on AI trust.”
Mike Cook, an AI researcher and game designer, believes that 2022 might see bigger research labs like DeepMind and OpenAI look for a new “milestone moment” in AI. He also thinks that AI will continue to pop up more in everyday products, especially photography and video, and that some companies will try to blend NFTs, the metaverse, and AI into the same product.
“It’s been a while since something truly headline-grabbing happened from the pure AI labs, especially on a par with the AlphaGo and Lee Sedol match in 2016, for instance … [We could see] AI that can invent a cure for something, synthesize a new drug to treat an illness, or prove a mathematical conjecture, for example,” Cook said. “[Elsewhere, if] we look at what Photoshop, TikTok and other image-driven apps are using AI for currently, we can see we’re not too far off the ability to have AI insert our friends into group photographs that they missed out on, or change the pose and expression of people in selfies … I can [also] imagine us seeing some pitches for metaverse-ready AI companions that follow us from one digital world to the next, like if Alexa could play Mario Kart with you.”
Continuous learning
Joelle Pineau, the managing director at Meta AI Research, Meta’s (formerly Facebook’s) AI research division, says that 2022 will bring new AI datasets, models, tasks, and challenges that “embrace the rich nature” of the real world, as well as augmented and virtual reality. (It should be noted that Meta has a vested interest in the success of augmented and virtual reality technologies, having pledged to spend tens of billions of dollars on their development in its quest to create the metaverse.)
“[I foresee new] work on AI for new modalities, including touch, which enables our richer sensory interaction with the world,” Pineau told VentureBeat via email. “[I also expect] new work embracing [the] use of AI for creativity that enhances and amplifies human expression and experience; advances in egocentric perception to build more useful AI assistants and home robots of the future; and advances in new standards for responsible deployment of AI technology, which reflects better alignment with human values and increased attention to safety, fairness, [and] transparency.”
More sophisticated multimodal systems could improve the quality of AI-generated videos for marketing purposes, for example, along the lines of what startups like Synthesia, Soul Machines, and STAR Labs currently offer. They could also serve as artistic tools, enabling users in industries such as film and game design to iterate and refine artwork before sending it to production.
Pineau also anticipates an increased focus on techniques like few-shot learning and continual learning, which she believes will enable AI to quickly adapt to new tasks. It could result in more systems like the recent language models from OpenAI and Meta, WebGPT and BlenderBot 2.0, which surf the web to retrieve up-to-date answers to questions posed to them.
“[Most work] remains focused on passive data, collected in large (relatively) homogeneous and stable batches. This approach may be suitable for internet-era AI models, but will need to be rethought as we look to continue to bring the power of AI to the metaverse in support of the fast-changing societies in which we live,” she said.
Indeed, many experts believe 2022 will see a heightened shift in focus from modeling to the underlying data used to develop AI systems. As the spotlight turns to the lack of open data engineering tools for building, maintaining, and evaluating datasets, a growing movement — data-centric AI — aims to address the lack of best practices and infrastructure for managing data in AI systems. Data-centric AI consists of systematically changing and enhancing datasets to improve the accuracy of an AI system, a task that has historically been overlooked or treated as a one-off task.
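One concrete data-centric step is to use a model's own cross-validated predictions to surface likely labeling errors for human review. A minimal sketch of that idea with an off-the-shelf dataset and model, chosen purely for illustration (not any particular vendor's tooling):

```python
# Minimal sketch of a data-centric pass: rank training examples by how
# implausible the model finds their assigned labels, then review them.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = load_digits(return_X_y=True)
probabilities = cross_val_predict(
    LogisticRegression(max_iter=2000), X, y, cv=5, method="predict_proba"
)
# Confidence the model assigns to each example's *given* label.
label_confidence = probabilities[np.arange(len(y)), y]
suspects = np.argsort(label_confidence)[:20]  # 20 least plausible labels
print("Indices worth re-checking for labeling errors:", suspects)
```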
Tangibly, this might mean more compute-efficient approaches to LLM development (such as a mixture of experts) or the use of synthetic datasets.
Despite its drawbacks, synthetic data — AI-generated data that can stand in for real-world data — is already coming into wider use, with 89% of tech execs in a recent survey saying that they believe it’s the key to staying ahead.
Gartner has predicted that by 2024, synthetic data will account for 60% of all data used in AI development.
“While AI has transformed the software internet industry, much work remains to be done to have it similarly help other industries,” Andrew Ng, the founder of Landing AI and cofounder of Coursera, told VentureBeat in a recent interview. “Data-centric AI — the discipline of systematically engineering the data used to build AI systems — is a rapidly rising technology that will be key to democratizing access to cutting edge AI systems.”
Enterprise uptake
Eric Boyd, a corporate VP at Microsoft’s Azure AI platform, thinks that the data-centric AI movement will bolster the demand for managed solutions in 2022 among businesses that lack data expertise. O’Reilly’s latest AI Adoption in the Enterprise report found that a lack of skilled people, difficulty hiring, and a lack of quality data topped the list of challenges in AI, with 19% of companies citing the skills gap as a “significant” barrier in 2021.
“Demand for AI solutions is increasing faster now than ever before, as businesses from retail to healthcare tap data to unlock new insights. Businesses are eager to apply AI across workloads to improve operations, drive efficiencies, and reduce costs,” Boyd told VentureBeat.
Rob Gibbon, a product manager at Canonical, expects that AI will play a larger role this year in supporting software development at the enterprise level. Extending beyond code generation and autocompletion systems like Copilot and Salesforce’s CodeT5, Gibbon says that AI will be — and has been, in fact — applied to tasks like app performance optimization, adaptive workload scheduling, performance estimation and planning, automation, and advanced diagnostics. Supporting Gibbon’s assertion, 50% of companies responding to a January 2021 Algorithmia survey said that they planned to spend more on AI for these purposes, with 20% saying they would be “significantly” increasing their budgets.
The uptake, along with growing recognition of AI’s large ecological footprint, could spur new hardware (along with software) to accelerate AI workloads along the lines of Amazon’s Trainium and Graviton3 processors, Google’s fourth-generation tensor processing units, Intel-owned Habana’s Gaudi, Cerebras’ CS-2, various accelerator chips at the edge, and perhaps even photonics components.
The edge AI hardware market alone is expected to grow to $38.87 billion by 2030, growing at a compound annual growth rate of 18.8%, according to Valuates Reports.
“AI will play an increasing role in both the systems software engineers create and in the process of creation,” Gibbon said. “AI has finally come of age, and that’s down in no small part to collaborative open source initiatives like the [Google’s] TensorFlow, Keras, [Meta’s] PyTorch and MXNet deep learning projects. Continuing into 2022, we will see ever broader adoption of machine learning and AI in the widest variety of applications imaginable — from the most trivial and mundane to those that are truly transformative.”
"
|
3,951 | 2,021 |
"D-Wave opens up to gate-model quantum computing | VentureBeat"
|
"https://venturebeat.com/technology/d-wave-opens-up-to-gate-model-quantum-computing"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages D-Wave opens up to gate-model quantum computing Share on Facebook Share on X Share on LinkedIn Inside the D-Wave Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Recent advances in quantum computing show progress, but not enough to live up to years of hyperbole. An emerging view suggests the much-publicized quest for more quantum qubits and quantum supremacy may be overshadowed by a more sensible quest to make practical use of the qubits we have now.
The latter view holds particularly true at D-Wave Systems Inc., the Vancouver, B.C., Canada-based quantum computing pioneer that recently disclosed its roadmap for work on logic gate-model quantum computing systems.
D-Wave’s embrace of gates is notable. To date, the company has focused solely on quantum annealing processors. Using this probabilistic approach, it has achieved superconducting qubit processor counts that it claims outpace most others. Its latest Advantage system boasts 5,000 qubits. That’s well ahead of the 127-qubit device IBM reported in November.
There is an important caveat, as followers of the quantum business know. D-Wave’s annealing qubits don’t have the general quantum qualities that competitive quantum gate-model systems have, and the degree of processing speed-up they provide has been questioned.
Questions persist even though D-Wave has placed its systems in research labs at Google, NASA, Los Alamos National Laboratory, and elsewhere. Critics have faulted D-Wave’s qubit counts as the product of a purpose-built approach aimed at a certain class of optimization problems.
Bring on the NISQ Still, the company has a leg up on most competitors in experience, having fabricated and programmed superconducting parts since at least 2011.
For that matter, the gate-model quantum computing crew’s benchmarks have come under attack too, and its battles with scaling and with correcting quantum errors (or “noise”) have spawned the term “noisy intermediate-scale quantum” (NISQ) to describe the present era, in which users have to begin to do what they can with whatever working qubits they have.
While it will continue to work on its annealing-specific quantum variety, D-Wave has joined a gate-model quantum competition where there appears to be plenty of room for growth. According to Statista, global quantum computing revenue totaled $412 million in 2020 but is predicted to jump to $8.6 billion in 2027.
Bread, butter, and quantum computing As it moves to broaden its product mix, D-Wave hopes to boost its current system sales and availability too. News of its roadmap was followed by word that NEC Corp. would become the first global reseller of D-Wave’s Leap quantum cloud service. D-Wave also launched a Quantum QuickStart kit that echoes competitors’ efforts to open up quantum programming to everyday Python developers via the cloud.
The original decision on quantum annealing has proven itself, said Mark Johnson, vice president for quantum technology and systems products at D-Wave. He cited the use of D-Wave systems in solving optimization scheduling and routing problems while discussing D-Wave’s new roadmap with VentureBeat.
Johnson said the practical potential of quantum annealing was among the factors that led him to join the company in the early 2000s, after work on superconducting circuits at defense contractor TRW (now part of Northrop Grumman).
“Today, it’s the most effective sort of quantum technology for solving problems. There is a natural error tolerance to quantum annealing,” he said. Still, competitive gate-level advances continue.
“We’re now learning just in the last three or four years, from a growing body of published theoretical work, that it is likely quantum annealing is always going to be better than gate models at optimization problems,” Johnson said.
That said, error correction is clearly a concern. But, Johnson suggests this is a good time to join the quest to solve that as an industry. “We will all figure out together how to go beyond that,” he said. That is, while still pursuing annealing advances.
“Our bread and butter is going to continue to be quantum annealing, but we are adding another product line,” he said. D-Wave is also at work on quantum-classical hybrid solvers for optimization, he noted, to combine classical enterprise computing with quantum resources.
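To make the hybrid idea concrete, here is a minimal sketch using D-Wave’s open-source Ocean SDK, assuming a configured Leap cloud account; the toy problem and variable names are illustrative, not drawn from D-Wave’s roadmap.

```python
# Minimal sketch of a hybrid quantum-classical optimization call via
# D-Wave's Ocean SDK (pip install dwave-ocean-sdk). Assumes a Leap API
# token is configured locally; the toy problem below is illustrative.
import dimod
from dwave.system import LeapHybridSampler

# Toy problem: minimize x0 + x1 - 2*x0*x1 over binary variables
# (minima at x0 = x1 = 0 and at x0 = x1 = 1).
bqm = dimod.BinaryQuadraticModel(
    {"x0": 1.0, "x1": 1.0},   # linear coefficients
    {("x0", "x1"): -2.0},     # quadratic coefficient
    0.0,                      # constant offset
    dimod.BINARY,
)

sampler = LeapHybridSampler()  # splits work across classical and QPU resources
result = sampler.sample(bqm)
print(result.first.sample, result.first.energy)
```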
Accelerators fill the gap D-Wave’s expansion to include gate models is a natural progression for a company that “cut its teeth on annealing architecture,” according to Bob Sorensen, senior vice president and chief analyst for quantum computing at Hyperion Research.
“The company has good skills in building cryogenically cooled technology usable for quantum computing applications,” he said. “In the transition to a gate-model architecture — although it’s not trivial — they bring a lot of smarts to the table.” At the same time, the ongoing movement that sees quantum processing becoming a part of established computing stacks is the important one to watch, Sorensen said. It’s not unfair to call this a hybrid approach , he said.
“Parts of a job might be done on the one system, go over to the other, and then come together – iterating between the classical and the quantum system to take advantage of the performance capabilities of both,” Sorensen said.
Likely, he said, quantum computing will offer benefits for workloads in specific applications. As such, quantum processors in hybrid quantum-classical combos will resemble the GPU in the role it has assumed for AI in datacenters; that is, as an auxiliary processor.
“Think of it as an accelerator for specific advanced computing workloads ,” he said.
Effectively handling workloads is more important than reaching a 1,000-qubit gate-model level or proving quantum supremacy versus classical computing. Instead, what is important for vendors “is demonstrating the fact that you can solve end-user problems effectively,” Sorensen said.
"
|
3952 | 2022 |
"22 digital twins trends that will shape 2022 | VentureBeat"
|
"https://venturebeat.com/technology/22-digital-twins-trends-that-will-shape-2022"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 22 digital twins trends that will shape 2022 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
VentureBeat kicked off its 2021 digital twins coverage with Accenture’s prediction that digital twins would be a top trend for the year. Meanwhile, other consultancies and systems integration leaders expanded their respective digital twins’ practices to improve product development, manufacturing, and supply chains. Lessons from these early implementations are helping to shape best practices that will allow more enterprises to succeed in 2022.
Digital twin capabilities are infused into other tools for product design, engineering, construction, manufacturing, and supply chain management rather than sold as individual tools. Enterprises face numerous challenges around weaving together a mix of data architecture, knowledge graphs, processes, and cultures required to get the most from digital twin implementations. Here are 22 ways that advances in these tools and new uncertainties in the world economy are likely to play out in 2022 across core capabilities, medicine, construction, and sustainability.
Digital Twin Capabilities 1. Cloud providers expand digital twin support All the major cloud providers rolled out significant digital twin capabilities in 2021. Microsoft revealed a digital twin ontology for construction and building management. Google launched a digital twin service for logistics and manufacturing.
AWS launched IoT TwinMaker and FleetWise to simplify digital twins of factories, industrial equipment, and vehicle fleets. Nvidia launched a metaverse for engineers as a subscription service across Nvidia’s partner network. In 2022, these providers are likely to learn from early adopters to improve the offerings with support for more kinds of twins, better integration capabilities, and enhanced user experiences.
2. Digital twin leaders build new onramps Leading digital twin pioneers are likely to expand support for and integration with these emerging cloud offerings. In the PLM space this includes Siemens, PTC, and Dassault Systèmes. In architecture and construction, this includes Autodesk and Bentley. Last year, Bentley enhanced integration with Microsoft’s Azure Digital Twins platform and the Nvidia Omniverse ecosystem. Over the next year, digital twin leaders are likely to shift from running existing tools on top of IaaS infrastructure towards native PaaS offerings that reduce barriers for enterprises. Leading manufacturers like Volkswagen are already starting to collaborate with cloud providers on a new generation of industrial cloud services. These new cloud-native services will help enterprises build up their digital twin infrastructure regardless of which cloud platform they choose.
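For a taste of what these cloud-native services look like to a developer, here is a hedged sketch against one of them, Azure Digital Twins, via its Python SDK; the endpoint URL and twin ID are placeholders, not details from any vendor announcement.

```python
# Hedged sketch: reading one twin's live state from Azure Digital Twins
# (pip install azure-digitaltwins-core azure-identity). The endpoint and
# twin ID below are placeholders for a real instance.
from azure.identity import DefaultAzureCredential
from azure.digitaltwins.core import DigitalTwinsClient

endpoint = "https://<your-instance>.api.wus2.digitaltwins.azure.net"
client = DigitalTwinsClient(endpoint, DefaultAzureCredential())

twin = client.get_digital_twin("pump-42")  # returned as a dict-like document
print(twin["$metadata"]["$model"], twin.get("Temperature"))
```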
3. Simplifying the interface of things Most digital twin efforts focus on the physical aspects of things in the real world, such as how they are built and how they will respond to the physical environment. New digital twin APIs provide standard ways to communicate with and control equipment made by different manufacturers.
Mapped recently launched one such commercial service on top of the open-source Brick Schema for describing physical, logical, and virtual assets.
Zyter partnered with Qualcomm’s Smart Cities Consortium to standardize interfaces for lighting, construction safety, clinical data management, and remote patient monitoring. In 2022, developers, startups, and established enterprises will find new ways to craft new user experiences, improve efficiency, and generate revenues that take advantage of a better interface of things.
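To illustrate what a standardized “interface of things” buys developers, here is a hedged sketch that queries a Brick Schema building model with the open-source rdflib library; the file name and the specific query are assumptions for illustration, not part of Mapped’s or Zyter’s products.

```python
# Hedged sketch: querying a Brick Schema model of a building with rdflib
# (pip install rdflib). "building.ttl" is an assumed RDF/Turtle export
# describing the site's assets.
from rdflib import Graph, Namespace

BRICK = Namespace("https://brickschema.org/schema/Brick#")

g = Graph()
g.parse("building.ttl", format="turtle")

# Find every temperature sensor and the equipment it is a point of,
# regardless of which manufacturer built the underlying hardware.
query = """
SELECT ?sensor ?equipment WHERE {
    ?sensor a brick:Temperature_Sensor .
    ?sensor brick:isPointOf ?equipment .
}
"""
for sensor, equipment in g.query(query, initNs={"brick": BRICK}):
    print(sensor, "->", equipment)
```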
4. Data interop helps grow digital twin ecosystem Early digital twin designs focused on individual digital twins. The Digital Twins Consortium’s interoperability standards will make it easier to compose larger-scale digital twin assemblies from a library of designs. Engineers and systems designers will spend more time designing applications from pre-tested components and less time figuring out how to integrate the applications. This will accelerate efforts to reuse digital twins components across multiple designs in the same way that open-source software has accelerated software development in domains such as supply chains, smart cities, manufacturing, construction, power grids, and water infrastructure.
5. Digital twins catalogs streamline design Broader interoperability will help electronic, mechanical, and construction marketplaces create digital twin design catalogs to help designers evaluate tradeoffs between physical characteristics, lead time, and supply chain bottlenecks. Electronic and mechanical marketplaces are already surfacing interoperable CAD data, and construction catalogs are surfacing building information modeling components. Digital twins are the next step. These catalogs could also present digital twins of assemblies that combine raw components, manufacturing work, and engineering intellectual property that further simplifies design choices and compensates all participants. New NFT marketplaces could provide a way to automate compensation for design components assembled from multiple engineering firms.
6. Faster simulation drives new use cases Simulation tool providers such as Altair, Ansys, Akselos, Cadence, Nnaisense, and Synopsys are discovering new simulation algorithms that provide performance gains in software at a much faster clip than Moore’s law. Hardware innovations from Nvidia and Cerebras can amplify these gains to enable a million-fold improvement in performance. This will allow engineers to explore more complex models reflecting electrical, quantum, and chemical effects in areas like battery design and better solar cells. Faster models will also lead to quicker and more actionable predictive maintenance models.
7. Process intelligence accelerates digital twins of organizations The process mining market demonstrated triple-digit growth last year. These tools analyze ERP and CRM application logs to create a digital twin of business processes. Other task mining tools extend the reach of these tools using machine vision to watch over a user’s shoulder as they type and click their way across multiple apps. Process intelligence further extends these capabilities with sophisticated analytics and can automatically generate scripts to automate processes. Over the last year, major RPA vendors, including Automation Anywhere, Blue Prism, and UiPath, enhanced their process intelligence capabilities. Gartner predicts that combining these capabilities will help drive the market for hyperautomation, which it expects to hit $596 billion in 2022.
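As a minimal sketch of how process mining bootstraps a digital twin of an organization, the open-source pm4py library can discover a process model straight from an application event log; the log file name here is an assumption.

```python
# Minimal process mining sketch with the open-source pm4py library
# (pip install pm4py). "erp_event_log.xes" is an assumed export of
# ERP/CRM activity events, one row per case/activity/timestamp.
import pm4py

log = pm4py.read_xes("erp_event_log.xes")
net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)
pm4py.view_petri_net(net, initial_marking, final_marking)  # render the model
```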
8. Human digital twins cautiously embraced Industries will cautiously embrace new tools for capturing data about human processes in manufacturing, logistics, warehouses, and stores. These digital twins can help identify opportunities to streamline processes, optimize schedules, and increase uptime. They can also help incorporate social-distancing factors into new and existing processes. Companies to watch include Drishti, Hitachi Vantara, and Tulip Interfaces. Caution is needed to protect against pushback against algorithmic management techniques that neglect employee needs. New approaches for crafting digital twins reflecting human well-being from companies like ProGlove could help.
Health care 9. Medical records consolidate to the cloud VentureBeat covered 21 ways medical digital twins could improve medical outcomes, streamline healthcare processes, and create new medical services and products. Efforts to bridge gaps between medical processes, data, and applications will help realize these promises. One challenge is that electronic health care records data is often fragmented across multiple systems for insurance, general practitioners, and acute care providers. Bringing this data together is a prerequisite for better medical digital twins. In 2022, innovators will find ways to consolidate these silos to drive meaningful gains. Oracle’s planned acquisition of Cerner, the most prominent electronic health records vendor, suggests one general path for the industry. Microsoft is focusing on improving the record capture side of the process with its Nuance acquisition.
Others will find ways to drive integration through the broader adoption of health care APIs.
10. Medical regulators drive medical digital twins Medical regulators will pursue a more active role in driving the adoption of medical digital twins. The U.S. FDA recently worked with Siemens to demo drug production with a digital twin factory line. The FDA is also working with researchers to develop digital twins of drugs that link adverse outcomes with patient medical records to improve medical product safety surveillance.
Regulators have struggled to weave wearable data into patient digital twins. Still, COVID and remote care concerns will help erode the barriers between consumer health tech and medical health data over the next year. 2022 will bring more regulatory support and guidance to improve drug and medical device development, production, and surveillance. In addition, enhanced digital twins of medical organizations will help overworked teams respond to COVID outbreaks while also treating other diseases.
11. Proteomics propels precision medicine Precision medicine has made marginal improvements since the FDA reported in 2013 that medication was deemed ineffective for 38-75% of patients with common diseases. Last year, Google’s DeepMind open-sourced AlphaFold, a system that automatically models the physical structure of proteins from their raw amino acid sequences. This could turbocharge efforts to build personalized digital twins reflecting the effects of drugs and other interventions at the level of proteins (proteomics) and genes. This promises to simplify efforts to build digital twins of people to improve precision medicine. It could also accelerate drug development and environmental research. Efforts like the Swedish Digital Twin Consortium will help combine these proteomic methods with systems-level approaches to construct disease models of unprecedented resolution. But this will require solving a wide range of technical, medical, ethical, and theoretical challenges.
12. Privacy-preserving twins gain traction One long-term goal is for doctors to assess how similar patients respond to various treatments using a digital twin before prescribing a new treatment plan. The research for building these models has been hindered by strict privacy regulations and practices. Emerging new confidential computing techniques are already showing promise in crafting privacy-preserving medical digital twins to predict long COVID outcomes. Researchers will extend these techniques to other diseases in 2022. Additionally, synthetic data techniques will allow researchers to scale up the data from training these models for rare diseases.
13. Lab digital twins support experiments-as-code Even when researchers believe they have exhaustively described an experiment they can repeat in their own lab, some details get missed in other labs. A new breed of robotic laboratories lets researchers use digital twins to manage experiments as code to allow precise replication in the next lab over or around the world. This kind of repeatability could allow medical research to see the same gains in automation, efficiency, and repeatability that IT saw with infrastructure as code. Pioneers include companies such as Strateos, 3Dispense, Opentrons, and Protedyne. Strateos claims this approach can reduce manual labor by 90% and increase productivity by 300%. Next up, similar approaches could transform the farming industry by automating experiments with small container plots to optimize yield, flavor, nutrition, and disease resistance.
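To show what “experiments as code” means in practice, here is a hedged sketch in the style of the Opentrons Python protocol API; the labware and pipette names are illustrative and would need to match a real deck layout.

```python
# Hedged sketch of an "experiment as code" in the style of the Opentrons
# Python protocol API. Labware and instrument names are illustrative.
from opentrons import protocol_api

metadata = {"apiLevel": "2.11"}

def run(protocol: protocol_api.ProtocolContext):
    plate = protocol.load_labware("corning_96_wellplate_360ul_flat", 1)
    tips = protocol.load_labware("opentrons_96_tiprack_300ul", 2)
    pipette = protocol.load_instrument(
        "p300_single_gen2", mount="right", tip_racks=[tips]
    )
    # The repeatable "experiment": move 50 uL from well A1 to well B1.
    pipette.transfer(50, plate["A1"], plate["B1"])
```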
Construction 14. Digital twins help Build Back Better and faster The US Build Back Better initiative promises billions of dollars in direct construction investments. New reality capture services for surveying existing and potential infrastructure could help cities prioritize opportunities to qualify for more of this money. Examples include automatically generating digital twins of roads and bridges with offerings from firms such as Esri (maker of ArcGIS), Where Technologies, Nexar, EyeVI Technologies, and RoadBotics.
Construction digital twins will help track progress on new construction to optimize scheduling, catch missing details, and ensure safe construction practices. Companies to watch include Autodesk, Bentley, Matterport, and Agora.
15. Construction portfolios scale construction efficiently The massive investment could also help galvanize investment in new tools for managing portfolios of digital twins at scale. For example, cities, states, and government agencies will increasingly adopt tools for prioritizing, measuring progress, and identifying bottlenecks across hundreds or thousands of individual projects. Cloud-based capital planning and management tools from companies like Aurigo Software and Oracle will help streamline these large-scale processes.
16. More construction pros work from home Construction has traditionally been a hands-on process. The combination of pandemic-related concerns and improved construction digital twins could accelerate the adoption of remote construction work. Some firms are finding that this is not only safer but also more efficient.
Construction managers can track more sites without leaving the office. Helmet-mounted cameras, drones, and AI-powered capture processes can streamline documentation and validation processes. Down the road, city building inspectors could also automate inspections and reviews using a combination of AI and certified capture processes. Leading companies include AI Clearing, Buildots, Cupix, OpenSpace, and UrsaLeo.
17. Better energy and structural modeling improve building efficiency Digital twins will accelerate the adoption of more energy-efficient building designs in several ways. First, better energy performance models will help architects assess the energy implications and estimated operating costs of various tradeoffs earlier in the design process. Second, better structural performance simulations could allow progressive building departments to vet the safety of new construction techniques like 3D-printed homes that building codes don’t yet support.
18. Bring back better Trade tensions with China and global supply chain bottlenecks will encourage many firms to bring manufacturing closer to the biggest markets. Leading vendors of manufacturing execution systems, PLM, and process manufacturing will introduce new services to bring back better. These will combine digital twins of products and factories to manage factory configurations and product designs as code to provide further flexibility and quality control. Factory workers will be upskilled to higher-paying IT-related jobs to maintain this equipment.
Sustainability 19. Green taxonomies guide sustainable investments Politicians, businesses, and researchers have developed numerous approaches for measuring and incentivizing sustainability. And sometimes, these goals have led to paradoxical efforts like cutting down and burning forests as part of the “biggest decarbonization project in Europe.” Several efforts to develop taxonomies and reporting standards could bring more common sense and collective wisdom to sustainability investing. Taxonomies are a core component of individual digital twins that reflect the impact of companies and can be aggregated into portfolios reflecting the aggregate effect of investments across many companies. European initiatives to create an EU Taxonomy of sustainability activities have driven what Bloomberg calls an ESG gold rush that should spread globally. It predicts ESG assets are on track to exceed $53 trillion by 2025, representing one-third of global investments. Now the US SEC is planning climate disclosure rules, and the International Financial Reporting Standards Foundation is working on sustainability disclosure standards.
Over the next year, keep an eye on more substantive ESG investment services that use digital twins to analyze and forecast investments’ environmental and financial returns. The leaders will differentiate by providing better context and alignment for investors.
20. Circulytics drives sustainable design The circular economy is an emerging concept that considers how products are sourced, repaired, reused, and recycled at the end of life. Product design digital twins can help designers consider the environmental impact of the components that go into a product. Product lifecycle digital twins can also simulate different repair and reuse scenarios to minimize the impact on landfills and those gigantic plastic patches growing in the oceans. Early efforts include the work of the Ellen MacArthur Foundation on Circulytics to measure circularity across the entire enterprise. Google is working with the textile industry on the Global Fiber Impact Explorer to help make more sustainable sourcing choices. Smaller vendors like Circularise and Circular Tree are promoting blockchain approaches. Major supply chain software and PLM vendors are likely to find ways to augment the digital twins in their existing tools, including buying best-of-breed sustainability analytics tools vendors.
21. Earth(s) 2.0 start to crawl In 1992, I worked on Biosphere 2, a 2.5-acre analog model of the earth complete with ocean, desert, rainforest, marsh, and farm. We sealed eight people into the closed system for two years. It was a momentous idea, but it didn’t scale well and stopped running manned crews after three years. New digital models introduced last year could scale much better. Nvidia announced plans for Earth 2 (E2). Blackshark, which built the digital world for Microsoft Flight Simulator, attracted $20 million for a more extensible digital earth.
And the European Union has begun work on DestinE (Destination Earth), a federated cloud-based modeling and simulation platform. Similar offerings that help developers build on top of game engines from Unity and Epic along with GIS platforms like ArcGIS could emerge in 2022.
22. Play the news with digital twin games The proliferation of digital earths will usher in a new wave of more realistic and flexible city builders that allow gamers to play the news. Games like SimCity, Tropico, Transport Tycoon, and Cities: Skylines have demonstrated the playability and profitability of the genre. The next generation of these games will let players practice different strategies to rebuild after a tornado, fire, earthquake, or flood. Other versions will allow players to work around supply chain bottlenecks affecting ports or mitigate the economic and health outcomes of pandemics. These games will take advantage of semantic layers and knowledge graphs built into the digital twin platforms that make it easy to describe the behavior and relationship of things, like the effect of flooding on roads and power lines, earthquakes on foundations, and fires on infrastructure. Over time, gameplay will improve both strategies and the digital twins for modeling the real world.
"
|
3953 | 2021 |
"Patching Log4j to version 2.17.1 can probably wait | VentureBeat"
|
"https://venturebeat.com/security/patching-log4j-to-version-2-17-1-can-probably-wait"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Patching Log4j to version 2.17.1 can probably wait Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
A number of security professionals say that the latest vulnerability in Apache Log4j, disclosed on Tuesday, does not pose an increased security risk for the majority of organizations. As a result, for many organizations that have already patched to version 2.17.0 of Log4j, released December 17, it should not be necessary to immediately patch to version 2.17.1, released Tuesday.
While the latest vulnerability “shouldn’t be ignored,” of course, for many organizations it “should be deployed in the future as part of usual patch deployment,” said Ian McShane, field chief technology officer at Arctic Wolf, in comments shared by email with VentureBeat.
Casey Ellis, founder and chief technology officer at Bugcrowd, described it as a “weak sauce vulnerability” and said that its disclosure seems more like a marketing effort for security testing products than an “actual effort to improve security.” Patching woes The disclosure of the latest vulnerability comes as security teams have been dealing with one patch after another since the initial disclosure of a critical remote code execution (RCE) flaw in the widely used Log4j logging software on December 9.
The latest vulnerability appears in the Common Vulnerabilities and Exposures list as CVE-2021-44832 and has a severity rating of “medium” (6.6). It enables “an attacker with permission to modify the logging configuration file [to] construct a malicious configuration,” according to the official description.
However, for teams that have been working nonstop to address Log4j vulnerabilities in recent weeks, it’s important to understand that the risks posed by the latest vulnerability are much lower than those of the previous flaws, and that this may not be a “drop everything and patch” moment, according to security professionals. While it’s possible that an organization has the configuration required to exploit CVE-2021-44832, that configuration would itself indicate a much larger security issue for the organization.
The latest vulnerability is technically an RCE, but it “can only be exploited if the adversary has already gained access through another means,” McShane said. By comparison, the initial RCE vulnerability, known as Log4Shell, is considered trivial to exploit and has been rated as an unusually high-severity flaw (10.0).
A niche issue The latest Log4j vulnerability requires hands-on keyboard access to the device running the component, so that the threat actor can edit the config file to exploit the flaw, McShane said.
“If an attacker has admin access to edit a config file, then you are already in trouble—and they haven’t even used the exploit,” he said. “Sure, it’s a security issue—but it’s niche. And it seems far-fetched that an attacker would leave unnecessary breadcrumbs like changing a config file.” Ultimately, “this 2.17.1 patch is not the critical nature that an RCE tag could lead folks to interpret,” McShane said.
Indeed, Ellis said, the new vulnerability “requires a fairly obscure set of conditions to trigger.” “While it’s important for people to keep an eye out for newly released CVEs for situational awareness, this CVE doesn’t appear to increase the already elevated risk of compromise via Log4j,” he said in a statement shared with VentureBeat.
Overhyping? In a tweet Tuesday, Katie Nickels, director of intelligence at Red Canary, wrote of the new Log4j vulnerability that it’s best to “remember that not all vulnerabilities are created equally.” “Note that an adversary *would have to be able to modify the config* for this to work…meaning they already have access somehow,” Nickels wrote.
Wherever RCE is mentioned in relation to the latest vulnerability, “it needs to be qualified with ‘where an attacker with permission to modify the logging configuration file’ or you are overhyping this vuln,” added Chris Wysopal, cofounder and chief technology officer at Veracode, in a tweet Tuesday. “This is how you ruin relationships with dev teams.” Log4j 2.17 RCE CVE-2021-44832 in a nutshell pic.twitter.com/GPaHcDHlj0 — Florian Roth ⚡️ (@cyb3rops) December 28, 2021 “In the most complicated attack chain ever, the attacker used another vuln to get access to the server, then got CS running, then used CS to edit the config file/restart the service to then remotely exploit the vuln,” tweeted Rob Morgan, founder of Factory Internet. “Yep, totally the best method!” A widespread vulnerability Many enterprise applications and cloud services written in Java are potentially vulnerable to the flaws in Log4j. The open source logging library is believed to be used in some form — either directly or indirectly by leveraging a Java framework — by the majority of large organizations.
Version 2.17.1 of Log4j is the fourth patch for vulnerabilities in the Log4j software since the initial discovery of the RCE vulnerability, but the first three patches have been considered far more essential.
Version 2.17.0 addresses the potential for denial-of-service (DoS) attacks in version 2.16, and the severity of that vulnerability was rated as high.
Version 2.16, in turn, had fixed an issue with the version 2.15 patch for Log4Shell that did not completely address the RCE issue in some configurations. The initial vulnerability could be used to enable remote execution of code by unauthenticated users.
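For teams triaging where they stand, a quick file-level inventory can confirm which log4j-core versions are deployed before deciding whether 2.17.1 is urgent. Below is a minimal sketch, not a production scanner: it matches JAR file names only and would miss shaded or nested JARs.

```python
# Minimal sketch: walk a directory tree and flag log4j-core JARs older
# than 2.17.1 by file name alone. Real scanners also inspect nested and
# shaded JARs; this is a triage aid, not a substitute for one.
import os
import re
import sys

PATCHED = (2, 17, 1)
PATTERN = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")

def scan(root: str) -> None:
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            match = PATTERN.search(name)
            if match:
                version = tuple(int(p) for p in match.groups())
                status = "ok" if version >= PATCHED else "needs patch"
                print(f"{status}: {os.path.join(dirpath, name)}")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")
```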
"
|
3954 | 2021 |
"Microsoft investigating Defender issue with Log4j scanner | VentureBeat"
|
"https://venturebeat.com/security/microsoft-investigating-defender-issue-with-log4j-scanner"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft investigating Defender issue with Log4j scanner Share on Facebook Share on X Share on LinkedIn Microsoft.
Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Microsoft is investigating reports that the Apache Log4j vulnerability scanner in Defender for Endpoint is triggering erroneous alerts.
Update: The company told VentureBeat on Wednesday afternoon it has resolved the issue (see below).
Microsoft released the scanner with the aim of assisting with the identification and remediation of the flaws in Log4j, a popular logging software component. Microsoft disclosed an expansion of the Log4j scanning capabilities in Defender on Monday evening.
False positives Today, reports emerged on Twitter about false positive alerts from the scanner, which reportedly tell admins that “Possible sensor tampering in memory was detected by Microsoft Defender for Endpoint.” Twitter users reported seeing the issue as far back as December 23.
The reports prompted a response on Twitter from Tomer Teller, an executive in Microsoft’s security business. “Thank you for reporting this. The team is looking into that,” Teller said in a tweet.
“The team is analyzing why it triggers the alert (it shouldn’t, of course),” he wrote in a second tweet.
In response to a question from VentureBeat about the reports, a Microsoft spokesperson said in a statement Wednesday afternoon that “we have resolved an issue for some customers who may have experienced a series of false-positive detections.” On Monday, Microsoft announced it has rolled out new capabilities in its Defender for Containers and Microsoft 365 Defender offerings for addressing Log4j vulnerabilities.
The Defender for Containers solution is now enabled to discover container images that are vulnerable to the flaws in Log4j. Container images are scanned automatically for vulnerabilities when they are pushed to an Azure container registry, when pulled from an Azure container registry, and when running on a Kubernetes cluster, Microsoft’s threat intelligence team wrote in an update to its blog post about the Log4j vulnerability.
Defender updates Meanwhile, for Microsoft 365 Defender, the company said it has introduced a consolidated dashboard for managing threats and vulnerabilities related to the Log4j flaws. The dashboard will “help customers identify and remediate files, software, and devices exposed to the Log4j vulnerabilities,” Microsoft’s threat intelligence team tweeted.
These capabilities are supported on Windows and Windows Server, as well as on Linux, Microsoft said. However, for Linux, the capabilities require an update to version 101.52.57 or later of the Microsoft Defender for Endpoint Linux client.
This “dedicated Log4j dashboard” provides a “consolidated view of various findings across vulnerable devices, vulnerable software, and vulnerable files,” the threat intelligence team wrote in the blog post.
Additionally, Microsoft said it has launched a new schema in advanced hunting for Microsoft 365 Defender, “which surfaces file-level findings from the disk and provides the ability to correlate them with additional context in advanced hunting.” Microsoft said it’s working to add support for the capabilities in Microsoft 365 Defender for Apple’s macOS, and said the capabilities for macOS devices “will roll out soon.” Widespread vulnerabilities Many enterprise applications and cloud services written in Java are potentially vulnerable to the flaws in Log4j prior to version 2.17.0. The open source logging library is believed to be used in some form — either directly or indirectly by leveraging a Java framework — by the majority of large organizations.
The latest patch for Log4j, version 2.17.1, was released Tuesday and addresses a newly discovered vulnerability (CVE-2021-44832). It is the fourth patch for flaws in the Log4j software since the initial discovery of a remote code execution (RCE) vulnerability on December 9.
However, a number of security professionals say that the latest vulnerability does not pose an increased security risk for the majority of organizations. As a result, for many organizations that have already patched to version 2.17.0 of Log4j, released December 17, it should not be necessary to immediately patch to version 2.17.1.
Article updated to include a response from Microsoft about the resolution of the false positives issue, along with new details about the version 2.17.1 patch for Log4j.
"
|
3955 | 2021 |
"China-based group used Log4j flaw in attack, CrowdStrike says | VentureBeat"
|
"https://venturebeat.com/security/china-based-group-used-log4j-flaw-in-attack-crowdstrike-says"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages China-based group used Log4j flaw in attack, CrowdStrike says Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Cybersecurity firm CrowdStrike says its threat hunters identified and disrupted an attack by a state-sponsored group based in China, which involved an exploit of the vulnerability in Apache Log4j.
CrowdStrike said today that threat hunters on its Falcon OverWatch team intervened to help protect a “large academic institution,” which wasn’t identified, from a hands-on-keyboard attack that appears to have used a modified Log4j exploit. The China-based group has been dubbed “Aquatic Panda” by CrowdStrike, and has likely been operating since mid-2020 but had previously not been identified publicly, according to the company.
“As OverWatch disrupted the attack before Aquatic Panda could take action on their objectives, their exact intent is unknown,” said Param Singh, vice president of CrowdStrike OverWatch, in an email to VentureBeat. “This adversary, however, is known to use tools to maintain persistence in environments so they can gain access to intellectual property and other industrial trade secrets.” According to CrowdStrike, the group sought to leverage recently disclosed flaws in Apache Log4j, a popular logging software component. Since Log4j is widely used in Java applications, defense and remediation efforts have become a major focus for security teams in recent weeks, following the disclosure of the first in a series of vulnerabilities in the software on December 9. A remote code execution (RCE) vulnerability in Log4j, known as Log4Shell, was initially disclosed on that day.
Additional vulnerabilities have been disclosed in the following weeks, with the latest coming out on Monday along with a new patch in the form of version 2.17.1 of Log4j.
Vulnerable VDI software The exploit attempts by Aquatic Panda targeted vulnerable elements of VMware’s Horizon virtual desktop infrastructure (VDI) software, according to CrowdStrike. VMware is a major user of Java in its products, and has issued a security advisory on numerous products that have been potentially impacted by the Log4j vulnerabilities. VentureBeat has reached out to VMware for comment.
Following an advisory by VMware on December 14, CrowdStrike said that its OverWatch team began hunting for unusual processes related to VMware Horizon and the Apache Tomcat web server service.
That led the OverWatch team to observe Aquatic Panda attackers performing connectivity checks via DNS lookups and executing several Linux commands. In particular, the execution of Linux commands on a Windows host running the Tomcat service stuck out to the threat hunters at OverWatch, CrowdStrike said in a blog post today.
At that point, OverWatch provided alerts to the Falcon platform used by the victim organization and shared details directly with the organization’s security team as well, according to CrowdStrike.
Malicious activities Additional malicious activities by Aquatic Panda observed by OverWatch included reconnaissance to understand privilege levels and system/domain details; an attempt to block an endpoint detection and response (EDR) service; downloading of additional scripts and execution of commands using PowerShell to retrieve malware; retrieval of files that most likely constituted a reverse shell; and attempts at harvesting credentials.
In terms of credential harvesting, the OverWatch team observed Aquatic Panda making repeated attempts through dumping the memory of the Local Security Authority Subsystem Service (LSASS) process using “living-off-the-land” binaries, CrowdStrike said in its blog post.
OverWatch’s efforts to track the group and provide updates to the victim organization enabled quick implementation of the organization’s incident response protocol and containment of the threat actor, which was followed by patching of the vulnerable application, according to CrowdStrike.
The response ultimately prevented the group from achieving their objectives, Singh said.
Intelligence collection CrowdStrike says it has been tracking Aquatic Panda since May 2020. The company had released several reports on the group to subscribers of its Intelligence service prior to this public disclosure, CrowdStrike said.
In the blog post today, CrowdStrike described the group as a “China-based targeted intrusion adversary with a dual mission of intelligence collection and industrial espionage.” Aquatic Panda operations have mainly focused on companies in telecommunications, technology, and government in the past, according to CrowdStrike. The group is a heavy user of the Cobalt Strike remote access tool, and has been observed using a unique Cobalt Strike downloader that has been tracked as “FishMaster,” CrowdStrike said. Aquatic Panda has also used another remote access tool, njRAT, in the past, according to the company.
Many enterprise applications and cloud services written in Java are potentially vulnerable to the flaws in versions of Log4j prior to 2.17.1. The open source logging library is believed to be used in some form — either directly or indirectly by leveraging a Java framework — by the majority of large organizations.
Earlier this month, Microsoft disclosed that it had observed activity from nation-state groups — tied to countries including China — seeking to exploit the Log4j vulnerability. Microsoft, a CrowdStrike rival, also reported observing Log4Shell-related activities by threat actors connected to Iran, North Korea, and Turkey.
Additionally, cyber firm Mandiant has reported observing Log4Shell activity by state-sponsored threat actors tied to China and Iran.
"
|
3956 | 2021 |
"How event engagement scoring supports marketing and sales teams | VentureBeat"
|
"https://venturebeat.com/marketing/how-event-engagement-scoring-supports-marketing-and-sales-teams"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Lab Insights How event engagement scoring supports marketing and sales teams Share on Facebook Share on X Share on LinkedIn This article is part of an Events Insight series paid for by Cvent. See the first two in the series here and here.
Events, both in-person and virtual, are full of data points. Attendees are registering, attending keynote sessions, asking questions, sending messages, networking, visiting booths, talking to staff, and more. But oftentimes, event organizers and marketers struggle to capture and activate all of that event data.
On its own, event data simply tells us what an attendee did at the event, with sometimes overwhelming detail on how they engaged. But the real question is: how can event organizers package the data so sales and marketing teams can act on it with minimal lift? One strategy is to use technology to score your event interactions so you can quickly gauge interest and engagement, and therefore qualify and prioritize your leads. Assigning a weighted score to each event activity can create useful insights. It also allows you to compare current engagement scores with those from previous events, and to compare scores across different activities.
Event organizers can then use this information to pinpoint the activities that are most effective at generating attendee interest. Further, marketing teams can use individual attendee engagement scores to prioritize those for immediate follow up, and to establish where in the marketing funnel contacts should be entered. Here’s how to do it.
The value of event data The data gathered at all stages of the event can provide powerful insights for your sales and marketing team. Your marketing team wants to know which session topics are most likely to resonate with potential customers, and how far along each contact is in the buyer’s journey so they can share the most relevant marketing materials with sales.
The data you gather about attendees before the event gives your sales and marketing teams information about who your attendees are. This demographic and psychographic data is particularly useful for identifying key similarities between qualified and unqualified contacts. In turn, the data gathered post-event can provide useful information about why attendees engaged, what they achieved by attending the event, and what they plan to do next.
During the event, you typically gather engagement data which provides information about what attendees did during your event. Data about which sessions were attended and which booths were visited can be highly useful information for sales and marketing when packaged correctly. It can indicate what attendees are interested in, and where they are in the buyer’s journey.
What is event engagement scoring? Scoring your event data is a process of assigning scores to every type of pre-, during-, and post-event interaction along the attendee journey; the scores then add up to an attendee’s overall qualification score (typically ranging between 0 and 100). The higher an attendee’s score is, the more qualified they are and the more they will be prioritized by sales and marketing.
Depending on what makes the most sense for your business, scores can be assigned both at an individual level and at an account level.
How you attribute scores depends on which activities matter most to your organization. For example, you may want to assign higher scores to people who attend the keynote session about your latest product rollouts versus attending a generic keynote session. Look at all the data you gather for your events and discuss with sales and marketing which interactions they consider to be high or low value.
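As a toy illustration of this weighting idea, here is a minimal sketch; the interaction names and weights are invented for the example and are not Cvent’s.

```python
# Toy sketch of engagement scoring: each interaction type carries a weight,
# and an attendee's score is the capped sum. Names and weights are invented.
WEIGHTS = {
    "registered": 5,
    "attended_keynote": 10,
    "attended_product_session": 20,  # weighted higher, per the advice above
    "visited_booth": 15,
    "asked_question": 10,
}

def engagement_score(interactions, cap=100):
    """Sum weighted interactions, capped so scores stay comparable."""
    raw = sum(WEIGHTS.get(kind, 0) for kind in interactions)
    return min(raw, cap)

attendee = ["registered", "attended_product_session",
            "visited_booth", "visited_booth", "asked_question"]
print(engagement_score(attendee))  # 65
```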
Why is scoring important to sales and marketing? Scoring event data has three clear benefits which aren’t entirely exclusive to the sales and marketing teams — they also positively impact the events team. Benefits include: making it easier for sales to convert leads and fill their pipeline; determining the follow-up paths for registrants; and providing quantifiable evidence of the marketing impact that events have. At Cvent CONNECT Virtual 2020, Cvent showed their product’s engagement scoring system in action, using it to determine the follow-up paths for registrants.
The highest 10% of engagement scores went to a direct sales rep; the next 50% were sent to sales development representatives (SDRs); the remaining 40% were placed into marketing nurture buckets.
Scoring their data allowed them to identify the most appropriate way to follow up and fill their pipeline.
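A sketch of that follow-up routing, again with invented sample data, ranking attendees by score and splitting them 10/50/40:

```python
# Sketch of the 10/50/40 follow-up split described above. Attendee names
# and scores are invented sample data.
def route_leads(scores):
    ranked = sorted(scores, key=scores.get, reverse=True)
    n = len(ranked)
    sales_cut = max(1, n // 10)          # top 10% to direct sales reps
    sdr_cut = sales_cut + n * 50 // 100  # next 50% to SDRs
    routes = {}
    for i, attendee in enumerate(ranked):
        if i < sales_cut:
            routes[attendee] = "direct sales"
        elif i < sdr_cut:
            routes[attendee] = "SDR"
        else:
            routes[attendee] = "marketing nurture"
    return routes

print(route_leads({"ana": 92, "raj": 77, "liz": 63, "mei": 40, "tom": 15}))
```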
What’s the best way to implement scoring? This may sound like a huge mountain to climb if you have nothing like it set up already, but it’s not. There are technology platforms out there that can help you get set up.
Cvent is an event marketing and management platform which prides itself on making it easier for event professionals to report on all key metrics across their event program, including engagement scoring. Customers have access to easy-to-use templates, as well as guidance from a customer service team with expertise in designing the most effective engagement scoring systems.
Events produce a wealth of data about attendees. However, oftentimes this data goes ignored because all the different numbers can get messy. Event scoring can package it in a way that makes it easy for sales and marketing teams to use. Scoring your event interactions is a powerful way to create insights that teams can use to determine how to follow up, fill their pipelines, and convert leads. Using a technology platform that can automate the scoring process — and integrate with your CRM and marketing automation systems — is ultimately the ideal way to implement this strategy.
Want to learn even more about engagement scoring? See how Cvent’s meetings and events team handled this at their hybrid user conference in the Post Event Activation chapter of their new ebook: Keeping Up With the Connectors.
"
|
3957 | 2021 |
"SaaS security automation could learn to heal itself | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/saas-security-automation-could-learn-to-heal-itself"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community SaaS security automation could learn to heal itself Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
This article was contributed by Thomas Donnelly, chief information officer of BetterCloud.
Despite massive cybersecurity investments, SaaS security remains a major enterprise challenge.
One reason is the tremendous growth in SaaS adoption. According to research we recently conducted, organizations use an average of 110 SaaS apps, representing nearly a 7x increase in SaaS app usage since 2017 and almost a 14x increase since 2015. SaaS security automation could help solve these mounting issues.
But it’s not just SaaS growth that overwhelms security. The use of shadow applications continues to plague most organizations: nearly three-quarters of IT pros worry about unsanctioned SaaS applications.
SaaS growth has broadened attack surfaces, which has also created more opportunities for data breaches. Alarmingly, we’ve seen a 20-fold jump in the number of files containing PII created at companies using SaaS applications. Attackers are well aware of this and are getting better and better at finding the back door — whether it is an infrastructure vulnerability or an unintentional misconfiguration.
But continuing to pile onto your security stack to solve the problem can be counterproductive. Enterprises have too many security tools. These often conflict or gradually drift out of configuration, and coverage gaps emerge.
The answer? It’s certainly not having a bigger SOC with more bodies to manually manage user permissions, shared files, configurations, and so on — that’s a recipe for more mistakes. SaaS security needs to find a way to “heal itself” — to detect vulnerabilities, remediate them, and then verify the fixes automatically. This cycle of Detect → Fix → Verify requires automation. It also requires that multiple platforms work together.
SaaS security: Automation and visibility
The big challenge in SaaS security is visibility. Our research shows that companies use twice as many applications as they think they do.
And that’s just the applications. Most security teams cannot handle the day-to-day management of access privileges for thousands of users across hundreds of SaaS applications without overlooking something. And when they do find issues — thousands of exposed files with confidential information, say — they can’t remediate them at scale.
SaaS applications are conceived and built for collaboration and sharing data. That’s critical for employee and business productivity. But sensitive information flows through these apps, and employees can often make mistakes, like leaving files open to the public without knowing it. Bad actors are well aware most employees are not security pros — and they prey on that.
A lack of standardized onboarding and offboarding processes is another open door for hackers. If employees and contractors are not offboarded automatically when they depart, they often retain access to files containing sensitive data.
Once IT solves the visibility challenge and starts on automation, there can be serious progress toward “self-healing security” — which implies security that gets progressively better, instead of degrading constantly.
Self-healing SaaS security: Piecing the puzzle together
But how does self-healing security actually work? It takes a group of platforms working together, with significant automation, to make it fast and accurate. These platforms address visibility across SaaS applications, management of files and users, and automated “red team” testing to find security gaps and prioritize them. They then orchestrate remediation and validate that the fixes are effective. Without commenting on specific products, some industry ecosystems already integrate platforms to at least partially address this cycle of Visualize → Detect → Prioritize fixes → Automated remediation → Validation of “healing.” Depending on the issue, much of the response can be automated. One example: a user publicly shares a file that contains social security numbers. Your security should automatically detect the problem, unshare the file, and notify your security team. Another example that is universally relevant: every company needs automated detection of employee terminations and immediate user deprovisioning across every application and confidential information resource.
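As a rough illustration of that Detect → Fix → Verify loop, here is a minimal Python sketch for one policy: publicly shared files that appear to contain social security numbers. The saas and notifier clients and all of their methods are hypothetical stand-ins for a vendor’s admin API, not any real SDK:

import re

# Naive SSN pattern for illustration; real DLP classifiers are far more robust.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def remediate_public_ssn_files(saas, notifier):
    # `saas` and `notifier` are hypothetical admin-API clients.
    # Detect: enumerate publicly shared files and scan their contents.
    for f in saas.list_files(sharing="public"):
        if not SSN_PATTERN.search(saas.read_text(f.id)):
            continue
        # Fix: flip the file back to private.
        saas.set_sharing(f.id, "private")
        # Verify: re-read the file's state and confirm the fix held.
        if saas.get_file(f.id).sharing == "private":
            notifier.alert_security_team(f"unshared file {f.id}: possible SSNs")
        else:
            notifier.page_on_call(f"remediation failed for file {f.id}")

In practice a job like this would run on an event trigger or a tight schedule, since the whole point is shrinking time-to-remediation.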
Automation is critical for speed because data exfiltration can happen quickly. The mean time to repair (MTTR) for application security breaches is usually estimated at an unacceptable 50 days. Cutting that by 99.99% would be a good start: 50 days is roughly 72,000 minutes, so keeping 0.01% of it leaves about seven minutes.
Myth or reality?
Is self-healing security, or SaaS security automation, a practical reality for today’s IT? The answer is a cautious yes. IT can deploy several components that work together today. Depending on the tech providers and ecosystem you choose to work with, some of the integration and automation is already in place.
Self-healing SaaS security should not require an enormous number of vendors and platforms, nor dozens of point security controls. With careful product selection to acquire and align SaaS management and security platforms, there’s reason to be optimistic about reversing the constant breakdown of security. Self-healing security should offload the most tedious and error-prone aspects of SaaS oversight and free up your security teams to be more strategic and proactive.
Thomas Donnelly is chief information officer of BetterCloud.
"
|
3,958 | 2,021 |
"Why AR, not VR, will be the heart of the metaverse | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/future-augmented-reality-will-inherit-the-earth"
|
This article was contributed by Louis Rosenberg, CEO and chief scientist at Unanimous AI.
My first experience in a virtual world was in 1991 as a PhD student working in a virtual reality lab at NASA. I was using a variety of early VR systems to model interocular distance (i.e., the distance between your eyes) and optimize depth perception in software. Despite being a true believer in the potential of virtual reality, I found the experience somewhat miserable. Not because of the low fidelity, as I knew that would steadily improve, but because it felt confining and claustrophobic to have a scuba mask strapped to my face for any extended period.
Even when I used early 3D glasses (i.e. shuttering glasses for viewing 3D on flat monitors), the sense of confinement didn’t go away. I still had to keep my gaze forward, as if wearing blinders to the real world. There was nothing I wanted more than to take the blinders off and allow the power of virtual reality to be splattered across my real physical surroundings.
This sent me down a path to develop the Virtual Fixtures system for the U.S. Air Force, a platform that enabled users to manually interact with virtual objects that were accurately integrated into their perception of a real environment. This was before phrases like “augmented reality” or “mixed reality” had been coined. But even in those early days, watching users enthusiastically experience the prototype system, I was convinced the future of computing would be a seamless merger of real and virtual content displayed all around us.
Cut to 30 years later, and the phrase “metaverse” has suddenly become all the rage.
At the same time, the hardware for virtual reality is significantly cheaper, smaller, lighter, and has much higher fidelity. And yet, the same problems I experienced three decades ago still exist. Like it or not, wearing a scuba mask is not pleasant for most people, making you feel cut off from your surroundings in a way that’s just not natural.
This is why the metaverse, when broadly adopted, will be an augmented reality environment accessed using see-through lenses. This will hold true even though full virtual reality hardware will offer significantly higher fidelity. The fact is, visual fidelity is not the factor that will govern broad adoption.
Instead, adoption will be driven by which technology offers the most natural experience to our perceptual system. And the most natural way to present digital content to the human perceptual system is by integrating it directly into our physical surroundings.
Of course, a minimum level of fidelity is required, but what’s far more important is perceptual consistency. By this, I mean that all sensory signals (i.e. sight, sound, touch, and motion) feed a single mental model of the world within your brain. With augmented reality, this can be achieved with relatively low visual fidelity, as long as virtual elements are spatially and temporally registered to your surroundings in a convincing way. And because our sense of distance (i.e. depth perception) is relatively coarse, it’s not hard for this to be convincing.
But for virtual reality, providing a unified sensory model of the world is much harder.
This might sound surprising because it’s far easier for VR hardware to provide high-fidelity visuals without lag or distortion. But unless you’re using elaborate and impractical hardware, your body will be sitting or standing still while most virtual experiences involve motion. This inconsistency forces your brain to build and maintain two separate models of your world — one for your real surroundings and one for the virtual world that is presented in your headset.
When I tell people this, they often push back, forgetting that regardless of what’s happening in their headset, their brain still maintains a model of their body sitting on their chair, facing a particular direction in a particular room, with their feet touching the floor (etc.). Because of this perceptual inconsistency, your brain is forced to maintain two mental models. There are ways to reduce the effect, but it’s only when you merge real and virtual worlds into a single consistent experience (i.e. foster a unified mental model) that this truly gets solved.
This is why augmented reality will inherit the earth. It will not only overshadow virtual reality as our primary gateway to the metaverse but will also replace the current ecosystem of phones and desktops as our primary interface to digital content. After all, walking down the street with your neck bent, staring at a phone in your hand, is not the most natural way for the human perceptual system to experience content. Augmented reality is, which is why I firmly believe that within 10 years, AR hardware and software will become dominant, overshadowing phones and desktops in our lives.
This will unleash amazing opportunities for artists and designers, entertainers, and educators, as they are suddenly able to embellish our world in ways that defy constraint (see Metaverse 2030 for examples). Augmented reality will also give us superpowers, enabling each of us to alter our world with the flick of a finger or the blink of an eye. And it will feel deeply real, as long as designers focus on consistent perceptual signals feeding our brains and worry less about absolute fidelity. This principle was such an important revelation to me as I worked on AR and VR back in the early ’90s that I gave it a name: perceptual design.
As for what the future holds, the vision currently portrayed by large platform providers of a metaverse filled with cartoonish avatars is misleading. Yes, virtual worlds for socializing will become increasingly popular, but they will not be the means through which immersive media transforms society. The true metaverse — the one that becomes the central platform of our lives — will be an augmented world. And by 2030 it will be everywhere.
Louis Rosenberg is CEO & Chief Scientist at Unanimous AI.
"
|
3,959 | 2,021 |
"The 5 most popular infrastructure stories of 2021 and what they reveal about 2022 | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/the-5-most-popular-infrastructure-stories-of-2021-and-what-they-reveal-about-2022"
|
In Gartner’s Leadership Vision for 2022: Infrastructure and Operations report, Gartner analysts Nathan Hill and Tim Zimmerman share that in 2022, “infrastructure and operations leaders must deliver adaptive, resilient services that support continuous and rapid business change.” In a similar vein, VentureBeat’s top trending stories on infrastructure from the past year have focused on the resiliency, adaptivity, integrity, interoperability, and flexibility of infrastructure and data. Improving infrastructure across industries is necessary to increase innovation and efficiency globally.
Can open-sourcing agriculture infrastructure optimize crop growing?
Earlier this year, the Linux Foundation unveiled an open source digital infrastructure project aimed at optimizing the agriculture industry, known as the AgStack Foundation.
It’s designed to advance collaboration among key stakeholders throughout the global agriculture space, ranging from private businesses to governments to even academia.
Across the agriculture sector, digital transformation has ushered in connected devices for farmers, along with myriad AI and automated tools that optimize crop growth and mitigate obstacles like labor shortages.
In a May press release, the Linux Foundation outlined what may result from the initiative, which “will build and sustain the global data infrastructure for food and agriculture to help scale digital transformation and address climate change, rural engagement and food and water security.” Introducing digital twins and strengthening infrastructure to improve systems and fight global crises like climate change isn’t unique to the Linux Foundation, however. In November at its GTC conference, Nvidia announced its creation of a digital twin of Earth, also aimed at using the technology to model potential improvements and solutions to apply in the real world.
The matchup of infrastructure improvements and incorporating digital twin technologies is sure to continue as global leaders aim to solve problems that were previously deemed next to impossible.
In the short term, these advancements will help address the loss of productivity, and they will lay the groundwork for larger-scale innovations by making open digital tools and data for revamping infrastructure available to industry professionals.
The vitality of infrastructure-as-a-service
Rescale, a San Francisco-based startup developing a software platform and hardware infrastructure for scientific and engineering simulation, used funding it raised earlier this year to further its research, development, and expansion. Since then, the company has signed new partnerships and catapulted to explosive growth. In November, Rescale was named one of Deloitte’s 2021 Technology Fast 500 fastest-growing companies.
The fast-paced growth should be unsurprising, given the company’s focus on providing infrastructure-as-a-service in what has progressively become a digital-first world for workplaces across industries.
Since the COVID-19 pandemic has shifted several industries and businesses partially — and in many cases, fully — online, infrastructure has proven to be a core component of successful operation.
“Industries like aerospace, jet propulsion, and supersonic flight all require massive computer simulations based on AI and specialized hardware configurations. Historically, the science community has run these workloads on on-premises datacenters that they directly built and maintain,” a Rescale spokesperson told VentureBeat via email last February. “Rescale was founded to bring HPC [high-performance computing] workloads to the cloud to lower costs, accelerate R&D innovation, power faster computer simulations, and allow the science and research community to take advantage of the latest specialized architectures for machine learning and artificial intelligence without massive capital investments in bespoke new datacenters.” Rescale hopes to enable customers to operate jobs on public clouds such as Amazon Web Services, Microsoft Azure, Google Cloud Platform, IBM, and Oracle — and it additionally makes a network available to those customers across eight million servers with more than 80 specialized architectures and resources like Nvidia Tesla P100 GPUs and Intel Skylake processors, among other features.
Expectations for larger industry use cases are high. Rescale’s infrastructure-as-a-service supports approximately 600 simulation applications for aerospace, automotive, oil and gas, life sciences, electronics, academia, and machine learning, including desktop and visualization capabilities that let users interact with simulation data regardless of whether the jobs have finished. This, in turn, allows professionals from nearly every sector to use testing, simulations, modeling, and more to improve their own B2B- or B2C-facing products, services, and tools.
Scaling infrastructure for a cloud-centric world
APIs and microservices have become critical tools to drive innovation and automation for companies, but they also bring management challenges. It’s natural that enterprises are drawn to services that offer greater flexibility, but in doing so, they must also find ways to coordinate with cloud-based services.
Kong is one of several new companies aiming to address the issue. Because many of the conveniences of our digitally connected lives rely on APIs that connect companies with vendors, partners, and customers — like using Amazon’s Alexa to play music over your home speakers from your Spotify account, asking your car’s navigation system to find a route with Google Maps, or ordering food for a night in from DoorDash — it would be next to impossible to do all of this efficiently and at scale without APIs and cloud technologies.
Kong’s flagship product, Kong Konnect, is a connectivity gateway that links APIs to service meshes. Using AI, the platform eases and automates the deployment and management of applications while bolstering security. Among its notable customers are major household names including GE, Nasdaq, and Samsung — with others to surely follow in 2022.
Managing the ever-changing landscape of infrastructure
If there is anything the past years have made clear, it’s that the importance of and reliance on technology, regardless of industry, is increasing, and that hyperconnectivity in our lives, both individually and professionally, is here to stay.
It is against this backdrop that Rocket Software acquired ASG Technologies this year to boost infrastructure management tools.
There is no shortage of competitors for Rocket Software.
Tools and technologies to manage IT infrastructure are ever-present in the enterprise computing sector — spanning from public clouds to the edge. The management of data and the apps used to create that data are becoming more disaggregated with the influx of companies and individuals moving everything online. Expect the industry to demand more sophisticated tools that can efficiently and reliably manage infrastructure in this evolving space.
Nvidia and Bentley team up to streamline U.S. infrastructure
In April, Bentley Systems forged several partnerships that make it easier to share realistic construction simulations with a broader audience. The goal is to help drive the adoption of digital twins, which have increasingly been used for advanced simulations across the construction industry.
Bentley, a leader on the technical side of modeling infrastructure, extended its digital twins platform to support the Nvidia Omniverse ecosystem.
The integrations should make it easier to share realistic models with stakeholders, including decision-makers, engineers, contractors, and citizens affected by new projects.
Bentley’s software — which is part of the State of Minnesota’s plans to save more than $4 million per year using the company’s tools to improve inspection and documentation of 20,000 bridges — is just an example of the efficiency that may come for the country as a whole.
“The integration of the capabilities of the Bentley iTwin platform and Nvidia’s Omniverse [will] enable users to virtually explore massive industrial plants and offshore structures as if they are walking through the infrastructure in real time, for purposes such as wayfinding and safety route optimization. The industry is moving in a positive direction toward more automated and sophisticated tools that improve client outcomes,” according to a press release on the partnership between Bentley and Nvidia.
"
|
3,960 | 2,021 |
"Real-time analytics in 2022: What to expect? | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/real-time-analytics-in-2022-what-to-expect"
|
Today, every company is in the process of becoming a data company. Decision-makers leverage data not just to see how their organization performed in the past few months, but also to generate detailed insights (the what and why) into business processes and operations. These analytics, driven by tools such as Tableau, inform business decisions and strategies and play a crucial role in driving efficiencies, improving financial performance, and identifying new revenue sources.
A few years ago, business data was processed in batches for analytics. Now, real-time analytics has arrived, in which organizational data is processed and queried as soon as it is created. In some cases, action is taken not instantly but a few seconds or minutes after new data arrives. Both practices are increasingly being adopted by enterprises, especially in sectors that need to analyze data immediately to deliver products or services, understand trends, and take on rivals. After all, an ecommerce company needs instant information about when and why its payment gateway went down to protect customer experience and retention. With historic data analyzed in batches, the detection and resolution of such an issue could easily be delayed.
Here are some trends that will shape and drive the adoption of real-time analytics further in 2022.
Surge in data volumes, velocity
Continuing the trend of recent years, data volumes and velocity at the organizational level will continue their upward trajectory, surging more than ever before. This, combined with the convergence of data lakes and warehouses and the need to make quick decisions, is expected to drive improvements in the response time of real-time analytics.
Systems will be able to ingest massive amounts of incoming raw data – whether it peaks for a few hours every day or for a few weeks every year – without latency, and faster analytical queries are likely to become possible, ensuring instant reactions to events and maximum business value. On top of that, serverless real-time analytics platforms are also expected to go mainstream, allowing organizations to build and operate data-centric applications with on-demand scaling to handle sudden influxes of data from a particular source.
“Overall, 2022 will be a challenging year for keeping up with growing data volumes and performance expectations in data analytics,” Chris Gladwin, CEO and cofounder of Ocient, told VentureBeat. “We will see more organizations looking for continuous analytics and higher resolution query results on hyperscale data sets (trillions of records) to gain deeper, more comprehensive insights from an ever-growing volume and diversity of data sources.”
Rise in developer demand
As the lines between real-time analytics (which provides instant insights to humans to make decisions) and real-time analytical applications (which automatically take decisions as events happen) continue to blur on the back of the democratization of real-time data, developers are expected to join technical decision-makers and analysts as the next big adopters of real-time analytics.
According to a report from Rockset, which offers a real-time analytics database, real-time data analytics will see a sharp rise in demand from developers, who will use the technology to build data-driven apps capable of personalizing content and customer services, as well as to A/B test quickly, detect fraud, and serve other intelligent applications like automating operational processes.
“Every other business is now feeling the pressure to take advantage of real-time data to provide instant, personalized customer service, automate operational decision-making, or feed ML models with the freshest data. Businesses that provide their developers [with] unfettered access to real-time data in 2022, without requiring them to be data engineering heroes, will leap ahead of laggards and reap the benefits,” Dhruba Borthakur, cofounder and CTO of Rockset, said.
Off-the-shelf real-time analytics capabilities
In 2022 and beyond, real-time analytics based on off-the-shelf capabilities is expected to become more mainstream and easier to deploy and customize, Donald Farmer, the principal of Treehive Strategy, told VentureBeat. This will be a departure from the current practice, in which code is written in-house or sourced from highly specialized vendors, and it will drive the adoption of real-time analytics in retail, healthcare, and the public sector.
So far, real-time analytics based on off-the-shelf capabilities has mostly been used in sectors such as transport (for customer support) and manufacturing (for monitoring production), Farmer noted. Professionally, Farmer has worked on several of the top data and analytics technologies in the market. Additionally, he previously led design and innovation teams at Microsoft and Qlik.
Business benefits across sectors
Business benefits of real-time analytics, regardless of sector, will also continue to drive adoption in 2022. As per IDC’s Future Enterprise Resiliency and Spending survey, the ability to make real-time decisions will make enterprises more nimble, boost their customer loyalty and outreach, and offer a significant advantage over the competition. Plus, continuous data analytics, which alerts users as events happen, will help improve supply chains and reduce costs, bringing about fast ROI on streaming data pipeline investments.
As per Rockset, one oil and gas company was able to increase its profit margins by 12% to 15% after adopting real-time analytics.
Meike Escherich, associate research director for European Future of Work at IDC, notes that IDC has already recorded a significant uptake in the implementation of real-time analytics, with one in three European companies either already using it to measure team performance or planning to do so in the next 18 months. Similarly, Gartner predicts that more than half of major new business systems will incorporate continuous data intelligence in 2022.
"
|
3,961 | 2,021 |
"How open source is powering data sovereignty and digital autonomy | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/how-open-source-is-powering-data-sovereignty-and-digital-autonomy"
|
As privacy regulations such as Europe’s GDPR and California’s CCPA have come into force, and with countless others in the pipeline, companies across the industrial and technological spectrum have had to elevate their data management efforts to a whole new level. Jurisdictions globally are enforcing their own data-residency regulations too, with the likes of China and Russia saying to companies: “If you want to do business with us, keep your data here.”
Cumulatively, these various regulations have thrust the closely aligned issues of data residency, localization, and sovereignty to the forefront of companies’ consciousness. They can no longer play fast and loose with data — they must pay close attention to where they store data and the jurisdiction it falls under. This is often used to bolster arguments in the burgeoning multi- and hybrid-cloud movement, as it not only helps companies avoid vendor lock-in, but also gives them flexibility in terms of where their data and applications are hosted.
Control, ultimately, is the name of the game — both at a nation-state and at an individual company level, where digital autonomy is paramount.
“The adoption of data localization laws has been increasing, driven by the fear that a nation’s sovereignty will be threatened by their inability to exert full control over data stored outside their borders,” Russell Christopher, director of product strategy at data analytics company Starburst, told VentureBeat. “In an environment of shifting privacy laws, it’s increasingly difficult for businesses to analyze all critical data quickly, and at scale, while ensuring compliance.” Permeating all of this is trusty ol’ open source software.
Kubernetes, for example, is one of the most popular open source projects out there, serving as a common operating environment that allows companies to embrace the hybrid cloud, powering their applications across all public and private infrastructure. And there are countless companies that bake data sovereignty into the core of their product, even if it’s not immediately obvious that’s what it’s there for — with open source taking center stage.
Cross-cloud conundrum
Starburst is the venture-backed business behind the open source Presto-based SQL query engine Trino.
The company recently announced a new fully managed, cross-cloud analytics product that allows companies to query data hosted across the “big three’s” infrastructure — without moving the data from its original location. For some companies, the core benefit here is simply circumventing the data silo problem, as they don’t have to pool the data in a single cloud or data warehouse. But even in situations where a company is using a single cloud provider, they will often have to store data in different “regions” to satisfy data residency requirements — cross-cloud analytics enables them to leave the data where it is, with only aggregated insights transferred out of that location.
“The data warehouse model is predicated on consolidating data assets to create a ‘single source of truth,’ but the regulatory realities of today are proving this to not only be a lie, but legally impossible,” Christopher said. “Adopting a multi-cloud strategy is a significant and necessary initiative for modern computing, but it’s not a final solution. The challenge of accessing and analyzing data across clouds and regions without moving that data to a central location is forcing a paradigm shift in the way we approach modern data management.” Global biotech giant Sophia Genetics uses Starburst to query data from several regions across the world, while adhering to all the local data sovereignty and compliance requirements.
“Local users in the region can query atomic level data, and those outside of the region can only query aggregated data,” Christopher said.
Above: An illustration of how Sophia Genetics leverages Stargate to query data while adhering to local data residency requirements.
But how much of this solution is reliant on “open source,” exactly? Well, given that Starburst is built on such a foundational open source project as Trino, it is pretty integral, even if the commercial Starburst product makes the data sovereignty process easier.
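To make the pattern concrete, here is a minimal sketch using the open source trino-python-client. The catalog, schema, and table names are hypothetical, and whether only aggregates leave each region depends on how the deployment pushes work down to the local catalogs:

import trino  # open source trino-python-client

conn = trino.dbapi.connect(host="starburst.example.com", port=8080, user="analyst")
cur = conn.cursor()

# aws_pg and gcp_bq are hypothetical catalogs backed by connectors in
# different clouds; Trino addresses tables as catalog.schema.table.
cur.execute("""
    SELECT r.region, COUNT(*) AS orders, SUM(o.total) AS revenue
    FROM aws_pg.sales.orders AS o
    JOIN gcp_bq.sales.regions AS r ON o.region_id = r.id
    GROUP BY r.region
""")
for region, orders, revenue in cur.fetchall():
    print(region, orders, revenue)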
“The ability to ‘plug in’ third-party integrations is baked into Trino — Trino also has some native capacity to ‘push down’ portions of a query to local compute, like a database,” Christopher said. “Starburst has taken advantage of this foundational potential to build technology which really does the heavy lifting.” One of the major difficulties in adhering to a strict data sovereignty regimen is its sheer complexity. With systems and data spread across departments and regions, some of them attained through acquisitions and still existing in their own siloed worlds, it’s enough of a challenge in itself for global companies to unlock big data insights — add the thorny issue of regional data residency regulations to the mix and things get just that little bit harder. Companies are dealing with this conundrum in different ways.
“While not every customer I speak to is far enough along on the maturity model (or big enough) to need to ‘slay this dragon,’ there is awareness across the board that data sovereignty is something that they’ll need to address at some point in the near future,” Christopher explained. “Some are just ignoring data sovereignty in the hopes that it will go away because of how big of an undertaking it is to solve, while others are taking the stance of ‘don’t share data, problem solved.'” And this is why some companies might not be utilizing their data to its full potential — it’s just easier to keep data where it is, rather than risk regulatory wrath.
“Organizations need to share data or potentially find themselves at a competitive disadvantage,” Christopher added. “Now is the time for dealing with data sovereignty issues, and there are tools out there to help with this undertaking.” Data sovereignty and decentralized team chats Above: Element / Matrix cofounders Matthew Hodgson (left) and Amandine Le Pape (right) Element is the company behind an end-to-end encrypted team messaging platform powered by Matrix , a decentralized open standards-based communication protocol that promises not to lock people into a closed ecosystem. Similar to how people can send emails to each other across providers and clients (e.g. Gmail to Yahoo), in a Matrix world, WhatsApp users can message people on Slack or Skype. It’s all about interoperability.
For context, thousands of separate organizations permeate Germany’s health care system, including hospitals, clinics, local doctors, and insurance companies.
News emerged earlier this year that Gematik, the national agency responsible for digitizing Germany’s health care system, was switching to Matrix following a series of separate digital transformation efforts that left the various health care bodies unable to communicate effectively with each other. There were also questions over the security and privacy of the systems they had chosen to transmit confidential medical data.
By switching to Matrix, the different bodies involved didn’t necessarily have to use the exact same apps; given they were all built on a single common standard, they had the flexibility to create systems to suit their own unique use cases while still being able to connect to each other.
“There are organizations where no one gets fired for buying Microsoft Teams or Slack, and there are organizations where people get fired for a lack of data security,” Matthew Hodgson, Element CEO and technical cofounder of Matrix, told VentureBeat. “We serve the latter group — organizations that need the best security available.”
Above: Element, an instant messaging app built on Matrix.
All this feeds back to the core concepts around data sovereignty, digital autonomy, and control — not putting all your eggs in one digital basket (or multiple digital baskets that don’t play nice with each other).
“Data sovereignty is one of the main reasons why people and organizations choose Element, particularly in the public sector,” Hodgson explained. “A vendor-owned and managed SaaS model run from the U.S. — like Microsoft Teams or Slack — simply doesn’t work for the majority of governments, even if the datacenter happens to be local.” Of course, even without an open source and open standards ethos, companies can go some way toward achieving data sovereignty through proprietary software. They can run their own email system through Microsoft Exchange, for example, which takes their data out of the cloud — but the company is still being locked into Microsoft’s gargantuan ecosystem. This type of approach to regaining control and digital autonomy “significantly undermines sovereignty,” according to Hodgson.
“Instead, open source solutions embrace open standards and empower the user to have full ownership over their data — the idea of vendor-locking users into proprietary data formats is a contradiction in terms for an open source app, where vendor-specific IP is considered toxic,” Hodgson said. “Open source solutions are leading the charge in empowering data sovereignty, and at last empowering the user or admin to have total control and ownership over their data.” Data visibility Above: Elastic’s IPO day in October 2018 Elastic is best known for Elasticsearch, a database search engine companies use for any application that relies on the access and retrieval of data or documents. While it was formerly available under an Apache 2.0 open source license, the company transitioned to a duo of proprietary “source available” licenses earlier this year following an ongoing spat with Amazon Web Services (AWS). Elastic still adheres to most of the core principles of open source through a model it refers to as “free and open.” “Many of Elastic’s customers are multinational, which necessitates that they have total control over their data to abide by the privacy and security laws of the countries in which they operate,” Elastic’s chief product officer Ash Kulkarni told VentureBeat. “Not surprisingly, we are seeing data sovereignty coming up in more customer conversations.” Elastic operates what is known as a “single-tenancy” architecture — this means that each customer has its own database and instance of the software, affording them full control over the entire environment. Crucially, data is kept completely separate from other customers’ data. This is in contrast to a multi-tenancy architecture, which means that a single instance of the software and underlying infrastructure is used across multiple customers. While the data is kept separate, it still exists in the same environment in a multi-tenancy system, meaning individual companies have less customization options and multiple users — from different organizations — have access to the same database.
There are pros and cons to both architectures, but single-tenancy is ultimately the preferred option in terms of retaining full data control.
“For Elastic, data sovereignty means giving customers full jurisdictional control over their data and the infrastructure systems that the data flows through — an important aspect of that control is how the data is secured internally,” Kulkarni said. “Data has gravity, and our customers want foundational architecture that gives them country-specific controls to manage the data in the country where the data resides while allowing for analytics across all their data globally.”
Above: Elasticsearch in action.
Elastic recently rolled out cross-cluster search (CCS) on Elastic’s Cloud Enterprise plan, enabling more companies to search their data across all their datacenters — so a business that runs an AWS instance in North Virginia, a Google Cloud instance in London, and a Microsoft Azure instance in Cape Town can search all its data in a single pane without moving the data.
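As a hedged sketch of what cross-cluster search looks like from code, assuming an 8.x elasticsearch-py client and remote clusters already registered on the coordinating cluster under the aliases used below (all hosts and names here are hypothetical):

from elasticsearch import Elasticsearch

es = Elasticsearch("https://coordinator.example.com:9200")  # hypothetical host

# "cluster_alias:index" is Elasticsearch's cross-cluster search syntax;
# one request fans out to the remotes and only results cross regions.
resp = es.search(
    index="us_east:transactions,eu_west:transactions,af_south:transactions",
    query={"match": {"status": "failed"}},
    size=20,
)
for hit in resp["hits"]["hits"]:
    print(hit["_index"], hit["_id"])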
“This enables customers to maintain compliance with the privacy and security laws in the countries they operate in, while simultaneously helping them break down data silos and derive greater insights from their data,” Kulkarni explained.
Global hedge fund Citadel uses Elastic for exactly that, allowing it to grow globally while “meeting data sovereignty requirements” where its customers reside. This is particularly important in highly regulated markets such as finance.
“They chose to work with Elastic to help scale their business, and ensure that any data being processed in specific countries was being run on infrastructure physically located in the country,” Kulkarni said.
But where does open source (or “free and open”) come into all this — wouldn’t it be possible to offer data sovereignty with a fully proprietary closed stack? Irrespective of the specific license that Elastic now issues its software under, the visibility it affords through its “source available” approach is the important factor.
“One of the benefits of open code is that the entire lifecycle of an organization’s data is open for inspection or compliance auditing by any legal party enforcing a law,” Kulkarni said. “Openness provides an additional level of transparency for a government agency to inspect and verify that the organization is compliant. You’re not getting the same level of transparency ‘goodness’ with closed-source software as you do with open code. Organizations have to trust that what their vendor says they are doing with their data is true. In contrast, open code allows an organization to verify that those compliance claims are accurate.”
The open source factor
A quick peek across the technological landscape reveals a slew of commercial open source companies going to market with “data sovereignty” as one of their core selling points.
Cal.com, an open source alternative to meeting-scheduling platform Calendly, launched back in September and just last week raised a $7.4 million seed round of funding.
“Transparency and control of companies’ data is what can make or break their choice in which software they use,” cofounder and co-CEO Bailey Pumfleet told VentureBeat. “We’ve spoken with many companies who simply cannot use any other solution out there — due to the inability to self-host, a lack of transparency, and other data protection related characteristics which Cal.com has. This is absolutely vital for industries like health care and government, but an increasing number of non-regulated industries are [also] looking at how their software products treat and use their data.”
Above: Cal.com in action.
The past year offered countless examples that highlight not only the growing importance of data sovereignty and digital autonomy, but also the role that open source plays in it.
Back in September, Google announced a partnership in Germany with Deutsche Telekom’s IT services and consulting subsidiary T-Systems, with a view toward building a “sovereign cloud” for German organizations. While all the main cloud providers already offer some data residency controls as part of their regional datacenters, they don’t go far enough for industries and regulatory frameworks that require tighter control over how data is handled, particularly as it relates to personally identifiable information (PII).
And so T-Systems will “manage sovereignty controls and measures” such as encryption and identity management of the Google Cloud Platform for German businesses that need it. It will also oversee other integral parts of the Google Cloud infrastructure, including supervising physical or virtual access to sensitive infrastructure.
The problem that this partnership ultimately seeks to address is one that the open source world has long set out to solve — it’s about bringing data control and oversight closer to home. As part of Google’s partnership with T-Systems, the duo have made specific provisions for “openness and transparency,” including collaborating on open source technologies, supporting integrations with existing IT environments, and serving up access to Google’s “open source expertise that provides freedom of choice and prevents lock-in,” Google Cloud CEO Thomas Kurian said at the time.
Elsewhere, a report from the European Commission (EC) shed light on the impact open source software has had — and could have — on the European Union (EU) economy. Notably, it observed that open source helps avoid vendor lock-in and increases an organization’s digital autonomy — or “technological independence,” as the report called it.
Meanwhile, SaaS-powered password management giant 1Password launched a survey this year to determine whether its (potential) customers would like a self-hosted version of its service.
“Currently, we believe that a 1Password membership is the best way to store, sync, and manage your passwords and other important information,” 1Password CTO Pedro Canahuati said in an interview with VentureBeat this year. “However, we’re constantly looking into new avenues to make sure we always offer what’s best for our customers. Right now, we’re in the exploratory phase of investigating a self-hosted 1Password. We’ll assess the demand for this as we gather results.” While it’s not clear to what degree — if any — such a solution would embrace open source, it further highlights the growing push toward giving companies and individuals more control over their data. And open source will play a fundamental part in that.
“More and more of our personal data is moving into services that are hosted online,” Kulkarni said. “For the vast majority of people, their digital and physical worlds are indistinguishable — from ecommerce and social media to entertainment and communication. It’s all happening online. This mass migration of data is driving more scrutiny from regulators about what is happening to the data, who sees it, and who has access to it.”
"
|
3,962 | 2,021 |
"The top 5 enterprise analytics stories of 2021 (and a peek into 2022) | VentureBeat"
|
"https://venturebeat.com/business/the-top-5-enterprise-analytics-stories-of-2021-and-a-peek-into-2022"
|
In 2021, everything from databases to baseball, no-code AI for data scientists, graph analytics, and even events got an analytics makeover.
Heading into 2022, Chris Howard, the chief of research at Gartner, and his team wrote in their Leadership Vision for 2022 report on the Top 3 Strategic Priorities for Data and Analytics Leaders that “progressive data and analytics leaders are shifting the conversation away from tools and technology and toward decision-making as a business competency. This evolution will take time to achieve, but data and analytics leaders are in the best position to help orchestrate and lead this change.” In addition, Gartner’s report predicts that adaptive governance will become more prominent in 2022: “Traditional one-size-fits-all approaches to data and analytics governance cannot deliver the value, scale, and speed that digital business demands. Adaptive governance enables data and analytics leaders to flexibly select different governance styles for differing business scenarios.” The enterprise analytics sector this year foreshadowed much of what’s to come. Here’s a look back at the top stories in this sector from 2021, and where these themes may carry the industry next.
Databases get real-time analytics capabilities and integrations
Rockset integrated its analytics database with both MySQL and PostgreSQL relational databases to enable organizations to run queries against structured data in real time.
Rather than having to shift data into a cloud data warehouse to run analytics, organizations can now offload analytics processing to a Rockset database running on the same platform.
The company’s approach is designed to analyze structured relational data, as well as semi-structured, geographical, and time-series data in real time. Complex analytical queries can also be scaled to include JOINs with other databases, data lakes, or event streams. In addition to integrations with open source relational databases, the company also provides connectors to MongoDB, DynamoDB, Kafka, Kinesis, Amazon Web Services, and Google Cloud Platform, among others.
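To make that concrete, here is a minimal sketch of what a real-time SQL query against Rockset's REST API could look like from Python. The regional endpoint, API key, and collection names are illustrative assumptions, not details taken from the article.

import requests

ROCKSET_URL = "https://api.rs2.usw2.rockset.com/v1/orgs/self/queries"  # assumed regional endpoint
API_KEY = "YOUR_API_KEY"  # placeholder

# Hypothetical example: join a collection ingested from MySQL with a
# real-time event stream, something a batch warehouse job would struggle with.
sql = """
SELECT o.customer_id, COUNT(e.event_id) AS recent_events
FROM commons.orders o
JOIN commons.click_events e ON o.customer_id = e.customer_id
WHERE e._event_time > CURRENT_TIMESTAMP() - INTERVAL 1 HOUR
GROUP BY o.customer_id
ORDER BY recent_events DESC
LIMIT 10
"""

resp = requests.post(
    ROCKSET_URL,
    headers={"Authorization": f"ApiKey {API_KEY}"},
    json={"sql": {"query": sql}},
)
resp.raise_for_status()
for row in resp.json().get("results", []):
    print(row)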
What stood out most about this advancement, though, isn’t specific to Rockset. “As the world moves from batch to real-time analytics,” the company stated in its press release , “and from analysts running manual queries to applications running programmatic queries, traditional data warehouses are falling short.” This trend in real-time analytics is further propelled by the swift move several companies made to a virtual and all-online infrastructure due to the pandemic. Real-time analytics in the virtual space will allow companies to more accurately index, strategize, and create new applications using their data.
Popular baseball analytics platform moves to the cloud
It’s well-known to baseball fans that the data now made available by the MLB goes beyond the traditional hits, runs, and errors — the sport’s data and statistics are becoming as complex as its ever-growing list of new time limits and league rules.
Fans now regularly consult a raft of online sites that use this data to analyze almost every aspect of baseball: top pitching prospects, players who hit the most consistently in a particular ballpark during a specific time of day, and so on.
One of those sites is FanGraphs, which has migrated the SQL relational database platform it relies on to process and analyze structured data to a curated instance of the open source MariaDB database, deployed on the Google Cloud Platform.
FanGraphs uses the data it collects to enable its editorial teams to deliver articles and podcasts that project, for example, playoff odds for a team based on the results of the SQL queries the company crafts. These insights can assist a baseball fan participating in a fantasy league, someone who wants to place a more informed wager on a game at a venue where gambling is legalized, or a game developer creating the latest MLB The Show video game. All of the above require high volumes of data.
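As a rough illustration of the kind of workload involved, the sketch below runs an aggregate query against a MariaDB instance from Python. The host, credentials, and baseball schema are hypothetical, not FanGraphs' actual database.

import mysql.connector  # the MySQL wire protocol is also spoken by MariaDB

conn = mysql.connector.connect(
    host="skysql.example.com",  # placeholder host
    user="analyst",
    password="***",
    database="baseball",
)
cur = conn.cursor()
# Hypothetical schema: one row per team per game, with the run differential.
cur.execute(
    """
    SELECT team, AVG(run_differential) AS avg_diff
    FROM game_results
    WHERE season = 2021
    GROUP BY team
    ORDER BY avg_diff DESC
    """
)
for team, avg_diff in cur.fetchall():
    print(team, round(avg_diff, 2))
cur.close()
conn.close()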
One of the things that attracted FanGraphs to MariaDB is the level of performance that it could attain using a database-as-a-service (DBaaS) platform.
“On top of [MariaDB’s] SkySQL’s ease and performance, the exceptional service from our SkyDBAs has enabled us to completely offload our database responsibilities. That help goes far beyond day-to-day maintenance, backup, and disaster recovery. We find our SkyDBA looks at things we wouldn’t necessarily keep an eye on to secure and optimize our operations,” David Appelman, founder and CEO of FanGraphs, stated in a press release.
The explosion of data calls for an explosion of efficiency to manage it, and it’s a trend the industry can expect to see more of heading into 2022.
Data scientists will soon get a hand from no-code analytics
SparkBeyond, a company that helps analysts use AI to generate new answers to business problems without requiring any code, released SparkBeyond Discovery.
The company aims to automate the job of a data scientist. Typically, a data scientist looking to solve a problem may be able to generate and test 10 or more hypotheses a day. With SparkBeyond’s machine, millions of hypotheses can be generated per minute from the data it leverages from the open web and a client’s internal data, the company says. Additionally, SparkBeyond explains its findings in natural language, so a no-code analyst can understand it.
The company says its auto-generation of predictive models for analysts puts it in a unique position in the marketplace of AI services. Most AI tools aim to help the data scientist with the modeling and testing process once the data scientist has already come up with a hypothesis to test.
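To illustrate the underlying idea (not SparkBeyond's proprietary engine), here is a toy sketch that mass-generates candidate "hypotheses" as derived features and ranks them by predictive signal, using scikit-learn on synthetic data.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# "Hypotheses": simple transformations and pairwise interactions of raw columns.
candidates = {}
for i in range(X.shape[1]):
    candidates[f"x{i}"] = X[:, i]
    candidates[f"x{i}^2"] = X[:, i] ** 2
    for j in range(i + 1, X.shape[1]):
        candidates[f"x{i}*x{j}"] = X[:, i] * X[:, j]

names = list(candidates)
matrix = np.column_stack([candidates[n] for n in names])
scores = mutual_info_classif(matrix, y, random_state=0)

# Surface the strongest candidates in something close to natural language.
for name, score in sorted(zip(names, scores), key=lambda t: -t[1])[:5]:
    print(f"{name} looks predictive of the outcome (mutual information {score:.3f})")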
The significance here essentially comes down to “time is money.” For example, the more time a data scientist can save solving problems and testing hypotheses, the more money a company saves in turn. “Analytics and data science teams can now leverage AI to uncover hidden insights in complex data, and build predictive models with no coding required [while leveraging the] AI-driven platform to make better business decisions, faster,” SparkBeyond stated in an October press release.
A service that can explore such a vast number of hypotheses per minute, draw on internal and external data sources to reveal previously unrecognized drivers of business and scenario outcomes, and explain its findings in natural language to analysts who may not code at all is quite the breakthrough in the analytics space.
Notable companies using SparkBeyond Discovery include McKinsey, Baker McKenzie, Hitachi, PepsiCo, Santander, and others.
Life is increasingly split between virtual and in-person – analytics must follow
Hubilo, a platform that helps businesses of all sizes host virtual and hybrid events and gain access to real-time data and analytics, raised $23.5 million in its series A funding round earlier this year.
Investments in companies like Hubilo that integrate tools for virtual and in-person tasks, events, meetings, and activities will likely continue into 2022 as the world enters year two of a global pandemic. Digital conferences, meetups, and events can be scaled more easily and with fewer resources than their brick-and-mortar counterparts, and the shift to hybrid and virtual platforms generates a significant amount of data that in-person events otherwise would not, which can prove valuable to companies for tracking and correlating business objectives.
Hubilo promises its customers enhanced data and measurement capabilities. Event organizers using Hubilo’s platform can access engagement data on visitors, including the number of logins and new users versus active users. Additionally, event sponsors can determine whether a visitor is likely to purchase from them based on engagement with their virtual booth. Data includes the number of business cards received, profile views, file downloads, and more.
The platform can also track visitors’ activities, such as attending a booth or participating in a video demonstration, and then recommend similar activities. From a business perspective, a sponsor or sales personnel can use these features to access potential prospects through a feature Hubilo calls “potential leads.” Its integration capabilities are also key for companies now operating in a hybrid or fully remote capacity. Hubilo features a one-click approach for common “go-to-market platforms including HubSpot, Salesforce, and Marketo, enabling companies to demonstrate ROI through event data integrated with their existing workflows,” its press release stated.
Integrating analytics tools with CRM and sales platforms is a vital trend that will continue to evolve as the world weighs not how to get things back in person, but whether it should, and what it can gain from hybrid approaches and tools instead.
Graph database gets a revamp
What do the Panama Papers researchers, NASA engineers, and Fortune 500 leaders have in common? They all rely heavily on graph databases.
Neo4j, a graph database company that claims to have popularized the term graph database and aims to be a leader in the graph database industry, has shown signs through its growth this year that graphs are becoming a foundational part of the technology stack.
Across industry sectors, graph databases serve a variety of use cases that are both operational and analytical. A key advantage they have over other databases is the ability to intuitively and rapidly build data models and queries for highly interconnected domains. In an increasingly interconnected world, that is proving to be of value for companies.
What was once an early-adopter game has snowballed into the mainstream, and it’s still growing. “Graph Relates Everything” is how Gartner put it when including graphs in its top 10 data and analytics technology trends for 2021.
At this year’s Gartner Data & Analytics Summit 2021, graphs were, unsurprisingly, front and center.
Interest from tech and data decision-makers is continuously expanding as graph data takes on a role in master data management, tracking laundered money, connecting Facebook friends, and powering the search page ranker in a dominant search engine.
With the increase in the volume of data that companies are storing and processing in an increasingly digital world, tools that provide flexibility for interpreting, modeling, and using data will be key, and their usage is sure to increase going forward.
According to Neo4j, that is precisely what it’s capable of providing to its users.
“A graph database stores nodes and relationships instead of tables or documents. Data is stored just like you might sketch ideas on a whiteboard. Your data is stored without restricting it to a pre-defined model, allowing a very flexible way of thinking about and using it,” the press release reads.
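A minimal sketch of that whiteboard-style model, using the official Neo4j Python driver with placeholder connection details and invented node names:

from neo4j import GraphDatabase

# Placeholder URI and credentials for a local instance.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # No predefined tables: create two nodes and the relationship between them.
    session.run(
        "MERGE (a:Person {name: $a}) "
        "MERGE (b:Person {name: $b}) "
        "MERGE (a)-[:KNOWS]->(b)",
        a="Ada", b="Grace",
    )
    # Traverse relationships directly, much like following arrows on a whiteboard.
    result = session.run(
        "MATCH (a:Person)-[:KNOWS]->(b:Person) RETURN a.name AS a, b.name AS b"
    )
    for record in result:
        print(record["a"], "knows", record["b"])

driver.close()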
So what’s ahead for 2022? The analytics landscape will become increasingly complex in its capabilities, while simultaneously becoming even more user-friendly for researchers, developers, data scientists, and analytics professionals alike.
"
|
3,963 | 2,021 |
"Rapidly Emerging Digital Health Company DiRx Raises $10 Million in Series A Funding | VentureBeat"
|
"https://venturebeat.com/business/rapidly-emerging-digital-health-company-dirx-raises-10-million-in-series-a-funding"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release Rapidly Emerging Digital Health Company DiRx Raises $10 Million in Series A Funding Share on Facebook Share on X Share on LinkedIn EAST BRUNSWICK, N.J.–(BUSINESS WIRE)–December 28, 2021– DiRx, a digital health company that delivers significant savings on commonly prescribed, FDA-approved generic medicines through its online pharmacy platform without the need for insurance, today announced that it has successfully raised a total of $10 million in Series A funding, updating its previous announcement of having initially raised $5.75 million in September 2021 during the first phase of the round.
This funding will continue to power the expansion and national market reach of its innovative pharmacy model to more Americans. Similar to its $5 million Seed Round raised last year, this Series A round was also a private placement with healthcare-specialized investors, including new as well as returning participants.
With over 40 million uninsured and 80 million underinsured Americans struggling with unaffordable out-of-pocket medicine costs, the DiRx model reduces the number of supply chain layers and offers low priced options for over 1,000 FDA-approved prescription generic medications, without requiring health insurance or any discount cards or coupons.
With medicine priced as low as $3 a month, DiRx offers a 12-month price guarantee – an industry first, protecting consumers from unexpected price fluctuations that are now part of the industry norm.
In addition to its direct-to-consumer (DTC) digital platform, DiRx is also gaining significant traction with institutional (B2B) partnerships that would offer similar pharmacy benefit cost advantages to larger groups within the health ecosystem such as self-insured employers, third party administrators, benefit managers and brokers.
“We’re so glad that our team’s successful launch of a high quality and meaningful digital platform supported by a premium customer experience has powered continued investor confidence in our strategic direction and execution capabilities. We’re encouraged by our investors’ clear understanding of the economic pain points in the current system and our ability to enhance medicine access and affordability for everyday Americans,” said Satish Srinivasan, Founder and CEO of DiRx.
“We’re delighted that Americans in over 40 states have already started ordering their prescription medicines from us within just our first few weeks of launch and, in keeping with our ‘medicine for all’ focus, we will continue to evolve our platform to reach more people, as we champion everyone’s right to affordable medicine,” added Simone Grapini-Goodman, Chief Marketing Officer.
About DiRx DiRx is an online pharmacy that delivers savings on commonly prescribed, FDA-approved generic medicines without the need for insurance. Founded by industry experts, DiRx draws a straight line from supply to demand to streamline the path between the manufacturer and the consumer. This lowers costs and makes more medicine accessible to more people. DiRx offers a viable model for businesses and community organizations while simplifying how consumers fill, pay for and receive maintenance medicine. To learn more, visit www.DiRxHealth.com.
For media inquiries, please contact DiRx media relations at [email protected].
View source version on businesswire.com: https://www.businesswire.com/news/home/20211228005157/en/ Simone Grapini-Goodman, Chief Marketing Officer Email: [email protected] https://www.dirxhealth.com/press
"
|
3,964 | 2,021 |
"Pantum Expands Product Lineup and Global Presence to Ensure Continued Business Success | VentureBeat"
|
"https://venturebeat.com/business/pantum-expands-product-lineup-and-global-presence-to-ensure-continued-business-success"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release Pantum Expands Product Lineup and Global Presence to Ensure Continued Business Success Share on Facebook Share on X Share on LinkedIn ZHUHAI, China–(BUSINESS WIRE)–December 29, 2021– Pantum, a brand that develops, manufactures, and sells laser printers and toner cartridges for users around the world, is celebrating its 11 th anniversary this December, marking more than a decade of unwavering product optimization and innovation. Over the past 11 years, the company has built up a comprehensive product portfolio of single-function, multi-function, monochromatic, color, A4 and A3 printers. During this time, it has also expanded its sales network to cover 80 countries and regions around the world, with the aim of providing global users with exceptional, efficient products and services.
This year, Pantum launched the all-new Elite Series, and with A4 printing speeds of up to 40 pages per minute (PPM), it is the company’s fastest model yet. The Elite Series, which includes the BP5100 Series (BP5100DN/BP5100DW) and the BM5100 Series (BM5100ADN/BM5100ADW, BM5100FDN/BM5100FDW), can substantially assist in resolving daily challenges such as slow printing speeds and time-consuming waiting. In addition to the 250-page standard tray and the 60-page multi-function tray, two additional 550-page large-capacity trays increase the maximum feed capacity to 1,410 pages. It is targeted at the needs of a wide range of commercial customers, from SMEs and governments to large institutions, with convenient features such as one-step driver installation and automatic duplex printing. One-step driver installation can intelligently identify printer connection methods such as USB, network, or Wi-Fi. Automatic duplex printing allows users to cut their paper usage and the associated expenditure by 50%.
As the pandemic persisted, the company also promoted the more “speedy, stable, smart, and simple” 4S Efficiency series to provide economical printing solutions. Pantum is constantly making modifications to improve the user experience for SMB users by observing and researching workplace environments and habits. Focusing on ease of daily use and long-term savings, Pantum has identified and improved the printer features that benefit SMBs the most. The printing speed and paper input are optimized for SMB users that print small to medium volumes of documents and do not need to refill the input paper tray frequently. Also aimed at remote work, it is the perfect pandemic companion thanks to features such as mobile printing.
In 2021, Pantum committed to continuously improving its product line. Pantum’s Elite Series not only filled out the higher-end sector of the company’s offering, but also gave customers more options to suit their various needs. As the world looks to 2022, Pantum will continue to channel its efforts into optimizing its printer products and services, so as to provide users around the world with more diverse product choices and a convenient user experience.
For more information, please visit our website , Facebook , Instagram , and YouTube.
For any media inquiries, please contact: [email protected].
View source version on businesswire.com: https://www.businesswire.com/news/home/20211228005231/en/ Jack Hao [email protected]
"
|
3,965 | 2,013 |
"Babson Graduate Pooja Ika Launches eternalHealth, the First New Medicare Advantage Health Plan in Massachusetts Since 2013 | VentureBeat"
|
"https://venturebeat.com/business/babson-graduate-pooja-ika-launches-eternalhealth-the-first-new-medicare-advantage-health-plan-in-massachusetts-since-2013"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release Babson Graduate Pooja Ika Launches eternalHealth, the First New Medicare Advantage Health Plan in Massachusetts Since 2013 Share on Facebook Share on X Share on LinkedIn Women Owned. Women Run. Women Built. Leveraging state-of-the-art technology that embraces AI and ML, this comprehensive and accessible Medicare advantage plan covers all the bases.
BOSTON–(BUSINESS WIRE)–December 30, 2021– eternalHealth, the first new health plan to be approved in Massachusetts since 2013, has officially announced its launch, delivering high-quality and affordable Medicare Advantage plans.
Headquartered in Boston’s Back Bay, the company was founded by 2019 Babson College graduate Pooja Ika, who serves as its CEO. Since then, eternalHealth has already grown to a team of 20 and has raised a seed round of $10 million from successful healthcare and tech entrepreneurs. It also attracted the attention of Red Sox legend David Ortiz, popularly known as Big Papi. Ortiz has decided to partner with the organization as its Brand Ambassador, stating that he truly believes in eternalHealth’s mission.
“Eighty percent of healthcare decisions are made by women for their families. However, women remain underrepresented in the healthcare industry,” said Ms. Ika, who has been passionate about healthcare since childhood. “Having women at the forefront of our company increases engagement, improves outcomes, and enables us to make more comprehensive decisions around healthcare for the entire family.” Ms. Ika said that eternalHealth is committed to being a new kind of plan that is focused on establishing lasting relationships with its members.
“At our core, we believe in operating with trust, transparency, and integrity with all of our partners. This includes our members, health systems, doctors, and all other healthcare delivery partners,” added Ms. Ika. “We went through a rightful and rigorous vetting process by state officials and now, we are excited to say we’re the first new health plan launched in Massachusetts since 2013.” Through its technology-driven, innovative platform, eternalHealth substantially reduces administrative and operating costs across its entire enterprise. The cost savings allow more dollars to be allocated toward actual medical care, while also passing savings down to members through robust yet affordable products. By the time eternalHealth acquires approximately 10,000 members, it aims to manage its SG&A at 8%, which has been a challenge for many start-up Medicare Advantage health plans across the country.
eternalHealth understands the importance of having a member-centric platform. Alongside friendly and helpful customer service representatives, members have access to an easy-to-use member portal and app that empowers them to take their care into their own hands. Exemplified by its mission, “Your Forever Partner in Healthcare,” eternalHealth establishes unique relationships with all of its members. The company’s commitment and investment in preventive and chronic care management delineate its proactive, not reactive, approach to healthcare. This is supported by members who want to use the consumer-centric tools to better manage their care and wellness.
About eternalHealth Headquartered in Boston, eternalHealth provides high-quality care with low out-of-pocket costs to the residents of Massachusetts, while prioritizing preventive care and transparency. Founded, owned, and built by women, eternalHealth is a Medicare Advantage health plan that offers HMO and PPO products. For more information about our plans and services, please visit our website at www.eternalHealth.com.
View source version on businesswire.com: https://www.businesswire.com/news/home/20211230005296/en/ Emma Griffith The Mishra Group [email protected] 781-373-3220, ext. 210
"
|
3,966 | 2,021 |
"13 reasons CIOs worry about citizen developers building enterprise apps | VentureBeat"
|
"https://venturebeat.com/business/13-reasons-cios-worry-about-citizen-developers-building-enterprise-apps"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest 13 reasons CIOs worry about citizen developers building enterprise apps Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Businesses need to operate faster and more efficiently to survive. They need more digital capabilities—now. But most enterprise IT organizations face significant supply constraints. There are simply too many business demands and too few skilled developers to deliver new solutions. The number of requirements IT departments receive far exceeds their capacity to fulfill them. The backlog of change requests often numbers in the hundreds or thousands and represents months or years of labor. Long delays frustrate business leaders and cause them to seek alternative solutions for digital transformation projects.
One remedy for this bottleneck that has gotten a lot of attention recently is shifting application development labor from IT to business users. These so-called “ citizen developers ” create applications for themselves or others, using tools that are not actively forbidden by IT or business units. While it might sound like a great idea, remember the problems brought about by shadow IT , when non-IT workers brought devices, software, and services into their organizations outside the ownership or control of IT. Shadow IT wreaked havoc in organizations when workers installed MS Access on their desktops and created their own databases.
We can expect to see similar problems as a result of the current citizen developer movement.
Two popular technologies citizen developers use to build new apps are robotic process automation (RPA) and low-code application platforms (LCAPs). RPA helps automate tasks, typically using UI-based record-and-playback technology, eliminating the need to integrate systems in a workflow. The user interface is the integration layer, so users can bypass system connectivity that requires IT. LCAPs enable business technologists to build apps outside of IT controls. Both tools enable citizen developers to build new apps, or to hire third-party firms, and so avoid IT delivery backlogs and delays.
Democratizing technology and enabling non-IT resources to build apps sounds excellent, but this can cause downstream problems for the CIO and enterprise IT. Distributing this work to less skilled people makes more work down the road, segregates enterprise data, and introduces more risk to the business because “citizen developers” are not developers.
As long as your citizen developers are not interacting with IT systems or producing data that requires enterprise security and management, your CIO has nothing to worry about. But if that changes, things get complicated very quickly.
Here are 13 reasons a CIO doesn’t want citizens developing their own enterprise apps, ordered from the least important to the most important.
13. Apprenticeship is lost
Brand new developers joining IT don’t start by creating mission-critical apps without oversight. Instead, they are mentored by senior developers who have both formal and informal education about what works and what fails in their enterprise. With a citizen development team, this guidance is lost, and the risk of costly development errors is high.
12. Deploying and managing platforms is no different
As soon as the app in question is accessing mission-critical or sensitive data, IT must extend its change management processes to that platform. That means dev environments, test environments, integration environments, performance test environments, and others. We hold IT accountable for system and data integrity; thus, these steps are necessary. Your citizen developers will essentially build apps under the same processes that IT follows.
These apps are thus subject to the same development delays as your IT-built apps. Most delays are due to environment and test data availability and management. If this development aspect is the same, citizen development won’t be any faster than traditional IT development.
11. Separation of duties
In software development, there is a firm separation of duties. Strict governance doesn’t allow developers to perform their own quality assurance, so errors are caught before production (hopefully!). After a few unexpected “sev 1” issues, the citizen development process will be forced to mirror that of existing IT development practices to ensure requirements are properly captured, code is tested by independent quality assurance people, and changes are deployed cautiously.
10. Economics
Building RPA apps to automate repetitive processes may seem like a cost saver. However, most of the people building these apps for the business are with third-party service firms. For instance, companies spend four dollars on services for every dollar spent on RPA software licenses. Spending so much on services to create or edit automations increases the total cost of ownership and may not have been accounted for at the beginning of the project. And since IT also leverages third parties for much of its software development, there’s again no good argument here for bypassing IT in the first place.
9. Security posture
Citizen developers are everyday employees who introduce security risks to the business. They often employ poor security practices like reusing passwords, leaking data, and not keeping systems up to date. Therefore, companies can expect to spend billions of dollars on security software like firewall protection, antivirus, and anti-phishing software to protect the organization and lower the risk of inadequate security practices and hygiene from “citizens.” The Infosec team’s governance of IT software projects must extend to these projects no differently.
8. Control and governance
IT governance combines rules, regulations, and policies that define, control, and ensure effective operations of an IT department. Democratizing technology and allowing non-IT employees to build applications can create data and processes that weaken governance and centralized ROI reporting. This is especially true if data created inside a citizen-built app isn’t available for enterprise reports and dashboards. The absence of proper governance of citizen development projects either limits their scope dramatically or represents dangerous activities that must be brought under the same control structure as other IT initiatives.
7. Citizens don’t want to do it
So-called “citizens” aren’t necessarily excited about being given such “power” to develop apps. It’s not a matter of tools and technology but whether or not they were hired to perform such tasks. There is always a fraction of non-IT people interested in app development; those people typically make their way into IT roles. Those who are not interested want to use technology, not create it.
6. Task orientation – the opposite of the big picture
Typically, citizen developers only partially automate the steps they take in a process, rather than the end-to-end business process outcome.
Without a big-picture view, we fall victim to the theory of constraints and end up with suboptimizations that may or may not produce an actual ROI on the desired outcome.
5. Makes transformation harder
This task orientation of low-code platforms will often make business transformation harder. These platforms expose the business logic they embody via a UI. They are built for people to launch and click around. So building automation end-to-end, i.e., incorporating that logic in a larger context, becomes an even more challenging proposition than before the app was created.
Hardcoding the way tasks are performed today is often not getting you closer to a transformed outcome. How do we take six steps down to 2, or even 1? That’s not the goal of most low-code platforms, and it’s not a goal that citizen developers have in scope.
4. Production outages are difficult to triage
Enterprise applications with various people and system integrations get pretty complex. Understanding issues and resolving them often takes experts representing the many systems involved. Folks in IT know all too well about the 50-person conference call to triage a high severity issue. IT must run production support and have significant involvement if a system is to remain up. Otherwise, downtime could ruin the value of the whole low-code initiative.
3. Most low-code tools oversell capabilities
Many LCAPs state that developing apps using their platforms is easy and fits the citizen developer model, but “low code” doesn’t mean “no code.” When it comes to integrating with other systems, they follow what we call the “paste your code here” model. One LCAP developer stated on Gartner Peer Insights, “Processes that require business logic beyond what is built and available off the shelf require professional developers. I personally have worked to develop over 50 applications and would not have been able to develop a single one without the support of professional developers.” It takes an average of 101 days of training, mentoring, or upskilling to get citizen developers over the skills gap. Just go to your favorite job posting board and look at the requirements of low/no-code platform jobs. You’ll see they require five years of Java and three years of SQL experience! What’s the difference between those postings and typical IT developer postings?
2. Businesses already have too many apps
This one really gets me going. Here we are as an industry trying very hard to make new web app building faster and faster. But what business leader ever said, “What my team needs are more apps to deal with!”? Businesses are already overwhelmed with the burgeoning list of apps in the workplace.
An enterprise uses 397 apps on average.
These apps have separate user interfaces and terminology, functional features, license costs, and/or a development team with a backlog of change and support requests. The average employee trying to manage processes through all of these apps switches between 35 job-critical applications more than 1,100 times every day. More apps increase costs and frustrate employees.
1. A productive “citizen developer” is a “developer”
Notice how many of the concerns above essentially resolve to “citizen developers have to do the same thing IT already does.” If they are doing all the same things a developer in IT has to do, they too are developers. By the time you get the citizen developer productive and safely contributing, you might as well drop the word “citizen.”
John Michelsen is CEO of Krista Software.
He has 28 patents awarded or in process in database, distributed computing, virtual/cloud management, multi-channel web application portals, Service Virtualization (LISA), and mobile security.
"
|
3,967 | 2,021 |
"How Northwestern’s Catalyst Lab scales healthy behavior program with Couchbase | VentureBeat"
|
"https://venturebeat.com/apps/northwesterns-catalyst-lab-scales-healthy-behavior-program-with-couchbase"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How Northwestern’s Catalyst Lab scales healthy behavior program with Couchbase Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
There’s no shortage of digital health apps on the market — more than 350,000, in fact, according to a recent study from the IQVIA Institute for Human Data Science.
But for all the breadth, there’s little depth. “Most apps are similar, no matter the behavior. There’s a graphical depiction of a user’s progress relative to their goal,” said Angela Fidler Pfammatter, PhD, an assistant professor at Northwestern University’s Feinberg School of Medicine.
Pfammatter focuses on the field of preventive medicine, which aims to help people build behaviors that improve physical health and mental well-being.
She and her colleague Bonnie Spring, Ph.D., direct the Catalyst Lab at Northwestern, which for several years has run a healthy lifestyle research study known as Evo.
Participants join the 12-month study with the hopes of improving their dietary quality, physical activity, stress, or sleep.
The unique needs of this program have led the Catalyst Lab to focus on a home-grown application for participants, dating back to the days of the Palm Pilot. For the last several years, the lab has partnered with Couchbase to host the application, sync data on the back end, and scale the app to support multiple studies at the same time. Earlier this year, Couchbase recognized the Catalyst Lab as one of seven winners of its Community Customer and Partner Awards.
Building health and confidence
The lab’s trials recruit between 400 and 600 participants. As trials are funded by grants, typically from the National Institutes of Health (NIH), there isn’t much budget for recruitment, Pfammatter said. Prior to COVID-19, that budget went to ads on Chicago Transit Authority trains and buses. During the pandemic, the strategy shifted to Facebook ads and the NIH site ResearchMatch, which had the unintended advantage of giving the lab a more national presence, she added.
The lab looks to recruit individuals who recognize that they have poor health behaviors and want to improve them. This is a different model than the 47% of digital health apps that focus on managing chronic conditions, according to IQVIA’s research.
That’s one key reason the lab hasn’t opted for an off-the-shelf app for the Evo program, Pfammatter said. They’re hoping to help people avoid a diagnosis of a condition such as Type 2 diabetes or high blood pressure, not manage it once they have it.
The goal of the lab’s work is also different, as the emphasis on behavior change and cognitive behavioral therapy are more focused than the nudges and notifications that are common in commercial apps.
“We’re about putting power into people’s hands. We want them to build confidence and self-efficacy, so they have the skills to take with them when the research study is over,” Pfammatter said. “We want to give them the tools to figure things out for themselves.”
Data provides accountability – but a storage challenge
For the Catalyst Lab, these tools help participants objectively measure things such as physical activity, weight, and sleep. The app also provides study booklets and lessons. Virtual and in-person appointments with health coaches are available as needed.
In the early days of Evo, the program emphasized self-reported outcomes, with participants manually entering data about physical activity, food, or weight onto a sheet of paper or rudimentary PDA app. Today, the smartphone app connects to peripherals such as fitness trackers, blood pressure cuffs, and scales to track activity and measure vital signs, and it uses drop-down menus to let participants find the food they’ve eaten.
“Seeing the data in the app provides accountability,” Pfammatter said. “There’s a full loop of data flow. The patient knows that the coach knows, and the coaches know they can rely on the data instead of asking the patient to recall everything, which can be biased.” Several years ago, though, the app was reaching its limit. The lab needed to create a new version of the app for each study. This posed a problem for three reasons: It limited the lab to two simultaneous research trials, it ate into the budget for each grant-funded program, and it led to an unmanageable amount of physical data storage.
“Data was everywhere – on 3.5-inch disks, on ZIP drives, and still in some binders,” said J.C. Subida, the lab’s senior software developer. “We realized enough was enough. We needed something we could reuse.”
Potential for individualized interventions
Working with Couchbase gives the Catalyst Lab a platform on which to base the app, Subida said. This has alleviated the app’s data management and synchronization issues, allowing for better connections to participants’ smartphones as well as the sensors they use to collect data.
Feeding everything into a single cloud-based database has also improved the app’s scalability. The lab can run as many as seven studies at the same time, using the same app (with a tweaked interface) for each study. It’s also easier to onboard additional participants – a valuable plus if the lab’s work continues to attract participants from around the nation.
Scalability should also allow Evo to add more “novel” sensors, including those that will help participants better track what they eat, Pfammatter said.
“The more sensors we can support, the more information we have, and the more we can create interventions that are scalable, affordable, and efficient but also effective,” she said. “The more we streamline our systems and have a better handle on data management, the more we can discover those novel markers of what is going to help an individual.”
"
|
3,968 | 2,021 |
"What will applied AI look like in 2022? | VentureBeat"
|
"https://venturebeat.com/ai/what-will-applied-ai-look-like-in-2022"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What will applied AI look like in 2022? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
AI adoption has skyrocketed throughout the last 18 months. Besides Joe McKendrick, who wrote the foundational piece on HBR, professionals who work on AI would readily attest to this statement. Google search seems to be in on this not-so-secret too: When prompted with “AI adoption,” its auto-complete spurts out “skyrocketed over the last 18 months.”
Both anecdotal evidence and the surveys we are aware of seem to point in this same direction. Case in point: The AI Adoption in the Enterprise 2021 survey by O’Reilly, conducted in early 2021, had three times more responses than in 2020, and company culture is no longer the most significant barrier to adoption.
In other words, more people are working with AI, it’s now being taken seriously, and maturity is increasing. That’s all good news. It means AI is no longer a game that researchers play — it’s becoming applied, taking center stage for the likes of Microsoft and Amazon and beyond.
The following examines the pillars we expect applied AI to build on in 2022.
AI chips
Typically, when discussing AI, people think about models and data — and for good reason. Those are the parts most practitioners feel they can exert some control over, while hardware remains mostly unseen and its capabilities seen as being fixed. But is that the case? So-called AI chips, a new generation of hardware designed to optimally run AI-related workloads, are seeing explosive growth and innovation. Cloud mainstays such as Google and Amazon are building new AI chips for their datacenters — TPU and Trainium, respectively.
Nvidia has been dominating this market and built an empire around its hardware and software ecosystem.
Intel is looking to catch up, be it via acquisitions or its own R&D. Arm’s status remains somewhat unclear, with the announced acquisition by Nvidia facing regulatory scrutiny.
In addition, we have a slew of new players at different stages in their journey to adoption, some of which — like Graphcore and SambaNova — have already reached unicorn status.
What this means for applied AI is that choosing where to run AI workloads no longer means just deciding between Intel CPUs and Nvidia GPUs. There are now many parameters to consider, and that development matters not just for machine learning engineers, but also for AI practitioners and users. AI workloads running more economically and effectively means there will be more resources to utilize elsewhere with a faster time to market.
MLOps and data centricity
Selecting what hardware to run AI workloads on can be thought of as part of the end-to-end process of AI model development and deployment, called MLOps — the art and science of bringing machine learning to production. To draw the connection with AI chips, standards and projects such as ONNX and Apache TVM can help bridge the gap and alleviate the tedious process of machine learning model deployment on various targets.
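As a minimal sketch of that bridging role, the snippet below exports a small PyTorch model to the portable ONNX format; the model itself and the file names are illustrative.

import torch
import torch.nn as nn

# A toy model standing in for whatever was trained.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

# An example input fixes the shapes recorded in the exported graph.
dummy_input = torch.randn(1, 16)
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["features"],
    output_names=["logits"],
)
# The resulting model.onnx can then be served with ONNX Runtime or compiled
# for a specific hardware target with a framework such as Apache TVM.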
In 2021, with lessons learned from operationalizing AI, the emphasis is now shifting from shiny new models to perhaps more mundane, but practical, aspects such as data quality and data pipeline management, all of which are important parts of MLOps. Like any discipline, MLOps sees many products in the market, each focusing on different facets.
Some products are more focused on data, others on data pipelines, and some cover both. Some products monitor and observe things such as inputs and outputs for models, drift, loss, precision, and recall accuracy for data. Others do similar, yet different things around data pipelines.
Data-centric products cater to the needs of data scientists and data science leads, and maybe also machine learning engineers and data analysts. Data pipeline-centric products are more oriented towards DataOps engineers.
In 2021, people tried to give names to various phenomena pertaining to MLOps, slice and dice the MLOps domain, apply data version control and continuous machine learning, and execute the equivalent of test-driven development for data, among other things.
What we see as the most profound shift, however, is the emphasis on so-called data-centric AI.
Prominent AI thought leaders and practitioners such as Andrew Ng and Chris Ré have discussed this notion, which is surprisingly simple at its core.
We have now reached a point where machine learning models are sufficiently developed and work well in practice. So much so, in fact, that there is not much point in focusing efforts on developing new models from scratch or fine-tuning to perfection. What AI practitioners should be doing instead, according to the data-centric view, is focusing on their data: Cleaning, refining, validating, and enriching data can go a long way towards improving AI project outcomes.
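A minimal sketch of that data-centric loop, using pandas on a made-up dataset (the column names, validation rules, and fixes are all hypothetical):

import pandas as pd

# Tiny made-up training set with typical quality problems.
df = pd.DataFrame({
    "age": [34, -1, 52, None, 29],                       # an impossible value and a gap
    "label": ["churn", "churn", "stay", "stay", "sty"],  # "sty" is a label typo
})

# Validate: flag out-of-range or missing ages before they reach a model.
invalid_age = ~df["age"].between(0, 120)  # NaN comparisons also come back False
print("Rows needing attention:", df.index[invalid_age].tolist())

# Clean and enrich: fix the label typo, null out impossible ages, impute gaps.
df["label"] = df["label"].replace({"sty": "stay"})
df.loc[invalid_age, "age"] = None
df["age"] = df["age"].fillna(df["age"].median())
print(df)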
Large language models, multimodal models, and hybrid AI
Large language models (LLMs) may not be the first thing that comes to mind when discussing applied AI. However, people in the know believe that LLMs can internalize basic forms of language, whether it’s biology, chemistry, or human language, and we’re about to see unusual applications of LLMs grow.
To back those claims, it’s worth mentioning that we are already seeing an ecosystem of sorts being built around LLMs, mostly the GPT-3 API made commercially available by OpenAI in conjunction with Microsoft. This ecosystem consists mostly of companies offering copywriting services such as marketing copy, email, and LinkedIn messages. They may not have set the market on fire yet, but it’s only the beginning.
We think LLMs will see increased adoption and lead to innovative products in 2022 in a number of ways: through more options for customization of LLMs like GPT-3; through more options for building LLMs, such as Nvidia’s NeMo Megatron; and through LLMs-as-a-service offerings, such as the one from SambaNova.
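For context, the copywriting services mentioned above are typically thin layers over an API call of roughly this shape, shown here with OpenAI's older Completion endpoint as it existed around the time of writing; the engine name and prompt are illustrative.

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    engine="davinci",  # illustrative engine choice
    prompt="Write a one-sentence LinkedIn message introducing our analytics product:",
    max_tokens=60,
    temperature=0.7,
)
print(response.choices[0].text.strip())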
As VentureBeat’s own Kyle Wiggers noted in a recent piece, multimodal models are fast becoming a reality.
This year, OpenAI released DALL-E and CLIP, two multimodal models that the research lab claims are “a step toward systems with [a] deeper understanding of the world.” If LLMs are anything to go by, we can reasonably expect to see commercial applications of multimodal models in 2022.
Another important direction is that of hybrid AI, which is about infusing knowledge in machine learning. Leaders such as Intel’s Gadi Singer, LinkedIn’s Mike Dillinger, and Hybrid Intelligence Centre’s Frank van Harmelen all point toward the importance of knowledge organization in the form of knowledge graphs for the future of AI. Whether hybrid AI produces applied AI applications in 2022 remains to be seen.
Applied AI in health care and manufacturing
Let’s wrap up with something more grounded: promising domains for applied AI in 2022. O’Reilly’s AI Adoption in the Enterprise 2021 survey cites technology and financial services as the two domains leading AI adoption. That’s hardly surprising, given the willingness of the technology industry to “eat its own dog food” and the willingness of the financial industry to gain every inch of competitive advantage possible by using its deep pockets.
But what happens beyond those two industries? O’Reilly’s survey cites health care as the third domain in AI adoption, and this is consistent with our own experience. As State of AI authors Nathan Benaich and Ian Hogarth noted in 2020, biology and health care are seeing their AI moment.
This wave of adoption was already in motion, and the advent of COVID-19 accelerated it further.
“Incumbent pharma is very much driven by having a hypothesis a priori, saying, for example, ‘I think this gene is responsible for this disease, let’s go prosecute it and figure out if that’s true.’ Then there are the more software-driven folks who are in this new age of pharma. They mostly look at large-scale experiments, and they are asking many questions at the same time. In an unbiased way, they let the data draw the map of what they should focus on,” Benaich said to summarize the AI-driven approach.
The only way to validate whether the new age pharma approach works is if they can generate drug candidates that actually prove useful in the clinic, and ultimately get those drugs approved, Benaich added. Out of those “new age pharma” companies, Recursion Pharmaceuticals IPO’d in April 2021, and Exscientia filed to IPO in September 2021.
They both have assets generated through their machine learning-based approach that are actually being used clinically.
As for manufacturing, there are a few reasons why we choose to highlight it among the many domains trailing in AI adoption. First, it suffers a labor shortage of the kind AI can help alleviate. As many as 2.1 million manufacturing jobs could go unfilled through 2030, according to a study published by Deloitte and The Manufacturing Institute. AI solutions that perform tasks such as automated physical product inspections fall into that category.
Second, the nature of industrial applications requires combining swathes of data with the physical world in very precise ways. This, some people have noted, lends itself well to hybrid AI approaches.
And last but not least, hard data. According to a 2021 survey from The Manufacturer , 65% of leaders in the manufacturing sector are working to pilot AI. Implementation in warehouses alone is expected to hit a 57.2% compound annual growth rate over the next five years.
"
|
3,969 | 2,021 |
"How a startup uses AI to put worker safety first | VentureBeat"
|
"https://venturebeat.com/ai/this-startup-uses-ai-to-put-worker-safety-first"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How a startup uses AI to put worker safety first Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Unpredictable spikes and drops in demand combined with chronic supply chain and labor shortages are accelerating the pace of digital transformation in manufacturing, starting with worker safety. Forty-eight percent of manufacturers say their progress on digital transformation initiatives has accelerated so much that it’s years ahead of what was originally anticipated, according to a KPMG study.
Keeping workers safe and connected is the primary goal of most digital transformation and hiring plans, with on-site distancing & workplace safety listed as the two highest priorities.
Everguard.ai, a startup based in Irvine, California, combines AI, computer vision, and sensor fusion to reduce the risk of injuries and accidents by preventing them before they happen. The company's SENTRI360 platform has proven effective in preventing workplace injuries and operational downtime at several steel manufacturing companies, including Zekelman Industries and SeAH Besteel.
Worker safety is the future of manufacturing From redesigned shop floors to social distancing guidelines and doubled investment in training and development, worker safety now dominates manufacturing — even more so due to the pandemic. Frontline workers saved many manufacturing companies from going out of business by applying their expertise and insights in real time, enabling entire plants to pivot and produce new products at record speed. Continued trade tensions, tariffs, and supplier shortages put more pressure on manufacturers to reshore production and have worker safety programs in place now. As manufacturing returns to the U.S., AI and computer vision are stepping up to improve worker safety.
Frontline workers reconfigured entire production lines and machines, and also learned new work instructions to produce much-needed Personal Protective Equipment (PPE), medical supplies, devices, and products — in some cases overnight. What began as an emergency response to the world’s PPE and medical products’ shortage quickly turned into a validating event that proved that protecting and connecting workers is the future of manufacturing.
Gartner says that manufacturers who prioritize worker safety, training, and development create a strong foundation for the future of a connected workforce. Healthcare-optimized devices, wristbands, and lone worker protection are on the slope of enlightenment, according to Gartner’s research.
Above: Worker collaboration, safety, and security technologies are delivering results across distribution and manufacturing enterprises today, driven by the combination of advances in supervised and unsupervised machine learning algorithms and computer vision.
Improving worker safety with AI and computer vision Computer vision has progressed from an experimental technology to one that can interpret patterns in images and classify them using machine learning algorithms at scale. Advances in deep learning and neural networks are expanding enterprise uses of computer vision, improving worker safety in the process. Computer vision techniques for reducing worker injuries and improving in-plant safety are based on unsupervised machine learning algorithms that excel at identifying patterns and anomalies in images. Computer vision platforms, including Everguard's SENTRI360, rely on convolutional neural networks to categorize images and industrial workflows at scale.
The quality of the datasets used to train supervised and unsupervised machine learning algorithms determines their accuracy. Convolutional neural networks also require large amounts of data, fine-tuned through iterative cycles of model training, to improve their precision in predicting events. Each iteration of a machine learning model extracts specific attributes of an image and, over time, learns to classify those attributes. Everguard uses real-time video feeds from production plants combined with Industrial Internet of Things (IIoT) sensor data to create the data that convolutional neural networks need to improve their accuracy in predicting potential accidents and safety incidents. The greater the volume and quality of data provided to machine learning models, the higher the predictive accuracy and the more effective prescriptive analytics will become.
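To make the convolutional approach concrete, here is a minimal PyTorch sketch of the kind of image classifier described above. The architecture, the two example classes, and the input dimensions are illustrative assumptions; Everguard has not published its production models.

# Minimal sketch of a convolutional classifier for safety-related frames.
# The layer sizes and the "safe"/"unsafe" labels are illustrative
# assumptions, not Everguard's actual model.
import torch
import torch.nn as nn

class SafetyCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # RGB frame in
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x):
        x = self.features(x)  # (N, 32, 56, 56) for a 224x224 input
        return self.classifier(x.flatten(1))

model = SafetyCNN()
frame = torch.randn(1, 3, 224, 224)  # one 224x224 RGB video frame
logits = model(frame)  # raw scores for "safe" vs. "unsafe"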
The SENTRI360 platform is differentiated from pure computer vision-based systems because it relies on a proprietary sensor fusion approach. Sensor fusion combines multiple sensor streams at the edge to contextualize the worker's harsh environment more completely than any single-sensor approach could.
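SENTRI360's fusion logic is proprietary and unpublished, but the general pattern of fusing signals at the edge can be sketched as combining independent readings into one risk score. Every name, weight, and threshold below is a hypothetical placeholder:

# Toy sensor-fusion sketch: blend a vision model's detection confidence
# with an RTLS proximity reading into a single risk score. All weights
# and thresholds are hypothetical placeholders, not SENTRI360 parameters.
def fuse_risk(vision_confidence: float, worker_to_vehicle_m: float) -> float:
    proximity_risk = max(0.0, 1.0 - worker_to_vehicle_m / 10.0)  # rises inside 10 m
    return 0.6 * vision_confidence + 0.4 * proximity_risk

# A worker 2.5 m from a moving vehicle, flagged by vision at 80% confidence:
if fuse_risk(0.8, 2.5) > 0.7:  # evaluates to 0.78, so the alert fires
    print("send haptic alert to the worker's wearable")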
Above: AI and computer vision-based platforms are evolving from providing baseline descriptive analytics that are often lagging indicators of safety events to more predictive and prescriptive analytics that prove successful in averting accidents and injury. The goal of combining AI and computer vision is to control and eliminate workplace injury risks.
Everguard’s SENTRI360 platform relies on these techniques to generate leading indicators and produce real-time prescriptive metrics and interventions to protect workers’ safety. Their goal is to provide real-time, predictive analytics-based alerts to reduce worker injury risks and improve shop floor productivity.
Advanced analytics techniques have been used for years to provide descriptive, after-the-fact metrics on worker safety. What Everguard.ai says makes its approach unique is providing actionable alerts before a potential event occurs, combining AI and sensor fusion to provide a more practical approach to averting accidents and injury. Like many companies whose core technology is AI-based predictive analytics and outcomes, Everguard.ai relies on synthetic data and simulations of potential accidents and injuries to fine-tune its predictive and prescriptive analytics.
Manufacturers are redesigning shop floors, re-routing workflows, and modifying work cells to ensure worker safety. Protecting their workers from COVID-19 and assuring every plant is safe is the highest priority they’re pursuing today. Computer vision identifies which workers have PPE equipment in compliance with OSHA guidelines. Real-time locating systems (RTLS) identify a worker, provided that they’ve opted to participate. Everguard.ai’s Sensor Fusion technology merges computer vision and RTLS to provide a real-time safety assessment of a given plant and provides alerts back to employees via wearable devices. Audible, haptic, LED, and text-based messaging keep workers informed of potential risks or dangerous conditions in real time. The wearable was designed by Everguard and can also sense biometrics, such as dehydration.
Above: Sensor fusion combines computer vision and Real-Time Locating Systems (RTLS) to produce alerts of potentially dangerous conditions for workers. The above picture is from the SeAH Besteel steel mill in South Korea. Everguard.ai's computer vision models have proven effective in detecting different human postures and looking for unsafe activity, including repetitive motion, unsafe load pick-up posture, improper hand-on-load handling, and worker orientation relative to heavy equipment (whether a worker is facing an oncoming crane or vehicle load). Prioritizing workers' privacy Everguard's CEO, Sandeep Pandya, shared details about workers' privacy, given the massive amount of data the company captures and analyzes at client sites. "The most important thing is to give shop floor workers and their leaders [the] complete visibility into how the data collected is used. Our implementation teams work with them and provide complete access to our systems, how data is anonymized for specific tasks, and how we are careful to protect each worker's identity," Sandeep said.
“All effective change management starts on the shop floor. Our goal is to be transparent with the workers there because their choice to own the system will mean the difference in it succeeding or not,” he said.
Sandeep told VentureBeat that "workers can choose to wear the device that can alert them of a safety issue to any data being captured or not. We advise clients to have the systems be 100% opt-in to improve adoption rates and protect workers' privacy." Note that some workplace safety platforms, including Everguard, don't disclose how they trained their computer vision algorithms or whether they retain any recordings of workers. In the absence of this information, how — or whether — these companies ensure data remains anonymous is an open question, as is whether they require their customers to alert employees that their movements are being analyzed.
Forcing someone to wear a sensor for biometric data is a sure way to lose valuable employees. Production workers in its clients' plants are so valuable, Sandeep said, that "our clients are doing all they can to retain them. Talented manufacturing workers are in high demand and nearly impossible to replace today." Instead of using the data to rank employees' productivity or threaten employees to produce more or be let go, Everguard says its system's primary function is accident prevention, and the data is used for coaching workers so they stay safe.
"
|
3,970 | 2,021 |
"Researchers are working toward more transparent language models | VentureBeat"
|
"https://venturebeat.com/ai/researchers-are-working-toward-more-transparent-language-models"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Researchers are working toward more transparent language models Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The most sophisticated AI language models, like OpenAI's GPT-3, can perform tasks from generating code to drafting marketing copy. But many of the underlying mechanisms remain opaque, making these models prone to unpredictable — and sometimes toxic — behavior. As recent research has shown, even careful calibration can't always prevent language models from making sexist associations or endorsing conspiracies.
Newly proposed explainability techniques promise to make language models more transparent than before. While they aren’t silver bullets, they could be the building blocks for less problematic models — or at the very least models that can explain their reasoning.
Citing sources A language model learns the probability of a word occurring based on sets of example text. Simpler models look at the context of a short sequence of words, whereas larger models work at the level of phrases, sentences, or paragraphs. Most commonly, language models deal with words — sometimes referred to as tokens.
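As a toy illustration of what "learning the probability of a word" means, here is a bigram model estimated from a tiny corpus; real language models condition on far richer context than one preceding token:

# Toy bigram model: estimate next-word probabilities from counts.
from collections import Counter, defaultdict

corpus = "the model reads text . the model writes text .".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word_prob(prev: str, nxt: str) -> float:
    counts = bigrams[prev]
    return counts[nxt] / sum(counts.values()) if counts else 0.0

print(next_word_prob("the", "model"))    # 1.0 in this tiny corpus
print(next_word_prob("model", "reads"))  # 0.5: "reads" and "writes" are equally likely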
Indeed, the largest language models learn to write humanlike text by internalizing billions of examples from the public web. Drawing on sources like ebooks, Wikipedia, and social media platforms like Reddit, they make inferences in near-real-time.
Many studies demonstrate the shortcomings of this training approach. Even GPT-3 struggles with nuanced topics like morality, history, and law; language models writ large have been shown to exhibit prejudices along race, ethnic, religious, and gender lines. Moreover, language models don't understand language the way humans do. Because they typically pick up on only a few key words in a sentence, they can't tell when words in a sentence are jumbled up — even when the new order changes the meaning.
A recent paper coauthored by researchers at Google outlines a potential, partial solution: a framework called Attributable to Identified Sources. It’s designed to evaluate the sources (e.g., Reddit and Wikipedia) from which a language model might pull when, for example, answering a particular question. The researchers say that the framework can be used to assess whether statements from a model were derived from a specific source. With it, users can figure out to which source the model is attributing its statements, showing evidence for its claims.
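The framework itself is an evaluation protocol rather than a library, but the underlying question (is this statement supported by that source?) can be illustrated with a crude token-overlap score. This sketch is our simplification, not the paper's method:

# Toy attribution check: score how well a candidate source supports a
# generated statement via token overlap. A crude stand-in for the human
# judgments the Attributable to Identified Sources framework calls for.
def support_score(statement: str, source_passage: str) -> float:
    s_tokens = set(statement.lower().split())
    p_tokens = set(source_passage.lower().split())
    return len(s_tokens & p_tokens) / len(s_tokens) if s_tokens else 0.0

statement = "the eiffel tower is in paris"
source = "the eiffel tower is a landmark in paris france"
print(f"{support_score(statement, source):.2f}")  # 1.00: every token appears in the source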
“With recent improvements in natural language generation … models for various applications, it has become imperative to have the means to identify and evaluate whether [model] output is only sharing verifiable information about the external world,” the researchers wrote in the paper. “[Our framework] could serve as a common framework for measuring whether model-generated statements are supported by underlying sources.” The coauthors of another study take a different tack to language model explainability. They propose leveraging “prototype” models — Proto-Trex — incorporated into a language model’s architecture that can explain the reasoning process behind the model’s decisions. While the interpretability comes with a trade-off in accuracy, the researchers say the results are “promising” in providing helpful explanations that shed light on language models’ decision-making.
In the absence of a prototype model, researchers at École Polytechnique Fédérale de Lausanne (EPFL) generated "knowledge graph" extracts to compare variations of language models. (A knowledge graph represents a network of objects, events, situations, or concepts and illustrates the relationships between them.) The framework can identify the strengths of each model, the researchers claim, allowing users to compare models, diagnose their strengths and weaknesses, and identify new datasets to improve their performance.
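At its simplest, a knowledge graph reduces to a set of (subject, relation, object) triples, which makes comparing two models' extracts straightforward. The triples and the overlap metric below are illustrative assumptions, not the EPFL pipeline:

# Toy knowledge-graph comparison: represent each model's extract as a set
# of (subject, relation, object) triples and measure their agreement.
model_a = {("paris", "capital_of", "france"), ("h2o", "is_a", "molecule")}
model_b = {("paris", "capital_of", "france"), ("berlin", "capital_of", "germany")}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

print(f"agreement: {jaccard(model_a, model_b):.2f}")  # 0.33: one shared triple of three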
“These generated knowledge graphs are a large step towards addressing the research questions: How well does my language model perform in comparison to another (using metrics other than accuracy)? What are the linguistic strengths of my language model? What kind of data should I train my model on to improve it further?” the researchers wrote.
“Our pipeline aims to become a diagnostic benchmark for language models, providing an alternate approach for AI practitioners to identify language model strengths and weaknesses during the model training process itself.” Limitations to interpretability Explainability in large language models is by no means a solved problem. As one study found, there’s an “interpretability illusion” that arises when analyzing a popular language model architecture called bidirectional encoder representations from transformers (BERT). Individual components of the model may incorrectly appear to represent a single, simple concept, when in fact they’re representing something far more complex.
There’s another, more existential pitfall in model explainability: over-trust. A 2018 Microsoft stud y found that transparent models can make it harder for non-experts to detect and correct a model’s mistakes. More recent work suggests that interpretability tools like Google’s Language Interpretability Tool , particularly those that give an overview of a model via data plots and charts, can lead to incorrect assumptions about the dataset and models, even when the output is manipulated to show explanations that make no sense.
It’s what’s known as the automation bias — the propensity for people to favor suggestions from automated decision-making systems. Combating it isn’t easy, but researchers like Georgia Institute of Technology’s Upol Ehsan believe that explanations given by “glassbox” AI systems, if customized to people’s level of expertise, would go a long way.
“The goal of human-centered explainable AI is not just to make the user agree to what the AI is saying. It is also to provoke reflection,” Ehsan said, speaking to MIT Tech Review.
"
|
3,971 | 2,022 |
"Preparation is key to AI success in 2022 | VentureBeat"
|
"https://venturebeat.com/ai/preparation-is-key-to-ai-success-in-2022"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Preparation is key to AI success in 2022 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Artificial intelligence is unlike previous technology innovations in one crucial way: it’s not simply another platform to be deployed, but a fundamental shift in the way data is used. As such, it requires a substantial rethinking as to the way the enterprise collects, processes, and ultimately deploys data to achieve business and operational objectives.
So while it may be tempting to push AI into legacy environments as quickly as possible, a wiser course of action would be to adopt a more careful, thoughtful approach. One thing to keep in mind is that AI is only as good as the data it can access, so shoring up both infrastructure and data management and preparation processes will play a substantial role in the success or failure of future AI-driven initiatives.
Quality and quantity According to Open Data Science, the need to foster vast amounts of high-quality data is paramount for AI to deliver successful outcomes. In order to deliver valuable insights and enable intelligent algorithms to continuously learn, AI must connect with the right data from the start. Not only should organizations develop sources of high-quality data before investing in AI, but they should also reorient their entire cultures so that everyone from data scientists to line-of-business knowledge workers understands the data needs of AI and how results can be influenced by the type and quality of data being fed into the system.
In this way, AI is not merely a technological development but a cultural shift within the organization. By taking on many of the rote, repetitive tasks that tend to slow down processes, AI changes the nature of human labor to encompass more creative, strategic endeavors – ultimately increasing the value of data, systems, and people to the overall business model. In order to achieve this, however, AI should be deployed strategically, not haphazardly.
Before you invest in AI, then, tech consultancy New Line Info recommends a thorough analysis of all processes to see where intelligence can make the biggest impact. Part of this review should include the myriad ways in which AI may require new methods of data reporting and the development of all-new frameworks for effective modeling and forecasting. The goal here is not to produce sporadic gains or one-off initiatives, but to foster a more holistic transformation of data operations and user experiences.
By its very nature, this transformation will be evolutionary, not revolutionary. There is no hard line between today’s enterprise and a futuristic intelligent one, so each organization will have to cut its own path through the woods. On Inside Big Data recently, Provectus solution architect Rinat Gareev identified seven steps to AI adoption, beginning with figuring out exactly what you hope to do with it. AI can be tailored to almost any environment and optimized for any task, so having a way to gauge its success is crucial at the outset.
Chart a course for AI Furthermore, organizations should identify priority use cases and establish development roadmaps for each one based on technical feasibility, ROI, and other factors. Only then should you move on to a general foundation for broad implementation and rapid scale across the organization, not to someday complete this transformation but to perpetually build a more efficient and effective data ecosystem.
However, perhaps the most important thing to keep in mind about AI is that it is not a magic bullet for everything that ails the enterprise. As CIO Dive's Roberto Torres pointed out recently, there is currently a gap between what's possible and what's expected of AI, and this disconnect is hurting implementation. Sometimes, the limitations lie within the AI itself, as people come to think that an algorithm-based intelligence is capable of far greater feats than it can actually accomplish. But problems can also arise within support infrastructure, in the data prep, as mentioned above, or sometimes in simply applying a given AI model to the wrong process.
The fact is that the enterprise has taken only the very first steps on a long journey to a new cultural paradigm, and there will undoubtedly be many missteps, wrong turns, and about-faces along the way. So while it’s important to get your hands dirty with AI sooner rather than later, you also need to pause a moment and figure out what you need to do to prepare for this change, and what you hope to get out of it.
"
|
3,972 | 2,021 |
"How to discover AI code, know-how with CatalyzeX | VentureBeat"
|
"https://venturebeat.com/ai/how-to-discover-ai-code-know-how-with-catalyzex"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How to discover AI code, know-how with CatalyzeX Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
When it comes to building an AI project, whether it’s for speech recognition or some other use case, data scientists and developers tend to spend plenty of time on Google, sifting through existing research that has already been conducted in the same area.
The goal of the effort is to understand which techniques and models have been applied in the space and which of those are good enough to refer to or build on. However, the problem is that with tens of thousands of research articles already on the internet, finding relevant technical material for the project at hand is an extremely tedious task.
CatalyzeX accelerates AI code discovery California-based CatalyzeX solves this challenge with a dedicated search engine to discover AI models and code. The solution, powered by the company's in-house crawlers, aggregators, and classifiers, automatically goes through technical papers on sites such as arXiv as well as code platforms to match and link machine learning models and techniques with various corresponding code implementations.
Here’s how you can use it to build your own AI project.
To begin, simply visit catalyzex.com and enter the project topic in the search bar. This could be anything from object detection to building a recommendation engine or algorithm for disease detection. Users could also use the recommendations suggested at the bottom of the search bar.
After searching, the platform lists most, if not all, available research on the topic in question. For instance, a search for COVID-19 detection shows dozens of papers detailing various techniques, approaches, and frameworks for diagnosing the disease through chest X-rays. Users can go through the abstracts of these research papers right from the results page and then click through to the most suitable one to investigate.
Once a research paper is opened, CatalyzeX aggregates all available information on it, starting from the names of the authors to the actual ML research, the findings, and the figures. Users could read through and understand what exactly the researchers wanted to accomplish as well as all related technical details.
Most importantly, this page also provides the link to the database and model code (right under the title) used for the research by the authors or contributed by the CatalyzeX community. So, if the project has some value, users could delve into the database and code implementation and build on top of it.
Above: An AI research paper with its code and database on CatalyzeX In case the code and the database used for the research have not been made public, the platform also provides an option to contact the authors of the paper. All users have to do is follow the above-mentioned steps and click the “Ask Authors” button to reach out. Furthermore, users can even use the CatalyzeX browser extension to get links to code directly in Google search results. It further simplifies the search process, and is available on both Chrome and Firefox.
"
|
3,973 | 2,022 |
"Data will continue to move to the edge in 2022 | VentureBeat"
|
"https://venturebeat.com/ai/edge-computing-in-2022"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Data will continue to move to the edge in 2022 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
How can software be faster, cheaper, and more resilient? For many developers, the answer in 2021 was to move the computation out of a few big datacenters and into many smaller racks closer to users on the metaphorical edge of the internet, and 2022 promises more of the same.
The move is driven by physics and economics. Even when data travels at the speed of light, the time it takes to send packets halfway around the world to one central location is noticeable to users, whose minds start to wander after just a few milliseconds. The price of data transmission is often surprising, and many CIOs have learned to make sure to include the cost of data exfiltration alongside the price of servers and disk drives.
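A quick back-of-the-envelope check shows why. Light in optical fiber travels at roughly 200,000 km per second, about two-thirds of its speed in a vacuum, and the distances below are approximations:

# Back-of-the-envelope latency estimate; fiber speed and distances are
# rough approximations that ignore routing and processing overhead.
FIBER_SPEED_KM_S = 200_000

def round_trip_ms(distance_km: float) -> float:
    return 2 * distance_km / FIBER_SPEED_KM_S * 1000

print(f"halfway around the world (~20,000 km): {round_trip_ms(20_000):.0f} ms")  # ~200 ms
print(f"nearby edge node (~100 km): {round_trip_ms(100):.0f} ms")                # ~1 ms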
These fundamental advantages are indisputable, but edge computing will continue to be limited by countervailing forces that may, in some cases, be stronger. Datacenter operators can negotiate lower prices for electricity, and that typically means locating right next to the point of generation, such as a few miles from a large hydroelectric dam. Keeping data in multiple locations synchronized can be a challenge, and some algorithms, like machine learning, also depend heavily on working with large, central collections.
Despite these challenges, many architects continue to embrace the opportunity, thanks to the efforts of cloud companies to simplify the process. In May 2021, Amazon, for instance, changed the billing granularity for its Lambda@Edge functions from 50 milliseconds to 1 millisecond, opening up more opportunities. Developers are now paying closer attention to the time a function runs and splitting up work into smaller, simpler units that can take advantage of the lower prices.
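To see what one of these small units looks like, here is a minimal Python handler in the shape AWS uses for CloudFront-triggered Lambda@Edge functions; the header it adds is an arbitrary example, not a required field:

# Minimal sketch of a CloudFront-triggered edge function. The event shape
# follows AWS's Lambda@Edge format; the added header is an arbitrary example.
def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    request["headers"]["x-edge-processed"] = [
        {"key": "X-Edge-Processed", "value": "true"}
    ]
    return request  # short-lived work keeps per-millisecond billing low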
AWS vs. Cloudflare AWS's decision was no doubt driven by competition from Cloudflare, a company with a strong focus on edge computing. In 2021, the company continued to push hard, focusing especially on the high egress fees that some cloud providers charge on data leaving their centers. Cloudflare's R2 storage layer, introduced in September 2021, is pushing prices lower, especially for data that's only accessed occasionally. The service is tightly integrated with Cloudflare Workers, the company's edge functions, opening up more opportunities for both storage and computation in its local nodes.
Cloudflare also announced more opportunities to simplify adoption by adding partnerships with database companies MongoDB and Prisma. Developers can rely upon modern query languages and well-understood database models with their Worker functions, expanding the opportunities to move workloads to the edge.
Other database companies are following the same path. PlanetScale, for example, is managing Vitess databases as a service, simplifying the work of horizontally scaling large datasets that span multiple locations.
A big question for 2022 will be how many workers return to offices. These locations are the original edges, hosting local databases and file storage, often down the hall. If the pandemic recedes and people return to the offices for a substantial amount of the workweek, the office building will again be a popular edge location. Cloud companies continue to push into company datacenters, offering hybrid solutions for on-premises hosting. It’s now possible to use much of the same software infrastructure from the cloud hosts in your local datacenter, saving time and money. Some CIOs continue to feel better about having the servers under the same roof.
Edge computing’s optimal location: phones and laptops The ultimate edge location, though, will continue to be in the phones and laptops. Web app developers continue to leverage the power of browser-based storage while exploring more efficient ways to distribute software. WebASM is an emerging standard that can bring more powerful software to handsets and desktops without complicated installation or permissioning.
Computer scientists are also working at a theoretical level by redesigning their algorithms to be distributed to local machines. IBM, for instance, is building AI algorithms that can split the jobs up so the data does not need to move. When they're applied to data collected by handsets or other IoT devices, they can learn and adapt while synchronizing only essential data.
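The pattern IBM describes resembles federated learning: each device trains on data it never uploads, and only model updates are synchronized. A minimal sketch of the server-side aggregation step, with placeholder values:

# Sketch of federated averaging: devices train locally and share only
# weight updates, never raw data. The update values are placeholders.
import numpy as np

def federated_average(client_updates: list) -> np.ndarray:
    return np.mean(client_updates, axis=0)  # server-side aggregation

# Each handset computes its own update on local data it never uploads.
updates = [np.array([0.9, 1.1]), np.array([1.1, 0.9]), np.array([1.0, 1.0])]
global_weights = federated_average(updates)  # -> array([1.0, 1.0])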
This distributed buzzword is also more commonly found in debates about control. While the push by some to create a distributed web, sometimes called Web3, is driven more by political debates about power than practical concerns about latency, the movement is in the same general direction. Edge computing developers looking for speed and blockchain advocates looking for distributed algorithms will be tackling the same problems of splitting up control and blowing up centralized models.
This push will also be helped, perhaps unintentionally, by governments that battle to exert more and more control over an internet that was once relatively border-free.
Edge computing allows companies to localize computing and data storage inside political boundaries, simplifying compliance with a burgeoning regulatory thicket. AWS, for instance, talks seriously about adding city-level controls and embargoes to its CloudFront. Remember the age of the city-state in history class, when Athens and Sparta reigned? That model is returning as governments work to atomize the internet and hasten the growth of edge computing.
"
|
3,974 | 2,021 |
"2021 was a breakthrough year for AI | VentureBeat"
|
"https://venturebeat.com/ai/2021-was-a-breakthrough-year-for-ai"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 2021 was a breakthrough year for AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Enterprises continued to accelerate the adoption of AI and machine learning to solve product and business challenges and improve revenues in 2021. Meanwhile, AI startups have experienced significant growth, roping in major investments to improve their product offerings and meet the growing demand for AI solutions across sectors. In fact, data from CB Insights Research shows that while the number of equity funding deals in the global AI space this year is slightly lower than last year's (2,384 deals in 2021 versus 2,450 in 2020), the amount of capital invested has almost doubled to $68 billion.
As we head into 2022, here’s a quick look back at the milestones that shaped the AI space over the past 12 months.
January To start the year, OpenAI announced DALL-E , a multimodal AI system that generated images from text. The company asserted that DALL-E could manipulate and rearrange objects in generated imagery and also create things that don’t exist, like a cube with the texture of a porcupine or a chair that looks like an avocado.
Among other notable developments that month, the U.S. Department of Homeland Security tested AI to recognize masked faces; Uber researchers proposed a model that emphasized more positive and polite responses; Salesforce released a framework to test NLP model robustness; and AI models from Microsoft and Google surpassed human performance on the SuperGLUE language benchmark, designed to summarize AI research progress on a diverse set of language tasks. Facebook and NYU researchers also announced the development of a model that predicted how the condition of a COVID-19 patient might evolve over four days.
On the business side, multiple AI startups raised funding, including Lacework ($525M), TripActions ($155M), K Health ($132M), Harness ($115M), Workato ($110M), and iLobby ($100M).
February In February, Microsoft launched a custom neural voice in limited access and announced the details of Speller100, an AI system that checks spelling in over 100 languages, while Google released TensorFlow 3D to help enterprises develop and train models capable of understanding 3D scenes. Amazon launched Lookout, a computer vision service to detect defects in manufactured products. The month saw a significant number of equity funding rounds in the AI space, with the biggest ones going to data lakehouse startup Databricks ($1 billion), RPA startup UiPath ($750 million), autonomous trucking startup Plus ($200 million), SentinelOne ($150 million), and Locus Robotics ($150 million).
March March saw innovative AI announcements from giants such as Facebook, Microsoft, Nvidia, and IBM.
Facebook first announced an AI model with a billion parameters that can learn from any random group of images — without needing curation or annotation of any kind. Nvidia, together with Harvard, developed an AI toolkit called AtacWorks to lower the cost and time required for complicated genome analysis. Finally, IBM launched a cloud-based, AI-driven molecular design platform that automatically invents new molecular structures, and Microsoft launched Azure Percept, a platform of hardware and services aimed at simplifying the ways customers can use AI technologies at the edge. The biggest funding rounds of the month went to Dataminr ($475 million), PatSnap ($300 million), WorkFusion ($220 million), and Jumio ($150 million).
April In April, the European Commission announced draft rules for the use of AI, proposing a ban on applications involved in social scoring and strict safeguards for high-risk AI apps used in recruitment, critical infrastructure, credit scoring, migration, and law enforcement. The proposal also suggested that companies breaching the rules might face fines of up to 6% of their global revenue or 30 million euros ($36 million), whichever is higher.
Facebook claimed to have developed an AI model that predicted drug combinations to treat complex diseases, Grid.ai launched a platform to train AI models on the cloud, Cerebras launched an AI supercomputing processor with 2.6 trillion transistors, and Huawei trained the Chinese equivalent of GPT-3. On the funding side, SambaNova Systems, a company developing chips for AI workloads, got the biggest investment at $676 million. This was followed by ActiveCampaign ($240 million), Vectra AI ($130 million), and Gupshup ($100 million).
May Google held I/O 2021 and made some significant announcements. The biggest were the fourth generation of tensor processing units (TPUs) for AI and ML workloads, a language model for dialog applications called LaMDA, and the Vertex AI managed platform to help companies accelerate the deployment and maintenance of their AI models.
Yelp built an AI to identify spammy photos, AWS launched Redshift ML to enable model training with SQL, and Asapp released an action-based conversation dataset to help enterprises develop improved customer service AI. Redwood Software, a cloud-based business and IT automation solution provider, raised the biggest round of the month at $379 million, followed by Asapp ($120 million) and Sima.ai ($80 million).
June In June, IBM open-sourced Uncertainty Quantification 360, a toolkit focused on enabling AI to understand and communicate its uncertainty, while GitHub launched Copilot, an AI-powered pair programming tool to help developers write better code.
Then, following Amazon’s footsteps, Google Cloud introduced an AI-powered solution to detect defects in manufactured goods. Open AI, meanwhile, claimed to have mitigated the bias and toxicity of GPT-3 , and Mythic launched an AI processor that consumed 10 times less power than a typical system-on-chip or graphics processing unit (GPU). The biggest investment takers of the month were Gong ($250 million), Iterable ($200 million), Moveworks ($200 million), and Verbit ($157 million).
July Facebook open-sourced Droidlet, a platform for building robots leveraging NLP and computer vision, and Alphabet spun out Intrinsic, a new independent company building software tools for the industrial robotics space.
OpenAI, on the other hand, made headlines by announcing it had disbanded its robotics division and planned to shift focus toward areas where more data is readily available. The move came after years of research into machines that could learn to perform tasks like solving a Rubik's cube. Around the same time, DeepMind, OpenAI's major rival, announced AlphaFold 2, an AI system that performs the complicated task of predicting the shape of proteins.
On the business side, enterprise AI development platform DataRobot raised the biggest round ($300 million) of the month, followed by Gupshup ($240 million) and Untethered AI ($125 million).
August Tesla announced plans to launch an AI humanoid for performing repetitive tasks, OpenAI detailed an API to translate natural language into code, and Google announced the SoundStream neural codec to suppress noise and provide compressed high-quality audio.
Databricks raised $1.6 billion in funding, taking its valuation to $38 billion and bolstering its position against Snowflake, while Dataiku, which helps data scientists build their own predictive AI models, raised $400 million.
Other AI startups that raised major rounds during the same period were corporate spend management company Ramp ($300M), call center automation platform Talkdesk ($230M), travel-tech platform Hopper ($175M), and sales enablement platform Seismic ($170M).
September DeepMind claimed its AI-driven weather forecasting model performed better than conventional models, while researchers at Bloomberg Quant Research and Amazon Web Services claimed to have successfully trained a machine learning model that estimates the emissions of businesses that don’t disclose their emissions. Google also released a study that showed deep learning can detect abnormal chest x-rays with accuracy that matches that of professional radiologists.
The funding department was led by workflow automation platform Conexiom ($130 million), followed by ContractPodAI ($115 million).
October The North Atlantic Treaty Organization (NATO) — the military alliance of 30 countries that border the North Atlantic Ocean — announced in October that it would adopt an 18-point AI strategy and launch a future-proofing fund with investment up to $1 billion. Meanwhile, Intel open-sourced an AI-powered tool to spot bugs in code, IBM launched an AI service to assist with climate change analysis, and DeepMind acquired and open-sourced a robotics simulator called MuJoCo.
The biggest investments of the month were raised by Fabric ($200 million), Hailo ($136 million), and Domino Data Lab ($100 million).
November Nvidia and Amazon made major headlines, with the former officially jumping on the metaverse bandwagon and announcing Omniverse Avatar — a platform enabling users to leverage speech AI, computer vision, natural language understanding, and simulation to create avatars that recognize speech and communicate with human users within the real-world simulation and collaboration platforms. The tech giant also announced ReOpt, an AI-driven tool for supply chain route planning, as well as Modulus, a framework for developing "physics-grounded" AI models, at GTC 2021.
Amazon, meanwhile, debuted Graviton3 processors for AI inferencing, AWS RoboRunner to support robotics apps, and SageMaker Canvas, which enables users to create ML models without having to write any code. On the funding side, Nuro ($600M), Simpro ($350M), Cerebras ($250M), Verbit ($250M), Lusha ($205M), Workato ($200M), and Grammarly ($200M) raised the biggest rounds.
December The Hungarian government announced that it has teamed up with an Eastern European bank to develop an AI supercomputer that will be used to create a large language model of the Hungarian language. DeepMind, meanwhile, continued its work in the gaming segment and announced Player of Games, an AI system that can perform well at both perfect information games (like chess) and imperfect information games (like poker). In a separate development, the company also claimed that its AI technology helped uncover a new formula for a previously unsolved conjecture, as well as a connection between different areas of mathematics.
Tipalti, an automated accounts payable platform, bagged the biggest investment of the month at $270 million, followed by Dialpad ($170 million), SnapLogic ($165 million), and Smartling ($160 million).
"
|
3,975 | 2,020 |
"These nine unique products are all on sale for Black Friday | VentureBeat"
|
"https://venturebeat.com/commerce/these-nine-unique-products-are-all-on-sale-for-black-friday"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Deals These nine unique products are all on sale for Black Friday Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
This is it. The eye of the Black Friday storm. It may seem eerily quiet at times, but don’t fool yourself. Just inches away, Black Friday deals are whizzing past your brain at blinding speeds nonstop.
Just to prove it, here are nine Black Friday deals happening right now that you should probably hear about. Even though we scooped ‘em out of the air at random, they’re Black Friday deals — and that means major savings. In addition to their regular discounted prices, you can also take an additional 20, 40, or 70 percent off each of these items by entering the coupon codes below.
LaMetric Time No lie…this might be the smartest clock in the world. Along with the time, you can get current weather conditions, Google email notifications, stock quotes and a whole armada of social media alerts and stats for Facebook, Twitter, Instagram and more. A Red Dot Design Award winner, the LaMetric can control smart objects like Philips Hue light bulbs and a Nest thermostat right from this unit.
Get the LaMetric Time for $159.99 (Reg. $199) with promo code BFSAVE20.
xFyro xS2 Waterproof Wireless Earphones Want to wear your earbuds in the pool? That's all good with the xFyro xS2's. Along with brilliant sound, cutting-edge Bluetooth CSR 4.2 tech for a seamless connection, and a cool carrying tube that doubles as a power bank, these buds are also IP67-certified, meaning they're 100 percent waterproof and 100 percent dustproof. Meanwhile, the proprietary, noise-isolating silicone structure blocks out external sound so you can focus on your music.
Get the xFyro xS2 Waterproof Wireless Earphones for $63.99 (Reg. $249) with promo code BFSAVE20.
StackSkills Unlimited: Lifetime Access Whatever you need to learn, there's a good chance you can find it in StackSkills' library of more than 1,000 courses. With training in everything from coding and design to marketing, blockchain, and growth hacking, their catalog is a treasure trove for mastering all of today's most in-demand skills. If you want to take your career to the next level, chances are the knowledge to do that can be found in this archive.
Get the StackSkills Unlimited: Lifetime Access for $17.70 (Reg. $1,495) with promo code BFSAVE70.
Altec Lansing ALT-500 Turntable Catch the retro feels of an old-school turntable — with all of the ease and convenience of new-school technology. This turntable lets you play music three different ways: through your phone or another device, through a Bluetooth speaker, or with a direct RCA plugin. And of course, the turntable and built-in stereo speakers can have you spinning vinyl like back in the day.
Get the Altec Lansing ALT-500 Turntable for $51.98 (Reg. $150) with promo code BFSAVE20.
Whizlabs Online Certifications: Lifetime Membership For anyone looking to advance their professional career, few moves can make as immediate an impact as adding some advanced certifications to the resume. With a lifetime Whizlabs subscription, you’ll have access to their complete roster of online certification training in hot business areas like cloud computing, Java, big data, project management, Linux, AWS, digital marketing and more.
Get the Whizlabs Online Certifications: Lifetime Membership for $39 (Reg. $4,499) with promo code BFSAVE70.
Urbanears Rålis Portable Bluetooth 5.0 Speaker No, this isn't just some cheap-o knockaround Bluetooth speaker. This unit sports dual front and rear speakers for serving up a complete multidirectional soundscape. It also syncs easily to your Spotify or iTunes playlists and can dish out 20 hours of playback. And it doubles as a battery, with a built-in power bank capable of giving your phone an energy jump-start when needed.
Get the Urbanears Rålis Portable Bluetooth 5.0 Speaker for $87.99 (Reg. $199) with promo code BFSAVE20.
Retro TV Game Console 620 games. If you remember nothing else about this compact, stacked-to-the-rafters little retro gaming console, remember that. It’s packing 620 different games. Plug it into your TV, grab a controller, and before you know it, you’re firing, punching, blasting, and playing like 1998 never ended.
Get the Retro TV Game Console for $28.76 (Reg. $99) with promo code BFSAVE20.
Degoo Premium Mega Backup Plan: Lifetime Subscription Keep all your most valuable data backed up and protected with 15TB of premium Degoo cloud storage space. Between its full 256-bit AES encryption, automatic backup feature, fast transfer speeds and streamlined file sharing techniques, it’ll make you wonder why you keep anything on your smartphone or hard drive anymore. Even if you suffer a system meltdown, your data is eternally safe.
Get the Degoo Premium Mega Backup Plan: Lifetime Subscription for $89.99 (Reg. $4,320) with promo code BFSAVE40.
The Ultimate Raspberry Pi and ROS Robotics Developer Super Bundle Whether you want to know how to use a single-board Raspberry Pi microcomputer to create your own smart dustbin, a security camera or a bunch more cool stuff, this 15-course collection points the way. With nearly 40 hours of instruction, you’ll also get experience working with electronics and robotics to start crafting Internet of Things marvels, Arduino wizardry and more.
Get The Ultimate Raspberry Pi and ROS Robotics Developer Super Bundle for $15 (Reg. $2,391) with promo code BFSAVE70.
Prices subject to change.
VentureBeat Deals is a partnership between VentureBeat and StackCommerce. This post does not constitute editorial endorsement. If you have any questions about the products you see here or previous purchases, please contact StackCommerce support here.
"
|
3,976 | 2,020 |
"These 10 tech gifts are on sale for Cyber Monday | VentureBeat"
|
"https://venturebeat.com/commerce/these-10-tech-gifts-are-on-sale-for-cyber-monday"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Deals These 10 tech gifts are on sale for Cyber Monday Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Did you know 2020 marks the 15th anniversary of the first officially coined Cyber Monday? You’re forgiven if you didn’t get it a card or anything. You’ve probably been too busy with the trainwreck that is 2020 to keep that milestone top of mind.
Thankfully, Cyber Monday remembered you. As it’s done every year, it brought you a bunch of online deals on scores of different items, including the 10 listed here, all at appropriately gaudy Cyber Monday discounts.
Considering its generosity, you probably feel bad now that you forgot. It’s ok. Cyber Monday is forgiving. Do better next year though, alright? TRNDlabs Spectre Drone Here’s the perfect beginner drone for your favorite budding pilot. Easy-to-learn yet super-responsive 6-axis gyro controls will have flyers turning cool aerial tricks in minutes. The Spectre Drone is also packing an awesome HD camera, capable of capturing gorgeous mid-air photos and videos from up to 50 meters in the air.
Get the TRNDlabs Spectre Drone for $49.97 (Reg. $59.99).
PIQO Powerful 1080p Mini Projector With a 200 lumen bulb blasting an image up to 20 feet across onto any surface, this mini projector earns the right to call itself powerful. The PIQO offers full 1080p HD resolution, built-in speakers and complete WiFi and Bluetooth compatibility so you can stream movies, TV shows and other video right from any device instantly.
Get the PIQO Powerful 1080p Mini Projector for $214.97 (Reg. $799.99).
Altec Lansing ALT-500 Turntable If you thought records were so 20th century, let this minimally modern looking turntable change your mind. It plays all your mom and dad’s old vinyl as the needle delivers the music through 2 built-in stereo speakers. Or you can ditch the old ways and embrace the now, using the ALT-500’s Bluetooth connectivity to stream music from your favorite device or through a synced Bluetooth speaker.
Get the Altec Lansing ALT-500 Turntable for $64.97 (Reg. $150).
Hombli Smart Indoor Camera A home security camera is supposed to offer peace of mind. The Hombli delivers with crystal clear 1080p HD video, night vision, 2-way audio, and the ability to store all your surveillance footage on MicroSD or directly to your favorite cloud service. And when you connect through the Hombli app on your phone, you can see live pictures from inside your home via the web whenever you want to check in.
Get the Hombli Smart Indoor Camera for $34.97 (Reg. $99).
PhiGolf: Mobile and Home Smart Golf Simulator with Swing Stick Whether you practice like a demon or just want the virtual experience of playing the world’s best courses, PhiGolf is going to blow you away. When you can’t get out on the links for real, you can sync the app to your TV and play an entertaining round that uses a state-of-the-art sensor to control gameplay with your real golf swing. The killer graphics also transport you around the globe for tee times at all the greatest courses anywhere.
Get the PhiGolf: Mobile and Home Smart Golf Simulator with Swing Stick for $190 (Reg. $249) with the promo code GOLF10.
Black Box 1080p Dash Cam Here’s the video backup that could save your bacon in the event of a crash. With its own g-sensor, this dash cam captures video and audio in the moment of impact, ensuring you’ll always have a second set of eyes if you’re in an accident. You get crisp, sharp 1080p resolution day or night. And the compact design lets you record without obscuring your line of sight behind the wheel.
Get the Black Box 1080p Dash Cam for $17.97 (Reg. $149).
Mobile Pixels Duex Pro Portable Dual Monitor Double your productivity with this Indiegogo favorite, a portable, lightweight, 1080p secondary monitor that effortlessly attaches to your laptop. Either work on dual screens or flip the Duex Pro around and use it as a brilliant visual aid for presentations. It’s energy efficient, remarkably durable and, with this offer, available at a heck of a savings.
Get the Mobile Pixels Duex Pro Portable Dual Monitor for $180 (Reg. $249) with the promo code SAVEDUEXPRO.
ChronoWatch Multi-Function Smart Watch Who needs an Apple Watch? Sporting a lengthy list of the built-in features smartwatch customers want and demand, the ChronoWatch is pretty stacked itself. With 16 major functions, including everything from activity tracking, a sleep monitor, a blood pressure monitor, message and call notification, an alarm, and more, you’ll have what you need to keep your life and health on track.
Get the ChronoWatch Multi-Function Smart Watch for $34.97 (Reg. $199).
EarFun Air True Wireless BT 5 Earbuds The EarFun Air give more expensive earbuds a run for their money, powered by custom-built composite cellulose drivers for superior sound. Winners of both 2020 CES Innovation and iF Design awards, they sound incredible, cancel bleed-over noise, include easy, intuitive touch controls, and are completely water and sweat resistant. Plus, you can enjoy up to 35 hours of playtime from anywhere.
Get the EarFun Air True Wireless BT 5 Earbuds for $42.97 (Reg. $99).
Treblab X5 True Wireless Bluetooth Earbuds The X5s have the attention of shoppers, scoring an impressive 4.3 out of 5-star rating among Amazon purchasers. It might be because of the crisp, stereo-quality sound from the advanced 8.2mm drivers. Or it could be the expandable silicone tips that cut down on outside noise. Or possibly the killer call reception from the built-in CVC 8.0 mic. Put them all together and it’s definitely reason enough to consider them.
Get the Treblab X5 True Wireless Bluetooth Earbuds for $49.97 (Reg. $99).
Prices subject to change.
VentureBeat Deals is a partnership between VentureBeat and StackCommerce. This post does not constitute editorial endorsement. If you have any questions about the products you see here or previous purchases, please contact StackCommerce support here.
"
|
3,977 | 2,020 |
"Nine Black Friday deals on professional software and eLearning | VentureBeat"
|
"https://venturebeat.com/commerce/nine-black-friday-deals-on-professional-software-and-elearning"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Deals Nine Black Friday deals on professional software and eLearning Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Everywhere you look, Black Friday deals are falling out of the sky. We’re doing all we can to gather up as many of the best offers as we can find and put them in front of you now before it’s too late.
Unfortunately, most of these big savings will be gone after just a limited time. But for all the software, apps and learning packages included below, you can earn an extra 40 or even 70 percent off your final total by adding the promo codes below while checking out with your purchase.
The Complete SEO and Digital Mega Marketing Bundle If you want to understand how to boost web traffic and turn your brand into a digital powerhouse, this 15-course collection can make it happen. With more than 90 hours of training, these courses explain copywriting, affiliate marketing, email campaigns, Facebook advertising, and more. You’ll even learn how to use Snapchat to your business advantage.
Get The Complete SEO and Digital Mega Marketing Bundle for $14.70 (Reg. $2,330) with promo code BFSAVE70.
XSplit VCam: Lifetime Subscription With XSplit, you can instantly replace the backgrounds in your webcam videos, creating a professional-level effect without expensive green screens or complicated lighting arrays. XSplit also works with all the big live streaming services to make your podcasts, vlogs, game streaming, talk shows, video calls and more feel a lot more professional.
Get the XSplit VCam: Lifetime Subscription for $11.99 (Reg. $49) with promo code BFSAVE40.
Whizlabs Online Certifications: Lifetime Membership There’s always room to learn more — and this access to the entire archive of Whizlabs’ vast online certification coursework can unlock it all. Whizlabs has helped more than 3 million professionals grasp subjects like cloud computing, Java, big data, project management, Agile, Linux, CCNA, and digital marketing.
Get the Whizlabs Online Certifications: Lifetime Membership for $39.99 (Reg. $4,499) with promo code BFSAVE70.
Mashvisor: Lifetime Subscription Mashvisor pulls together all the available real estate insider data on for-sale properties into one place, then assesses which ones could offer the biggest returns as rental or Airbnb units. Legwork that used to take investors months to dig up can be found in under 15 minutes with Mashvisor.
Get the Mashvisor: Lifetime Subscription for $23.99 (Reg. $1,499) with promo code BFSAVE40.
The Wall Street Survival and Stock Trading Guide Bundle Over eight courses, you’ll truly get inside how Wall Street works and find the methods for creating profit through day trading. After this training, even beginners will know how to start trading stocks, handle technical analysis, read chart indicators, and even grasp the psychology of trading that can help you make the smartest possible decisions.
Get The Wall Street Survival and Stock Trading Guide Bundle for $9 (Reg. $1,600) with promo code BFSAVE70.
Big Think Edge Expert-Taught Lectures: Lifetime Subscription Big Think Edge gets some of the world’s greatest experts to talk about how they reached the pinnacle of their craft in this amazing video series. You can explore video lessons from more than 150 luminaries, including Malcolm Gladwell, Elon Musk, and Arianna Huffington. Everyone from Ivy League professors to entrepreneurs to Nobel Prize winners explain emotional intelligence, problem-solving, critical thinking and more to help you fuel your personal and professional growth.
Get the Big Think Edge Expert-Taught Lectures: Lifetime Subscription for $48 (Reg. $250) with promo code BFSAVE70.
Haroun Education Ventures MBA Degree Program Over more than 400 hours, award-winning MBA professor Chris Haroun takes you to business school…only better. This venture capitalist and Goldman Sachs veteran offers up insight into everything those big academic programs skip, like how to network, how to find customers, how to write business documents, and how to start and grow a business.
Get the Haroun Education Ventures MBA Degree Program for $119.70 (Reg. $499) with promo code BFSAVE70.
WhiteSmoke Grammar Checker: Lifetime Subscription Don’t let the grammar cops catch up with you. WhiteSmoke can help get your writing into top-notch shape, checking your work for any spelling, punctuation and style mistakes that you definitely don’t want spilling into a work email or formal document. It even translates into over 50 languages.
Get the WhiteSmoke Grammar Checker: Lifetime Subscription for $23.99 (Reg. $600) with promo code BFSAVE40.
Knowable Audio Learning Platform: Lifetime Subscription You can learn life-changing skills in these audio courses led by over 200 leading experts in their fields. One of Google Play’s “New Apps We Love,” Knowable’s roster of visionaries makes learning how to advance at work, boost your productivity, or improve your relationships as easy as binging a podcast. New courses are added every week in this mobile-friendly, audio-first format that you can take with you anywhere.
Get the Knowable Audio Learning Platform: Lifetime Subscription for $35.99 (Reg. $249) with promo code BFSAVE40.
Prices subject to change.
VentureBeat Deals is a partnership between VentureBeat and StackCommerce. This post does not constitute editorial endorsement. If you have any questions about the products you see here or previous purchases, please contact StackCommerce support here.
"
|
3,978 | 2,020 |
"Get a lifetime of Rosetta Stone and more on sale for Black Friday | VentureBeat"
|
"https://venturebeat.com/commerce/get-a-lifetime-of-rosetta-stone-and-more-on-sale-for-black-friday"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Deals Get a lifetime of Rosetta Stone and more on sale for Black Friday Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
While everyone is pleased with the news of vaccines and other promising treatments in the fight against COVID-19, the reality is that some of the changes we’ve all undergone this past year will likely live on indefinitely.
So while we’d like to think that masks and social distancing will be a thing of the past as soon as everyone gets inoculated, those measures will likely remain a presence for some time to come.
And while the physical precautions continue, so should our pursuit of better ways to spend our time alone. The training in The Social Distancing Lifetime Subscription ft. Rosetta Stone package should not only help broaden your mind, but it might even help put you in touch with the rest of the world during this isolating time.
The star of this collection is Rosetta Stone , one of the biggest and brightest names in online language learning. In fact, it really doesn’t get any bigger than the system that won PCMag’s Editors’ Choice Award for Best Language-Learning Software five years in a row.
Rosetta Stone’s approach is immersion. Much like a real trip abroad, Rosetta Stone makes its digital education an online version of that immersive experience. Across all 24 of the languages it offers, Rosetta Stone’s interactive lessons help students naturally pick up the vocabulary, grammar, and written rules of their new language.
Once you’re starting to speak in your new tongue, that’s when the TruAccent speech recognition technology kicks in, analyzing your speech and offering an instant assessment of what you’re doing well and what still needs improvement.
As you sharpen up your language skills, you can also read up on that culture — and pretty much everything else — with a lifetime subscription to the 12min Premium Micro Book Library.
The editors at 12min have condensed hundreds of non-fiction best-sellers and other important texts down to their cores, extracting key takeaways and presenting that work to you in just 12 minutes. Whether it’s in text or audio form, you can download any book in the 12min library and take it all in whenever you have a short pause in your day.
And if you’re going to be online, you better be protected — so that’s why you need a lifetime of KeepSolid VPN Unlimited coverage. KeepSolid has long been an elite VPN service provider, now with more than 10 million customers worldwide. Through its heavily shielded and encrypted web connection, you can surf the web and go anywhere online with complete anonymity. Your IP address and all identifying details about you stay hidden and secure from online crooks and cyber snoops. Thanks to no speed or bandwidth limits, you can also enjoy ultra-fast connections, even to geo-restricted content like streaming services blocked in certain parts of the world.
To get all three services for life would usually cost nearly $750, but with this package, you can get it all for only $189. Then use the coupon code GETSOCIAL10 during checkout for an additional 10 percent off.
Prices subject to change.
VentureBeat Deals is a partnership between VentureBeat and StackCommerce. This post does not constitute editorial endorsement. If you have any questions about the products you see here or previous purchases, please contact StackCommerce support here.
"
|
3,979 | 2,020 |
"Zebra's enterprise AR glasses add XMReality Remote Guidance software | VentureBeat"
|
"https://venturebeat.com/business/zebras-enterprise-ar-glasses-add-xmreality-remote-guidance-software"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Zebra’s enterprise AR glasses add XMReality Remote Guidance software Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Augmented reality headsets are becoming important tools for enterprises, enabling frontline workers to instantly access reference data as they’re in the field. Today, industrial AR headset maker Zebra announced that it will upgrade its smart glasses with Remote Guidance software developed by XMReality , a Swedish knowledge sharing company with nearly 100 enterprise customers in 60 countries.
Used on smartphones, XMReality’s software enables a remote technician to not only see what a customer is seeing, but also combine the customer’s view with the technician’s real-time “hands overlays” and/or drawings, providing visual directions on how to perform a diagnosis or repair without the need for an in-person visit. Wearing a Zebra headset, technicians could use the same remote software without having to hold a phone in one hand to film the other, and could skip on-site visits by being guided by more experienced experts located elsewhere.
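To make the overlay step concrete, here is a toy sketch in Python using OpenCV that alpha-blends a technician’s annotation layer onto one frame of a customer’s camera feed. It is purely illustrative: the file names are placeholders, and this is not XMReality’s actual implementation.

# Toy illustration of remote-guidance compositing: blend a technician's
# "hands overlay" (a PNG with transparency) onto the customer's camera frame.
# NOT XMReality's implementation; frame.jpg and hands_rgba.png are placeholders.
import cv2
import numpy as np

frame = cv2.imread("frame.jpg")  # customer's camera view (BGR)
overlay = cv2.imread("hands_rgba.png", cv2.IMREAD_UNCHANGED)  # BGRA overlay
overlay = cv2.resize(overlay, (frame.shape[1], frame.shape[0]))

# Alpha-blend: where the overlay is opaque, show the technician's annotations.
alpha = overlay[:, :, 3:4].astype(np.float32) / 255.0
blended = (overlay[:, :, :3] * alpha + frame * (1.0 - alpha)).astype(np.uint8)

cv2.imwrite("guided_view.jpg", blended)  # the composited view shown to the customer

A production system would repeat this per frame over a low-latency video link, but the compositing idea is the same.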
The Zebra-XMReality partnership is significant for technical decision makers because it demonstrates how wearable augmented reality displays are increasingly connecting frontline workers to enterprise data hubs, placing both human expertise and warehoused data within the grasp of remote employees. XMReality’s desktop-scale camera system can turn any Windows computer into a guidance station to help remote workers, and its software also supports select industrial AR glasses from Realwear and Vuzix.
While XMReality’s remote guidance software will also work with Zebra handheld devices, the deal is notably supporting the HD4000 Enterprise HMD , a head-mounted display that weighs only 1.06 ounces and is designed for warehouse management, manufacturing, field mobility, and retail applications. Equipped with a 9-axis head tracking sensor for sophisticated user tracking, the HD4000 has a color screen that can display text, graphics, and video content, and it uses a 5-megapixel camera to capture imagery from the field.
Illinois-based Zebra is best known globally as a leader in bar code reading devices, but it has been involved in RFID-based enterprise asset-tracking solutions for years, acquiring Motorola’s enterprise solutions business in 2014. The company’s products are used by over 10,000 channel partners in 45 countries.
"
|
3,980 | 2,020 |
"TinyBuild invests $3 million in Secret Neighbor developer Hologryph | VentureBeat"
|
"https://venturebeat.com/business/tinybuild-invests-3-million-in-secret-neighbor-developer-hologryph"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages TinyBuild invests $3 million in Secret Neighbor developer Hologryph Share on Facebook Share on X Share on LinkedIn Hologryph created the multiplayer game Secret Neighbor.
Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Game publisher TinyBuild announced that it has invested $3 million into game studio Hologryph , the development team behind the hit multiplayer game Secret Neighbor. The investment grants TinyBuild a majority stake in the studio, and the funds will go toward supporting a soon-to-be-announced new intellectual property for PlayStation 5 and Xbox Series X/S.
The Seattle-based publisher also announced some fresh figures for the Hello Neighbor franchise: the series has exceeded 60 million downloads across PC, Xbox One, PS4, Google Stadia, Android and iOS, and Nintendo Switch.
TinyBuild CEO Alex Nichiporchik said in an email to GamesBeat that the investment will be used to expand the Hologryph team as it works on a next-generation console game.
TinyBuild said it is a firm believer in value-based subscription models. The Hello Neighbor franchise is available on Xbox Game Pass, and the Hologryph-developed Secret Neighbor has had more than 3.5 million downloads. Hologryph recently worked with TinyBuild as a co-developer on Party Hard 2, and it also created Hello Neighbor’s multiplayer spinoff, Secret Neighbor.
Above: Hologryph founders (left to right): Serhiy Grinets, CTO; Eugene Dranov, art director; Maksym Khrapai, CEO; and Andriy Moskal, art lead.
The company is now working on Hello Neighbor 2, which was revealed during Geoff Keighley’s Summer Games Fest as a PC/Xbox Series X exclusive title with a 2021 release date.
A new entry to the franchise, Hello Engineer , was announced in October as a Google Stadia exclusive.
The Lviv, Ukraine-based Hologryph has 11 employees, and it is expected to expand to more than 20. TinyBuild has more than 100 employees and it has raised $18.75 million.
TinyBuild also recently acquired the Dynamic Pixels development team behind Hello Neighbor, a horror game about a neighbor with a secret in his house.
"
|
3,981 | 2,020 |
"Supercell invests $2.8 million in 2Up, a co-op mobile game studio | VentureBeat"
|
"https://venturebeat.com/business/supercell-invests-2-8-million-in-2up-a-co-op-mobile-game-studio"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Supercell invests $2.8 million in 2Up, a co-op mobile game studio Share on Facebook Share on X Share on LinkedIn 2Up Games Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Supercell is investing $2.8 million in New Zealand game studio 2Up Games , which is working on a co-op mobile game.
This is further proof of the funding frenzy games have enjoyed during the pandemic, driven in large part by heightened use. In the past nine months, game studios have seen more than 100 investments.
Supercell has been among the most aggressive mobile game investors, as the Helsinki, Finland-based company is still printing money with its Clash Royale and Clash of Clans games.
2Up cofounder Joe Raeburn said in an interview with GamesBeat that the company strongly believes co-op games can make the world a better place, particularly as people struggle with isolation during lockdown. He also believes humans aren’t always great at cooperation and need more practice.
Raeburn started the company six months ago with former Magic Leap developer Tim Knauf, and the two didn’t meet in person for the first five months they worked together.
Raeburn said his first co-op memory was playing Silkworm with his best friend on the Amiga computer, while Knauf has enjoyed multiplayer point-and-click adventure games. Knauf previously founded Launching Pad Games, and he worked with Magic Leap and Weta Workshop on Dr. Grordbort’s Invaders, which was one of my favorite games of 2018.
“Tim and I came together talking about our shared love of co-op games,” Raeburn said. “But too often the games are too short.” This started them thinking about games designed for co-op from the ground up.
“So we came up with this focus on real-time co-op games, ones that you can play for years,” Raeburn said. “We want to bring back that sense of connecting with someone through an experience you share. You feel like two humans that have done something together, as opposed to humans that have battled.” Above: 2Up Games is based in New Zealand.
Supercell developer relations lead Jaakko Harlas noted that this is the first time Supercell has invested in a New Zealand startup. Raeburn said the company is hiring now and that thanks to the pandemic it can hire people anywhere in the world. He noted that another bright spot is that game companies no longer need to have a head office that can lord it over their satellite offices.
Raeburn said 2Up has developed a prototype idea but isn’t ready to talk about it yet. The company has two full-time people and two contractors and is outsourcing art for the game to external companies. Raeburn worked as an early employee and game creator at Space Ape Games, maker of Samurai Siege, before the company was acquired by Supercell. About a year ago, Raeburn made the move back to his native New Zealand. During lockdown, he decided to start a game studio, which was how he met Knauf.
“It’s early days, and the one thing I can say is that real-time games are quite difficult,” Raeburn said. “One of the most important things we’ve learned is allowing players to have a pause in the action so that they can look and see what their partner is doing and what they need. Because when you do that, you see the situation they are in. If it’s nonstop action all of the time, you don’t even know what your partner is doing.” Raeburn said the company talked to a few game funds but decided Supercell offered the biggest opportunity.
"
|
3,982 | 2,020 |
"Studio MDHR will delay Cuphead's The Delicious Last Course DLC | VentureBeat"
|
"https://venturebeat.com/business/studio-mdhr-will-delay-cupheads-the-delicious-last-course-dlc"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Studio MDHR will delay Cuphead’s The Delicious Last Course DLC Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Studio MDHR announced in a tweet that it has delayed the release of its Cuphead downloadable content, dubbed The Delicious Last Course.
The Canadian game studio said it made the difficult decision amid the pandemic to delay the DLC to an unspecified date. In a letter, cofounders Chad and Jared Moldenhauer said, “In true Studio MDHR fashion, we aren’t content for this final chapter to be anything less than our best work. Throughout development, we’ve challenged ourselves to put everything we learned from making Cuphead into the quality of The Delicious Last Course’s animation, design, and music.” But the brothers said that meeting this standard has been extremely challenging in the pandemic.
“Rather than compromise on our vision in response to COVID, we’ve made the difficult decision to push back the release of The Delicious Last Course until we are confident it will delight the Cuphead community the way we feel it should.” Previously, the studio had simply said that the DLC would be coming out in 2020. In the DLC, the Cuphead brothers will be joined by Ms. Chalice.
Here’s the tweet: In the wake of the ongoing global pandemic affecting so many, we have made the difficult decision to push back the release of The Delicious Last Course. For our wonderful Cuphead community, we've prepared a letter from Studio MDHR founders Chad & Jared Moldenhauer to share more.
pic.twitter.com/XiU57Wcn1y — Studio MDHR (@StudioMDHR) November 25, 2020 Humble origins Cuphead was a nostalgic run-and-gun platformer that made a big splash when it debuted in 2017. I was part of that because I found it very difficult to play, and the entire internet laughed at me for that. While everybody had a great laugh at my struggles with the tutorial and first level, the studio leaders were quite kind to me with their words of support. And a pretty large percentage of people never finished the game. (Only about 8% of players had completed it a few months after the game came out.) But the difficulty is part of its appeal. By the second anniversary of the launch, Cuphead had sold more than 5 million copies, and it reached 6 million by the time of its PlayStation 4 launch in July.
Cuphead is a game that harkens back to the old-style console games that were hard to play. Its art style was like a 1930s cartoon, with blaring music and a story about Cuphead and his brother, Mugman. Their journey was a series of surreal boss fights, all done in the name of paying off a debt to the Devil. The game sold millions, and part of its appeal was the rags-to-riches story of the Moldenhauer brothers and Maja Moldenhauer, who oversaw the business side of the studio. With such great success on the first game, the studio has earned the right to make its own decisions on the right time to launch. And it has more than enough money to finance the delay.
It was the last thing you might expect, considering the game’s humble origins. Brothers Chad and Jared grew up playing old-school video games, and they loved platformers. They also took after their father, enjoying old cartoons like Disney’s Silly Symphonies, Chad said. They always dreamed of making a video game. When they finished playing a game, they would talk about it: “Wouldn’t it be great if this happened instead?” But Chad was a construction worker for his dad’s company, and then he moved on in 2003 to become a web designer. He decided with Jared, who worked in construction, to try to make a game part time. Chad’s wife Maja, who had a background in biomedical physics and finance, was fully supportive. During her maternity leave, she joined in. Chad taught himself animation. Microsoft saw their work and decided to publish it under its ID@Xbox indie game label, offering marketing support. By the time it was done, more than 25 people had worked on it. The game took off organically, and I certainly learned that there are a ton of people who appreciate a challenging game.
Netflix said in 2019 that it was creating an animated video series based on Cuphead.
Perhaps this delay will give me more time to beat the original Cuphead. But don’t count on it. I’ve come to terms with the fact that there are some games I don’t have the skill to play. GamesBeat’s Mike Minotti, however, beat it soundly for his review.
"
|
3,983 | 2,020 |
"Slack could quickly become Salesforce’s golden goose | VentureBeat"
|
"https://venturebeat.com/business/slack-could-quickly-become-salesforces-golden-goose"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Slack could quickly become Salesforce’s golden goose Share on Facebook Share on X Share on LinkedIn A view of Slack HQ from Salesforce Park in San Francisco Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Last week, news broke that Salesforce was thought to be in advanced talks to acquire Slack.
This inevitably fueled much excitement and debate, not least because of the scale of the potential acquisition. Slack’s market capitalization was about $17 billion before the news broke and jumped to almost $23 billion soon after.
And with Salesforce’s earnings call scheduled for tomorrow, we could be in for an official announcement soon.
There’s no doubt this would be a major acquisition for Salesforce. It wouldn’t be its first of this scale — Salesforce acquired Tableau in 2019 for over $15 billion in an all-stock deal — but there would need to be a rock-solid argument for such a decision.
Why would Salesforce buy Slack? To maintain the high rate of growth that it has achieved for the last few years, Salesforce has been investing in various initiatives that enable it to expand its reach in customer organizations. Some of this has been happening organically, for example improving its cross-selling and upselling within each customer. However, acquisitions have played a significant role here too.
Salesforce’s most recent acquisitions, MuleSoft and Tableau, were both designed to build out the Salesforce platform, enabling it not just to become embedded in CRM processes but to extend across customers’ entire operations. Despite these growth efforts, the bulk of Salesforce’s current applications portfolio doesn’t give it significant reach beyond sales and marketing teams.
The company wants its applications to be critical to every employee within an organization; to be the place they go to get their work done. And it has long been eyeing the collaboration software space as a way to achieve this.
It launched Salesforce Chatter in 2010, followed a couple of years later by Community Cloud, but neither really provided that extended reach outside sales. Salesforce’s $750 million acquisition of real-time document creation company Quip in 2016 was another step in this direction, but although the product has found a strong purpose in enabling work in the CRM environment, Quip hasn’t significantly expanded Salesforce’s reach.
Acquiring Slack, however, would finally give Salesforce the boost it’s been looking for. Although Slack was initially successful in tech-savvy IT teams, usage has spread significantly over the last couple of years — something that has accelerated dramatically with the increase in remote working due to the pandemic. The Slack team also has a good understanding of how to drive adoption and business change within customers, which would augment Salesforce’s customer success organization.
Plus Slack has made investments in areas that would be interesting to Salesforce. It has a large developer community, and is strong in bots and app development. And, much like Salesforce, Slack has been investing in low-code technology with its Workflow Builder tool, which enables individual, non-technical employees to automate day-to-day tasks. Finally, Slack Connect enables B2B collaboration and is gearing up to allow the creation of a B2B business network, which would be another great opportunity for Salesforce. Alongside Slack’s extensive list of customers, each of these areas provides differentiation and growth opportunity that could underpin a potential acquisition.
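To make the bots-and-apps point concrete, here is a minimal sketch of a Slack bot using Slack’s official Python SDK (slack_sdk). The token, channel name, and message text are placeholders for illustration; a real app would also need the chat:write OAuth scope and an invitation to the channel.

# Minimal Slack bot sketch using the official slack_sdk package.
# SLACK_BOT_TOKEN and the channel name below are placeholders, not real values.
import os
from slack_sdk import WebClient
from slack_sdk.errors import SlackApiError

client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

try:
    # Post a message to a channel the bot has been invited to.
    response = client.chat_postMessage(
        channel="#deal-room",
        text="Daily pipeline summary: 3 opportunities moved to Closed Won.",
    )
    print("Posted message at ts:", response["ts"])
except SlackApiError as e:
    print("Slack API error:", e.response["error"])

Workflow Builder targets the same kind of automation without any code at all, which is exactly the low-code angle that would appeal to Salesforce.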
Why would Slack agree to a deal? There have been rumors about tech companies wanting to acquire Slack for several years — arch-rival Microsoft was reportedly considering a purchase back in 2016 — for a much lower price, needless to say. However, the deal never came to anything. Microsoft decided to build its own competing solution, and Slack continued to grow at an astonishing pace.
Things are a bit different now.
Slack’s revenue growth has been starting to slow over the past 18 months, with its fiscal 2021 (which ends January 31) expected to show about 38% growth, versus 82% in fiscal 2019. It has seen a significant boost in 2020 in terms of adoption of the Slack application, with paid customers up 20,000 in the past six months, compared with an increase of 15,000 in the whole of 2019. Slack has also seen its number of large-ticket customers (with over $100,000 in trailing 12-month revenue) double since 2019.
Despite this, Slack has disappointed investors who were hoping for Zoom-like revenue growth in response to the pandemic. Although Slack and Zoom were the same size a year ago, Zoom is expecting to have grown 280% this year, dwarfing Slack’s 38% guidance.
Slack is also facing ever-stronger competition from Microsoft Teams. Slack still has some considerable points of differentiation over Teams, not least the two areas I highlighted above. But the effects of the pandemic and the shift to remote working have made the competition with Microsoft even tougher, especially given Microsoft Teams’ strength in video meetings, an area that has become business critical this year.
With strong ambitions, Slack now needs a way to step up its market reach and product investment, but doing that as an independent can be very challenging. Salesforce could provide a great platform for Slack and has plenty of experience and success in integrating major acquisitions, which would give Slack’s customers confidence if the purchase does go ahead.
If it does acquire Slack, Salesforce would likely continue to operate Slack as an independent business unit, in the same way it has done with Tableau and MuleSoft. An acquisition would inevitably mean much deeper integration across the breadth of Salesforce’s portfolio, which would only be positive for the many Slack customers that are already Salesforce customers. Slack already integrates with Salesforce in multiple ways, including with Sales Cloud and Service Cloud via Chatter, and with Quip for document collaboration. However, there’s scope for integration with the rest of the portfolio, and the Chatter capabilities could even be completely replaced by Slack.
An opportunity to team up against Microsoft Overall, if they can agree on a deal, this could be a very positive and exciting move for both Slack and Salesforce, and one that would see them joining up against Microsoft — not just in business applications, where Microsoft is increasingly challenging Salesforce with its Dynamics 365 business, but in employee productivity and collaboration as well. One thing’s for sure, with such a big price tag, the pressure is on to make sure the impact on growth lives up to investors’ expectations.
Angela Ashenden is Principal Analyst of Workplace Transformation at CCS Insight.
"
|
3,984 | 2,020 |
"Salesforce in talks to acquire Slack | VentureBeat"
|
"https://venturebeat.com/business/salesforce-in-talks-to-acquire-slack"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Salesforce in talks to acquire Slack Share on Facebook Share on X Share on LinkedIn Slack logo at Slush 2018 conference in Helsinki, Finland Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
(Reuters) — Cloud-based software company Salesforce.com is in talks to acquire workplace messaging app Slack as it seeks to expand its offerings to businesses, people familiar with the matter said on Wednesday.
Salesforce’s bid comes as Slack struggles to fully capitalize on the switch to remote working during the COVID-19 pandemic in the face of fierce competition from Microsoft’s Teams and other workplace apps.
Slack shares ended trading on Tuesday at $29.57, well below the $42 high they reached on their first day of trading last year.
Salesforce sees the potential acquisition as a logical extension of its enterprise offerings, the sources said. The price it is offering for Slack was not disclosed, though one of the sources said Salesforce would pay cash for the deal, rather than using its stock as currency.
If the negotiations conclude successfully, a deal could be announced before Slack reports quarterly earnings on December 9, one of the sources added.
Neither Slack nor Salesforce responded to requests for comment.
Slack shares jumped 24% to $36.58, giving the company a market capitalization of $21 billion, while Salesforce fell 2.7% after the Wall Street Journal first reported that the two companies had held deal talks.
Slack has benefited from companies relying more on information technology systems to keep their workers connected during the pandemic.
Its app has been installed about 12.6 million times so far this year, up approximately 50% from the same period in 2019, according to analytics firm Sensor Tower.
But the economic fallout of the pandemic has forced Slack to give discounts and payment concessions to many of its customers who have had to make cost cuts.
Seeking to save money, some companies have also been switching to Teams, which comes with many of Microsoft’s office software packages.
“I think Microsoft Teams has been able to capitalize on the opportunity better than Slack, partly because they give it away for free as a bundle,” said Rishi Jaluria, an analyst at research firm DA Davidson and Co.
“Now Slack realizes that they might be able to get greater penetration as part of a larger company.” Slack’s billing growth, a key indicator of future revenue, slowed in the three months leading up to the end of July.
Salesforce meanwhile has been thriving financially during the pandemic. It raised its annual revenue forecast in August as the pandemic spurred demand for its online business software that supports remote work and commerce.
Salesforce has been beefing up its cloud business through acquisitions and had spent more than $16 billion last year to fend off competition from rivals such as Oracle and German competitor SAP.
(Reporting by Greg Roumeliotis and Krystal Hu in New York, additional reporting by Subrat Patnaik and Eva Mathews in Bengaluru. Editing by Arun Koyyur and Jan Harvey.)
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
3,985 | 2,020 |
"SAB Biotherapeutics Awarded $57.5M from BARDA and U.S. Department of Defense for Manufacturing of SAB-185 for the Treatment of COVID-19 | VentureBeat"
|
"https://venturebeat.com/business/sab-biotherapeutics-awarded-57-5m-from-barda-and-u-s-department-of-defense-for-manufacturing-of-sab-185-for-the-treatment-of-covid-19"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release SAB Biotherapeutics Awarded $57.5M from BARDA and U.S. Department of Defense for Manufacturing of SAB-185 for the Treatment of COVID-19 Share on Facebook Share on X Share on LinkedIn SIOUX FALLS, S.D.–(BUSINESS WIRE)–November 30, 2020– SAB Biotherapeutics (SAB), a clinical stage biopharmaceutical company developing a novel immunotherapy platform to produce specifically targeted, high-potency, fully human polyclonal antibodies without the need for human serum, today announced that, as part of Operation Warp Speed, the Biomedical Advanced Research and Development Authority (BARDA), part of the Office of the Assistant Secretary for Preparedness and Response at the U.S. Department of Health and Human Services, and the Department of Defense Joint Program Executive Office for Chemical, Biological, Radiological and Nuclear Defense (JPEO-CBRND) have awarded SAB $57.5 million in expanded scope for its DiversitAb™ Rapid Response Antibody Program contract for the manufacturing of SAB-185, the company’s clinical stage therapeutic candidate for COVID-19.
“We are pleased to be awarded this additional contract scope, which we believe is a reflection of the compelling science that supports SAB-185’s potential in COVID-19, as well as the urgent need for treatment options amidst the global pandemic. Previous data has indicated that this human polyclonal antibody therapeutic has potent neutralizing activity against SARS-CoV-2, potentially driving more available doses, giving us the confidence to continue to progress our clinical development programs for SAB-185,” said Eddie J. Sullivan, PhD, co-founder, president and CEO of SAB Biotherapeutics. “This manufacturing agreement with BARDA and the Department of Defense supports our vision of bringing a novel, first-of-its-kind human polyclonal antibody therapeutic candidate for COVID-19 to patients, and I am proud of the work by our team and appreciate the continued support from BARDA and JPEO as we continue to rapidly advance SAB-185.” SAB-185 is currently being tested as a COVID-19 therapeutic in an ongoing Phase 1 trial in healthy volunteers and an ongoing Phase Ib trial in patients with mild or moderate COVID-19. SAB has leveraged its expertise to develop scalable manufacturing capabilities to support clinical activities, and continues to increase capacities in working with contract manufacturing organizations.
About SAB-185
SAB-185 is a fully-human, specifically-targeted and broadly neutralizing polyclonal antibody therapeutic candidate for COVID-19. The therapeutic was developed from SAB’s novel proprietary DiversitAb™ Rapid Response Antibody Program. SAB filed the Investigational New Drug (IND) application and produced the initial clinical doses in just 98 days from program initiation. The novel therapeutic has shown neutralization of both the Munich and Washington strains of mutated virus in preclinical studies. Preclinical data has also demonstrated SAB-185 to be significantly more potent than human-derived convalescent plasma.
About SAB Biotherapeutics, Inc.
SAB Biotherapeutics, Inc. (SAB) is a clinical-stage, biopharmaceutical company advancing a new class of immunotherapies leveraging fully human polyclonal antibodies. Utilizing some of the most complex genetic engineering and antibody science in the world, SAB has developed the only platform that can rapidly produce natural, specifically-targeted, high-potency, human polyclonal immunotherapies at commercial scale. SAB-185, a fully-human polyclonal antibody therapeutic candidate for COVID-19, is being developed with initial funding supported by the Biomedical Advanced Research Development Authority (BARDA), part of the Assistant Secretary for Preparedness and Response (ASPR) at the U.S. Department of Health and Human Services and the Department of Defense (DoD) Joint Program Executive Office for Chemical, Biological, Radiological, and Nuclear Defense (JPEO-CBRND) Joint Project Lead for Enabling Biotechnologies (JPL-EB). In addition to COVID-19, the company’s pipeline also includes programs in Type 1 diabetes, organ transplant and influenza. For more information visit: www.sabbiotherapeutics.com or follow @SABBantibody on Twitter.
View source version on businesswire.com: https://www.businesswire.com/news/home/20201130005547/en/
Melissa Ullerich, Tel: 605-695-8350, [email protected]
"
|
3,986 | 2,020 |
"Robotics researchers propose AI that locates and safely moves items on shelves | VentureBeat"
|
"https://venturebeat.com/business/robotics-researchers-propose-ai-that-locates-items-on-shelves-and-moves-objects-without-tipping-them"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Robotics researchers propose AI that locates and safely moves items on shelves Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
A pair of new robotics studies from Google and the University of California, Berkeley propose ways of finding occluded objects on shelves and solving “contact-rich” manipulation tasks like moving objects across a table. The UC Berkeley research introduces Lateral Access maXimal Reduction of occupancY support Area (LAX-RAY), a system that predicts a target object’s location, even when only a portion of that object is visible. As for the Google-coauthored paper, it proposes Contact-aware Online COntext Inference (COCOI), which aims to embed the dynamics properties of physical things in an easy-to-use framework.
While researchers have explored the robotics problem of searching for objects in clutter for quite some time, settings like shelves, cabinets, and closets are a less-studied area, despite their wide applicability. (For example, a service robot at a pharmacy might need to find supplies from a medical cabinet.) Contact-rich manipulation problems are just as ubiquitous in the physical world, and humans have developed the ability to manipulate objects of various shapes and properties in complex environments. But robots struggle with these tasks due to the challenges inherent in comprehending high-dimensional perception and physics.
The UC Berkeley researchers, working out of the university’s AUTOLab department, focused on the challenge of finding occluded target objects in “lateral access environments,” or shelves. The LAX-RAY system comprises three lateral access mechanical search policies. Called “Uniform,” “Distribution Area Reduction (DAR),” and “Distribution Area Reduction over ‘n’ steps (DER-n),” they compute actions to reveal occluded target objects stored on shelves. To test the performance of these policies, the coauthors leveraged an open framework — The First Order Shelf Simulator (FOSS) — to generate 800 random shelf environments of varying difficulty. Then they deployed LAX-RAY to a physical shelf with a Fetch robot and an embedded depth-sensing camera, measuring whether the policies could figure out the locations of objects accurately enough to have the robot push those objects.
The researchers say the DAR and DER-n policies showed strong performance compared with the Uniform policy. In a simulation, LAX-RAY achieved 87.3% accuracy, which translated to about 80% accuracy when applied to the real-world robot. In future work, the researchers plan to investigate more sophisticated depth models and the use of pushes parallel to the camera to create space for lateral pushes. They also hope to design pull actions using pneumatically activated suction cups to lift and remove occluding objects from crowded shelves.
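The paper's exact policies aren't reproduced here, but a toy sketch can make the "distribution area reduction" idea concrete: maintain a belief over where the hidden target sits, then greedily act on the occluding object that hides the most probability mass. The 1D shelf grid, the `choose_push` function, and all values below are invented for illustration, not taken from the LAX-RAY code.

```python
import numpy as np

# Toy illustration (not the authors' code) of distribution area reduction:
# `belief` holds the probability that the occluded target sits in each shelf
# bin, and each occluder hides a contiguous span of bins. Greedily push the
# object whose removal would reveal the most probability mass.
def choose_push(belief, occluders):
    masses = [belief[lo:hi].sum() for lo, hi in occluders]  # hidden mass per object
    return int(np.argmax(masses))

belief = np.full(10, 0.1)          # uniform prior over a 10-bin shelf
occluders = [(0, 3), (5, 9)]       # two objects hiding bins [0,3) and [5,9)
print(choose_push(belief, occluders))  # -> 1: that occluder hides 0.4 vs 0.3
```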
In the Google work, which had contributions from researchers at Alphabet’s X, Stanford, and UC Berkeley, the coauthors designed a deep reinforcement learning method that takes multimodal data and uses a “deep representative structure” to capture contact-rich dynamics. COCOI taps video footage and readings from a robot-mounted touch sensor to encode dynamics information into a representation. This allows a reinforcement learning algorithm to plan with “dynamics-awareness” that improves its robustness in difficult environments.
The researchers benchmarked COCOI by having both a simulated and real-world robot push objects to target locations while avoiding knocking them over. This isn’t as easy as it sounds; key information couldn’t be easily extracted from third-angle perspectives, and the task dynamics properties weren’t directly observable from raw sensor information. Moreover, the policy needed to be effective for objects with different appearances, shapes, masses, and friction properties.
The researchers say COCOI outperformed a baseline “in a wide range of settings” and dynamics properties. Eventually, they intend to extend their approach to pushing non-rigid objects, such as pieces of cloth.
"
|
3,987 | 2,020 |
"Logmore Launches First Logistics Monitor Designed for COVID-19 Vaccine Delivery | VentureBeat"
|
"https://venturebeat.com/business/logmore-launches-first-logistics-monitor-designed-for-covid-19-vaccine-delivery"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release Logmore Launches First Logistics Monitor Designed for COVID-19 Vaccine Delivery Share on Facebook Share on X Share on LinkedIn HELSINKI–(BUSINESS WIRE)–November 25, 2020– Supply chain analytics solution provider Logmore has announced the launch of its latest product.
Logmore Dry Ice , a high-spec version of Logmore’s original data recorder, has been adapted for cold and sub-zero conditions. The solution is designed for sensitive shipments, such as the COVID-19 vaccine the moment it becomes available.
This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20201125005841/en/ (Photo: COVID-19 Logistics Monitor by Logmore)
Since its founding in 2017, Logmore has disrupted logistics monitoring with its use of loggers that sync with the cloud using dynamic QR codes, allowing anyone with a smartphone to update secure systems with the latest conditions and locations of shipments.
Logmore Dry Ice redefines monitoring of cold chain shipments from standard fridge temperatures (2-8°C) all the way down to deep freeze levels of -100°C. This makes it ideal for transporting the COVID-19 vaccines, which require deep freeze storage as low as -80°C throughout their journey. The device’s ability to monitor temperature, humidity, shocks, tilt, and light in real time will be critical in ensuring that life-saving vaccines can be distributed in optimum condition.
Logmore’s technology allows shipments to be monitored from any smartphone, providing a cost-effective and precise global logistics solution. It enables shipping companies to offer full transparency to all stakeholders, differentiating their best-in-class COVID-19 vaccine logistics with the quality assurances that pharmaceutical buyers and sellers require.
The device has an additional sensor connected to it with a cable. By placing only the external sensor inside the box, the logger can monitor internal temperatures, despite remaining outside the box. In this manner, Logmore Dry Ice can handle far lower temperatures than comparable products, making it ideal for cold boxes used to dispatch pharmaceutical items.
Other dry ice-compatible loggers generally need to be placed inside cold boxes and can only export data once defrosted upon arrival. What’s more, they sync with other devices using USB connections, which can represent cyber threats. Using QR scanning, shipment recipients can verify that all requirements have been met in transit, without investing in any specialized hardware or software.
In this manner, Logmore Dry Ice ensures that vaccines remain within safe parameters throughout their journeys by monitoring their temperatures, reliably, securely and affordably.
Comparing the total cost of ownership of a standard USB dry ice logger system with Logmore’s solution, opting for Logmore Dry Ice represents savings of approximately €45,000 after only 5,000 monitored cold boxes are shipped (roughly €9 per box).
Using Logmore, the pharmaceutical industry and public health services can safely yet affordably deliver vaccines to where they’re needed most. When the coronavirus vaccine is ready for distribution, it is anticipated that there will be unprecedented demand on the global logistics industry. The vaccine will be required for 7 billion people, each of whom may require multiple doses, straining the supply chain network, which is not yet equipped for reliable deep freeze shipping.
To compound the challenge, industry experts estimate that today, some 25% of vaccines reach their destinations in degraded states because of shipping issues, while 20% of all temperature-sensitive products are damaged during transport due to cold chain interruptions. Due to these hazards, careful shipment and monitoring of the coronavirus vaccine will be vital.
“The COVID-19 vaccine will be transported all over the world – in planes, in trucks, through different warehouses, in tropical temperatures,” said Niko Polvinen, COO of Logmore. “We want to make sure every vaccine is safe to use and protects the person receiving it.”
“The vaccine is just around the corner,” Polvinen continued. “From a logistics perspective, this cargo is highly susceptible to temperature variations. Any big deviations will ruin the product. Logmore Dry Ice is the only QR data logger that withstands the extreme temperatures and rigorous requirements of the COVID-19 vaccine.”
Logmore Dry Ice:
- Requires no app installation or new infrastructure
- Syncs data using everyday smartphones
- Provides automated reporting to verified stakeholders
- Aggregates thousands of data points into a single cloud interface
- Tracks an array of configurable conditions
- Is expressly designed for vaccine shipments
- Is cost-effective and highly scalable
Logmore Dry Ice Logger is a game-changer for pharma distribution, ensuring that vaccines stay within safe parameters, maximizing their efficacy.
About Logmore Logmore’s clear, fast and secure data logging service places big data and IoT in the hands of business owners the world over. A first-of-its-kind product, Logmore’s QR data logger supports end-to-end logistics monitoring. One click activates the data logger, and a scan of the QR code wirelessly uploads data. Logmore Dry Ice Logger, the company’s latest product, is designed for the pharmaceutical industry and is capable of operating at extremely low temperatures.
View source version on businesswire.com: https://www.businesswire.com/news/home/20201125005841/en/
Press Contact: Dan Edelstein, [email protected]
"
|
3,988 | 2,020 |
"Lexon's Oblio Named as One of TIME's 100 Best Inventions Of 2020 | VentureBeat"
|
"https://venturebeat.com/business/lexons-oblio-named-as-one-of-times-100-best-inventions-of-2020"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release Lexon’s Oblio Named as One of TIME’s 100 Best Inventions Of 2020 Share on Facebook Share on X Share on LinkedIn The best-selling 2-in-1 charging solution with UV-C sanitizer receives recognition for making the world better PARIS–(BUSINESS WIRE)–November 25, 2020– 20 years after making the cover of TIME with their flagship Tykho radio, French design brand Lexon is back on the prestigious publication to reiterate its mission to continuously create disruptive, useful and affordable design objects that improve our daily lives.
This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20201125005877/en/ (Photo: Oblio, Business Wire)
More than just highlighting the groundbreaking French invention, the TIME award confirms Oblio’s position as a must-have innovation for today’s world. Oblio was crafted in-house primarily as a sleek UV-C sanitizer to prevent the spread of harmful viruses and bacteria found on our smartphones, using built-in UV-C LED technology located on its front interior. UV-C LEDs destroy the DNA of microorganisms found in viruses, bacteria, mold, and germs.
Capable of fully sanitizing a single surface at a time, Oblio can deliver a 360° disinfection by simply flipping the phone to expose its second surface for a 20-minute cycle. In terms of effectiveness, Oblio has been proven through lab testing to kill 99.9% of viruses, including H1N1.
“Since the beginning of the pandemic, we have seen growing interest in this product category (UV sanitizers) and, as a result, the market rapidly filling with solutions that are neither aesthetic nor truly legitimate. In this context, we’re extremely honored and proud to be recognized for our product’s distinct design, its effectiveness and reliability in sanitizing our daily essentials, and the opportunity to become a must-have solution for today’s homes and offices.
With the help of media like TIME raising awareness around Oblio, and with the support and trust of the retail partners who list it, we are together making UV-C technology more popular and accessible to everyone, allowing us to collectively participate in a positive change: leveraging innovation to adapt to new behaviors and prevent the spread of the virus, which is our common responsibility,” says Boris Brault, Lexon CEO.
Also acting as a 10W wireless charger, Oblio can fully charge a smartphone in 3 hours and comes with an LED indicator that confirms the correct positioning and charging status of the mobile device.
Appropriately, ‘Oblio’ is rooted in the Italian word for “forgetting,” Italian being the native language of designers Manuela Simonelli & Andrea Quaglio. The name hints at the product’s vase shape, thoughtfully crafted to help people disconnect from their screens and enjoy the freedom to be more present for each other, while it discreetly sanitizes any mobile phone and fast-charges all Qi-enabled smartphones, such as the latest iPhone and Android devices.
“With such achievements, we’re also proving that France remains an indisputable territory where creativity, innovation and design are prerequisite to sustainable growth, and we hope we can inspire entrepreneurs to believe in their projects and bring them to life.” adds Brault.
Today, Oblio is already available at the most prominent retailers worldwide such as Best Buy, Nordstrom, MoMA Design Store in the US, Fnac Darty in Europe and many more, as well as the official online store: lexon-design.com.
MSRP of €79.90/$79.90.
Available in 4 colors.
Compatible with all mobile phones for the sanitizing function, and with all Qi-enabled smartphones, such as the latest iPhone & Android devices, for the wireless charging function.
About Lexon:
Since its creation in 1991, Lexon has relentlessly pushed the limits and created a difference in the world of design while remaining true to its commitment to make small objects useful, beautiful, innovative and affordable. Whether in electronics, audio, travel accessories, office or leisure, Lexon has established a special relationship with creativity and partnered with the best designers around the world to create timeless collections of lifestyle products. Following its recent acquisition by BOW Group, a global player in the lifestyle and wearable consumer markets, Lexon is writing a new chapter in its history, experiencing staggering international growth and digital expansion. Today, with nearly 30 years of existence, more than 200 awards, collaborations with some of the most renowned designers, and a retail presence in more than 90 countries across the globe, Lexon has truly established itself as a world-renowned French design brand.
View source version on businesswire.com: https://www.businesswire.com/news/home/20201125005877/en/
International Media contact: Annabel Corlay, [email protected]
"
|
3,989 | 2,020 |
"How Ubiq Security uses APIs to simplify data protection | VentureBeat"
|
"https://venturebeat.com/business/how-ubiq-security-uses-apis-to-simplify-data-protection"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How Ubiq Security uses APIs to simplify data protection Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
As cyberthreats continue to multiply, startups with tools to protect data are in high demand. But companies are now facing the growing complexity of managing security across their various data sources.
San Diego-based Ubiq Security believes APIs could play a key role in simplifying this task. The company hopes to encourage more developers and enterprises to build security directly into applications rather than looking for other services to plug the holes.
“How do you take the messy and complicated world of encryption and distill it down to a consumable, bite-sized chunk?” Ubiq CEO Wias Issa asked. “We built an entirely API-based platform that enables any developer of any skill set to be able to integrate encryption directly into an application without having any prior cryptography experience.” Issa is a security veteran and said companies have generally been focused on security for their data storage systems. When they start layering applications on top, many developers find they haven’t built security into those products. In addition, the underlying storage is becoming a thicket of legacy and cloud-based solutions.
“You could have an Oracle database, an SQL Server, AWS storage, and then a Snowflake data warehouse,” Issa said. “You’ve got to go buy five or six different tools to do encryption on each one of those because they’re all structured differently.” Even when encryption is included in the application, it can be poorly designed. Issa said cryptographic errors have typically been among the top three vulnerabilities in software applications over the past decade.
“When you’re a developer in 2020, you’re expected to know multiple languages, do front end, back end, full-stack development,” Issa said. “And on top of that, someone comes along and says, ‘Hey, can you do cryptography?’ And so the developer thinks, ‘How do I just get past this so I can go back to building a fantastic product and focusing on my day job?’ So key management is an area where developers either don’t understand it or don’t want to deal with it because it’s so complicated and so burdensome and, frankly, it’s very expensive to do.” To cut through those challenges, Ubiq’s API-based developer platform lets developers simply include three lines of code that make two API calls. By handling encryption at the application layer with an API, the security works across all underlying storage systems as well.
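The article doesn't show Ubiq's actual SDK, so the snippet below is only a minimal sketch of the pattern being described, built on the open source `cryptography` package: encrypt and decrypt at the application layer, so that every storage backend only ever handles ciphertext.

```python
# Minimal sketch of application-layer encryption (not Ubiq's SDK): the app
# encrypts before writing and decrypts after reading, so the storage layer,
# whether a database, object store, or warehouse, only ever sees ciphertext.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, keys would come from a managed key service
codec = Fernet(key)

storage = {}                  # stand-in for any backend
storage["record-1"] = codec.encrypt(b"card=4111-1111-1111-1111")          # write path
assert codec.decrypt(storage["record-1"]) == b"card=4111-1111-1111-1111"  # read path
```

Because the encryption happens before the data ever reaches storage, the same two calls work unchanged regardless of which backend sits underneath, which is the uniformity the article describes.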
“The application will handle all the encryption and decryption and simply hand the data in an encrypted state to the storage layer,” Issa said. “That allows them to not only have a better security posture but improve their threat model and reduce the overall time it takes to roll out an encryption plan.” Customers can then use a dashboard to monitor their encryption and adjust policies without having to update code or even know the developer jargon. This, in turn, simplifies the management of encryption keys.
Lessons from the government
Among its more notable customers, Ubiq announced this year that it had signed deals with the United States Army and the U.S. Department of Homeland Security. While government buyers have their particular issues, in this case the military and civilian systems faced many of the same obstacles large enterprises encounter.
“The government is struggling with digital transformation,” Issa said. “They’re stuck on all these legacy systems, and they’re not able to innovate as fast as the adversaries. So you’re seeing the likes of Iran and Syria and China and Russia and other Eastern Bloc countries start to build these offensive cyber capabilities. All you need is an internet connection, a bunch of skilled, dedicated resources, and now an entire country’s military cyber capability can rapidly grow. We don’t want that to outpace the United States.” Part of the obstacle here is systems that run across tangled legacy and cloud infrastructure and mix structured and unstructured data and a wide range of coding languages. While there have been big gains in terms of protecting the underlying storage, Issa said attackers have increasingly focused on vulnerabilities in the applications.
“Encryption is something that everybody knows they need to do, but applying it without tripping over yourself is hard to do,” Issa said. “They turned to us because they’ve got all these disparate data types and they have all these unique types of storage. The problem is how to apply a uniform encryption strategy across all those diverse datasets.” Issa said the emergence of the API economy has made such solutions far more accepted among big enterprises. They see APIs in general as a faster, more efficient way to build in functionality. Issa said applying that philosophy to security seemed like a natural evolution that not only eases the task but improves overall security.
“One of the other traditional challenges with encryption is when you deploy it somewhere and it breaks something,” he said. “And then you can’t deploy it in some sectors because the system is old. So you just apply it in two areas and then realize you’ve only applied encryption to 30% of your infrastructure. We enable a much more uniform approach.” Ubiq got a boost earlier this month with a $6.4 million seed round. Okapi Venture Capital led the round, which included investment from TenOneTen Ventures, Cove Fund, DLA Piper Venture, Volta Global, and Alexandria Venture Investments. Ubiq plans to use the money for product development, building relationships with developers, and marketing.
“Our core focus is going to be on growing the platform, getting customer input, and making sure that we’re making the changes that our customers are asking for so we can run a very resilient, useful platform,” he said.
"
|
3,990 | 2,020 |
"How to productionalize your AI initiatives -- for success (VB Live) | VentureBeat"
|
"https://venturebeat.com/business/how-to-productionalize-your-ai-initiatives-for-success-vb-live"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Live How to productionalize your AI initiatives — for success (VB Live) Share on Facebook Share on X Share on LinkedIn Presented by Dataiku For developing an AI pipeline, the most pressing consideration is which of the three primary operating models will work best for you. Join this VB Live event for a deep dive into the details of each model and leave with a firm grasp on best practices and your next steps.
Register here for free.
The issue organizations face when launching an AI initiative isn’t the technology, says Beaumont Vance, head of AI, advanced analytics and emerging technology DevOps at TD Ameritrade. It’s the 99 percent perspiration that follows the one percent inspiration.
“The challenge is that it seems like most of the energy, and honestly most of the fun part of the job, is in the creation, doing the proof of concept, getting the first thing to work,” Vance says. “But the part that doesn’t get a lot of attention is building things out in production in a corporate environment that are legally and regulatorily compliant and sound and meet the five-nines standards of a system. But that’s the thing that has to be solved. If you don’t, what you end up with is a lot of launched betas.” There’s a whole architecture of things that need to happen in order to bring a fully realized product into production: everything from the 10,000 hours of training needed to get an approved, secure server hosted, to the production cycle, quality assurance testing, and developing a 24/7 support system, not to mention funding and oversight every step of the way.
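For a concrete sense of the "five-nines" bar Vance mentions, 99.999% availability leaves a system only about five minutes of allowed downtime per year:

```python
# Allowed downtime per year at "five nines" (99.999%) availability.
minutes_per_year = 365.25 * 24 * 60
print(round(minutes_per_year * (1 - 0.99999), 2))  # -> 5.26 minutes
```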
Establishing solid production processes as early on as possible is the only way to emerge from the rest of the pack, Vance explains, and it can be tremendously difficult. In his 1998 book, Clayton Christensen described companies as a bunch of stable processes that are designed to resist perturbations, which is why they can withstand market turmoil and change. However, those processes also serve to create the innovator’s dilemma, which is to say they often prevent groundbreaking inventions from ever seeing light.
“I like to joke that if we invented a coffee machine that produced liquid gold, it would be very difficult to figure out how to put that into production in any company, no matter how obviously valuable it was,” he says. “The system would say, well, that may be very valuable and it’s certainly worth a lot of money, but we currently have no process in place to produce that.” Many companies have the lab where the proof of concept takes shape, but what’s missing is the production piece, the DevOps piece. You need to change the operational processes to get things from ideas and proof of concepts to an actual production environment to produce value and be part of your ecosystem.
At TD Ameritrade, they learned those lessons when they migrated to a web presence, and learned them again when they migrated to a mobile presence, Vance says. Way back in the day, the mobile team in just about any company was three folks working together in a room. Now every company has a whole infrastructure built around their mobile initiatives. Companies have entire mobile departments. There’s a mobile team in the retail business, and a mobile team in the institutional business, and a mobile team in IT, and a mobile team in legal, because most organizations have learned over time that if you want a mature application in the enterprise, you must implement a framework, built over time.
“For some reason we seem to have forgotten that’s true of every emerging technology — with AI we seem to be doomed to learn it all over again,” he says. “The lesson is, learn from the lessons of the past that we’ve learned with technology. Build that structure, because otherwise you’re not going to have viable products.” AI is a new enough emerging technology for most companies that they’re at the stage where formal structures are still being defined to plan AI projects and make them successful, similar to the processes we went through with internet and mobile, says Vance. Along with that, the positions in these structures will be defined and settled upon and formalized.
“At some point people will totally forget that it was ever a question whatsoever and think that it was totally self-evident and obvious — we won’t understand why everybody didn’t think of these strategies 10 years ago,” he says.
For an in-depth look at how companies are putting their best AI solutions into production and achieving real ROI, a dive into TD Ameritrade’s DevOps pipeline, best practices for companies of every size, and more, don’t miss this VB Live event.
Don’t miss out! Register here for free.
You’ll hear:
- A detailed look at each of the three primary operating models for AI initiatives
- The pros and cons of each operating model for a variety of business uses
- Case studies from companies that have implemented each type of model
- And more
Speakers:
- Beaumont Vance, Head of AI, Advanced Analytics and Emerging Technology DevOps, TD Ameritrade
- Jennifer Roubaud-Smith, VP, Global Head of Strategic Advisory, Dataiku
- Kyle Wiggers, AI Staff Writer, VentureBeat (moderator)
"
|
3,991 | 2,020 |
"Gig economy workers could receive equity under SEC proposal | VentureBeat"
|
"https://venturebeat.com/business/gig-economy-workers-could-receive-equity-under-sec-proposal"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Gig economy workers could receive equity under SEC proposal Share on Facebook Share on X Share on LinkedIn Driver Jesus Jacobo takes part in a statewide day of action to demand that ride-hailing companies Uber and Lyft follow California law and grant drivers "basic employee rights.'' Los Angeles, California, August 20, 2020.
Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
( Reuters ) — The U.S. securities regulator on Tuesday proposed a pilot program to allow tech companies like Uber and Lyft to pay gig workers up to 15% of their annual compensation in equity rather than cash, a move it said was designed to reflect changes in the workforce.
The Securities and Exchange Commission (SEC) said internet-based companies may have the same incentives to offer equity compensation to gig workers as they do to employees. Until now, though, SEC rules have not allowed companies to pay gig workers in equity.
The proposal would not require an increase in pay, just create flexibility on whether to pay using cash or equity. It comes amid a fierce debate over the fast-growing gig economy, which labor activists complain exploits workers, depriving them of job security and traditional benefits like health care and paid vacations. The SEC’s Democratic commissioners said giving tech giants such flexibility would create an uneven playing field for other types of companies.
“Work relationships have evolved along with technology, and workers who participate in the gig economy have become increasingly important to the continued growth of the broader U.S. economy,” SEC chair Jay Clayton said in a statement.
The proposed temporary rules would allow gig workers to participate in the growth of the companies their efforts support, he added, capped at 15% of annual compensation or $75,000 in three years.
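As a toy illustration of how those two limits might interact, under one possible reading of the proposal (the `max_equity` function and all figures below are invented for illustration, not taken from the SEC's text):

```python
# Hypothetical sketch: equity a gig worker could receive in a year, capped at
# 15% of annual compensation and at $75,000 across a rolling three-year window.
def max_equity(annual_comp, equity_paid_last_3y):
    per_year_cap = 0.15 * annual_comp
    three_year_room = 75_000 - equity_paid_last_3y
    return max(0.0, min(per_year_cap, three_year_room))

print(max_equity(40_000, 0))        # -> 6000.0 (the 15% cap binds)
print(max_equity(600_000, 70_000))  # -> 5000.0 (the three-year cap binds)
```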
Democratic SEC commissioners Allison Lee and Caroline Crenshaw opposed the move, saying alternative work arrangements, including independent contractors and freelancers, have existed for decades across a range of industries and it was not clear why tech companies should be singled out for special treatment.
“Whatever the potential merits of equity compensation for alternative workers, the proposal does not establish a basis for selectively conferring a benefit on this particular business model,” they wrote in a statement.
( Reporting by Michelle Price. Editing by David Gregorio. )
"
|
3,992 | 2,020 |
"DeepMind's improved protein-folding prediction AI could accelerate drug discovery | VentureBeat"
|
"https://venturebeat.com/business/deepmind-claims-its-ai-can-predict-how-proteins-will-fold-with-state-of-the-art-accuracy"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages DeepMind’s improved protein-folding prediction AI could accelerate drug discovery Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The recipes for proteins — large molecules consisting of amino acids that are the fundamental building blocks of tissues, muscles, hair, enzymes, antibodies, and other essential parts of living organisms — are encoded in DNA. It’s these genetic definitions that circumscribe their three-dimensional structures, which in turn determine their capabilities. But protein “folding,” as it’s called, is notoriously difficult to figure out from a corresponding genetic sequence alone. DNA contains only information about chains of amino acid residues and not those chains’ final form.
In December 2018, DeepMind attempted to tackle the challenge of protein folding with a machine learning system called AlphaFold. The product of two years of work, the Alphabet subsidiary said at the time that AlphaFold could predict structures more precisely than prior solutions. Lending credence to this claim, the system beat 98 competitors in the Critical Assessment of Structure Prediction (CASP) protein-folding competition in Cancun, where it successfully predicted the structure of 25 out of 43 proteins.
DeepMind now asserts that AlphaFold has outgunned competing protein-folding-predicting methods for a second time. In the results from the 14th CASP assessment, a newer version of AlphaFold — AlphaFold 2 — has average error comparable to the width of an atom (or 0.1 of a nanometer), competitive with the results from experimental methods.
“We have been stuck on this one problem — how do proteins fold up — for nearly 50 years,” University of Maryland professor John Moult, cofounder and chair of CASP, told reporters during a briefing last week. “To see DeepMind produce a solution for this, having worked personally on this problem for so long and after so many stops and starts, wondering if we’d ever get there, is a very special moment.”
Protein folding
Solutions to many of the world’s challenges, like developing treatments for diseases, can ultimately be traced back to proteins. Antibody proteins are shaped like a “Y,” for example, enabling them to latch onto viruses and bacteria, and collagen proteins are shaped like cords, which transmit tension between cartilage, bones, skin, and ligaments. In SARS-CoV-2, the novel coronavirus, a spike-like protein changes shape to interact with another protein on the surface of human cells, allowing it to force entry.
It was biochemist Christian Anfinsen who hypothesized in 1972 that a protein’s amino acid sequence could determine its structure. This laid the groundwork for attempts to predict a protein’s structure based on its amino acid sequence as an alternative to expensive, time-consuming experimental methods like nuclear magnetic resonance, X-ray crystallography, and cryo-electron microscopy. Complicating matters, however, is the raw complexity of protein folding. Scientists estimate that because of the incalculable number of interactions between the amino acids, it would take longer than 13.8 billion years to figure out all the possible configurations of a typical protein before identifying the right structure.
Above: AlphaFold’s architecture in schematic form.
DeepMind says its approach with AlphaFold draws inspiration from the fields of biology, physics, and machine learning, as well as the work of scientists over the past half-century. Taking advantage of the fact that a folded protein can be thought of as a “spatial graph,” where amino acid residues (amino acids contained within a peptide or protein) are nodes and edges connect the residues in close proximity, AlphaFold leverages an AI algorithm that attempts to interpret the structure of this graph while reasoning over the implicit graph that it’s building using evolutionarily related sequences, multiple sequence alignment, and a representation of amino acid residue pairs.
By iterating through this process, AlphaFold can learn to predict the underlying structure of a protein and determine its shape within days, according to DeepMind. Moreover, the system can self-assess which parts of each protein structure are reliable using an internal confidence measure.
DeepMind says that the newest release of AlphaFold, which will be detailed in a forthcoming paper, was trained on roughly 170,000 protein structures from the Protein Data Bank, an open source database for structural data of large biological molecules. The company tapped 128 of Google’s third-generation tensor processing units (TPUs), special-purpose AI accelerator chips available through Google Cloud, for compute resources roughly equivalent to 100 to 200 graphics cards. Training took a few weeks. For the sake of comparison, it took DeepMind 44 days to train a single agent within its StarCraft 2-playing AlphaStar system using 32 third-gen TPUs.
DeepMind declined to reveal the cost of training AlphaFold. But Google charges Google Cloud customers $32 per hour per third-generation TPU, which, for 128 chips running around the clock, works out to about $688,128 per week.
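That weekly figure follows directly from the article's own numbers, assuming all 128 chips run continuously at the quoted list price:

```python
# Back-of-envelope check: 128 third-gen TPUs at $32/hour, running 24/7.
print(128 * 32 * 24 * 7)  # -> 688128 dollars per week
```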
Measuring progress
In 1994, Moult and University of California, Davis professor Krzysztof Fidelis founded CASP as a biennial blind assessment to catalyze research, monitor progress, and establish the state of the art in protein structure prediction. It’s considered the gold standard for benchmarking predictive techniques, because CASP chooses structures that have only recently been experimentally selected as targets for teams to test their prediction methods against. Some were still awaiting validation at the time of AlphaFold’s assessment.
Because the target structures aren’t published in advance, CASP participants must blindly predict the structure of each of the proteins. These predictions are then compared to the ground-truth experimental data when this data become available.
The primary metric used by CASP to measure the accuracy of predictions is the global distance test, which ranges from 0 to 100. It’s essentially the percentage of amino acid residues within a certain threshold distance from the correct position. A score of around 90 is informally considered to be competitive with results obtained from experimental methods; AlphaFold achieved a median score of 92.4 overall and a median score of 87 for proteins in the free-modeling category (i.e., those without templates).
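As a rough illustration of the metric, here is a simplified GDT_TS-style score, not CASP's exact evaluation pipeline, and it assumes the predicted and experimental structures are already superimposed:

```python
import numpy as np

# Simplified GDT_TS-style score: the average, over four distance cutoffs (in
# angstroms), of the fraction of residues predicted within that cutoff of the
# experimentally determined position. Assumes pre-aligned coordinates.
def gdt_ts(pred, truth, cutoffs=(1.0, 2.0, 4.0, 8.0)):
    dists = np.linalg.norm(pred - truth, axis=1)         # per-residue error
    return 100 * np.mean([(dists <= c).mean() for c in cutoffs])

pred = np.zeros((5, 3))                                   # toy 5-residue "protein"
truth = np.array([[0.5, 0, 0], [1.5, 0, 0], [3, 0, 0], [6, 0, 0], [12, 0, 0]])
print(gdt_ts(pred, truth))  # -> 50.0
```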
Above: The results of the CASP14 competition.
“What we saw in CASP14 was a group delivering atomic accuracy off the bat,” Moult said. “This [progress] gives you such excitement about the way science works — about how you can never see exactly, or even approximately, what’s going to happen next. There are always these surprises. And that really as a scientist is what keeps you going. What’s going to be the next surprise?”
Real-world applications
DeepMind makes the case that AlphaFold, if further refined, could be applied to previously intractable problems in the field of protein folding, including those related to epidemiological efforts. Earlier this year, the company predicted several protein structures of SARS-CoV-2, including ORF3a, whose makeup was formerly a mystery. At CASP14, DeepMind predicted the structure of another coronavirus protein, ORF8, which has since been confirmed by experimentalists.
Beyond pandemic response, DeepMind expects that AlphaFold will be used to explore the hundreds of millions of proteins for which science currently lacks models. Since DNA specifies the amino acid sequences that comprise protein structures, advances in genomics have made it possible to read protein sequences from the natural world, with 180 million protein sequences and counting in the publicly available Universal Protein database. In contrast, given the experimental work needed to translate from sequence to structure, only around 170,000 protein structures are in the Protein Data Bank.
DeepMind says it’s committed to making AlphaFold available “at scale” and collaborating with partners to explore new frontiers, like how multiple proteins form complexes and interact with DNA, RNA, and small molecules. Improving the scientific community’s understanding of protein folding could lead to more effective diagnoses and treatment of diseases such as Parkinson’s and Alzheimer’s, as these are believed to be caused by misfolded proteins. And it could aid in protein design, leading to protein-secreting bacteria that make wastewater biodegradable, for instance, and enzymes that can help manage pollutants such as plastic and oil.
Above: A ground-truth folded protein compared with AlphaFold 2’s prediction.
In any case, it’s a milestone for DeepMind, whose work has principally focused on the games domain. Its AlphaStar system bested professional players at StarCraft 2, following wins by AlphaZero at Go, chess, and shogi. While some of DeepMind’s work has found real-world application, chiefly in datacenters, Waymo’s self-driving cars, and the Google Play Store’s recommendation algorithms, DeepMind has yet to achieve a significant AI breakthrough in a scientific area such as protein folding or glass dynamics modeling.
These new results might mark a shift in the company’s fortunes.
“AlphaFold represents a huge leap forward that I hope will really accelerate drug discovery and help us to better understand disease. It’s pretty mind blowing,” DeepMind CEO Demis Hassabis said during the briefing last week. “We advanced the state of the art in the field, so that’s fantastic, but there’s still a long way to go before we’ve solved it.”
"
|
3,993 | 2,020 |
"China drives 5G demand for Sweden's Ericsson | VentureBeat"
|
"https://venturebeat.com/business/china-drives-5g-demand-for-swedens-ericsson"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages China drives 5G demand for Sweden’s Ericsson Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
(Reuters) — Sweden’s Ericsson on Monday raised its global forecast for 5G mobile subscriptions to 220 million by the end of this year, citing faster-than-expected uptake in China.
The telecoms equipment maker, which had previously forecast 190 million subscriptions, said it expects China to account for almost 80% of the newly forecast total.
“What has fueled the growth is China, and that is driven in itself by a strong strategic national focus on 5G in China,” head of networks Fredrik Jejdling told Reuters.
Ericsson said in its biannual Mobility Report that 2020 had seen society take a “big leap toward digitalization,” as the pandemic acted as a catalyst for rapid change and highlighted the impact connectivity has on people’s daily lives.
About 15% of the world’s population, or 1 billion people, are expected to be in an area that would be covered by 5G by the end of the year, Jejdling said.
The company forecast 3.5 billion 5G subscriptions by the end of 2026, accounting for more than 50% of mobile data traffic, with four out of every 10 mobile subscriptions being 5G.
Ericsson, which competes with China’s Huawei and Finland’s Nokia, added that 60% of the world’s population will have access to 5G coverage in 2026.
Ericsson has won contracts from all three major operators in China to supply radio equipment for 5G networks.
The mobile network industry has faced waning demand for 4G and older network equipment, but 5G spending in North America has helped fuel a return to growth.
The new generation of mobile phone technology will bring faster data speeds and support a greater variety of connected devices.
(Reporting by Helena Soderpalm and Supantha Mukherjee; editing by Jan Harvey.)
"
|
3,994 | 2,020 |
"Birdeye Named to Deloitte's 2020 Technology Fast 500™ | VentureBeat"
|
"https://venturebeat.com/business/birdeye-named-to-deloittes-2020-technology-fast-500"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release Birdeye Named to Deloitte’s 2020 Technology Fast 500™ Share on Facebook Share on X Share on LinkedIn List of Fastest-Growing Companies Favors Customer Experience Leader PALO ALTO, Calif.–(BUSINESS WIRE)–November 25, 2020– Birdeye, a leading provider of innovative Customer Experience solutions, today announced that it was ranked on Deloitte’s Technology Fast 500™, a ranking of the 500 fastest-growing companies in North America, now in its 26th year. Birdeye grew 329% during this period.
“We’re honored to be recognized by Deloitte as one of the fastest growing companies in North America,” said Naveen Gupta, co-founder and CEO of Birdeye. “We started Birdeye to help businesses grow faster and better through Customer Experience. It feels great to have the hard work of the team recognized for the value it’s bringing to our customers.” This recognition by Deloitte adds to Birdeye’s growing list of accolades, which includes being named #1 Customer Experience software on G2, winning a MarTech Breakthrough “Best Overall Conversational Marketing Company” for the second time in a row, and two 2020 Globee awards, including Gold for company of the year and Bronze for Birdeye Interactions.
The awards follow on product breakthroughs supporting both Marketing and Success, including recent launches of Interactions and Referrals. Market response continues to be strong with significant recent customer wins in the Fortune 1000 and across industries.
“For more than 25 years, we’ve been honoring companies that define the cutting edge and this year’s Technology Fast 500 list is proof positive that technology – from software and digital media platforms, to biotech – truly does permeate so many facets of our lives,” said Paul Silverglate , vice chairman, Deloitte LLP and U.S. technology sector leader. “We congratulate this year’s winners, especially during a time when innovation is needed more than ever to address the monumental challenges posed by the pandemic.” About Deloitte’s 2020 Technology Fast 500™ Now in its 26th year, Deloitte’s Technology Fast 500 provides a ranking of the fastest-growing technology, media, telecommunications, life sciences and energy tech companies – both public and private – in North America. Technology Fast 500 award winners are selected based on percentage fiscal year revenue growth from 2016 to 2019.
In order to be eligible for Technology Fast 500 recognition, companies must own proprietary intellectual property or technology that is sold to customers in products that contribute to a majority of the company’s operating revenues. Companies must have base-year operating revenues of at least US$50,000, and current-year operating revenues of at least US$5 million. Additionally, companies must be in business for a minimum of four years and be headquartered within North America.
About Birdeye Birdeye is the all-in-one customer experience platform that provides businesses with the tools to deliver great experiences at every step of the customer journey.
More than 60,000 businesses of all sizes use Birdeye every day to be found online and chosen through listings and reviews, be connected with existing customers using text messaging, and deliver the best end-to-end customer experience with survey, ticketing and insights tools.
Learn more at birdeye.com.
View source version on businesswire.com: https://www.businesswire.com/news/home/20201125005422/en/

Media Contact: Travis Bickham, [email protected]
"
|
3,995 | 2,020 |
"Aurora Solar raises $50 million to streamline solar installation with predictive algorithms | VentureBeat"
|
"https://venturebeat.com/business/aurora-solar-raises-50-million-to-streamline-solar-installation-with-predictive-algorithms"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Aurora Solar raises $50 million to streamline solar installation with predictive algorithms Share on Facebook Share on X Share on LinkedIn Solar Panels Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
San Francisco-based Aurora Solar, which taps a combination of lidar sensor data, computer-aided design (CAD), and computer vision to streamline solar panel installations, today announced a $50 million raise. The company says it will leverage the funds to accelerate hiring across all teams and ramp up development of new features and services for solar installers and solar sales consultants.
Despite recent setbacks, solar remains a bright spot in the still-emerging renewable energy sector. In the U.S., the solar market is projected to top $22.9 billion by 2025, driven by falling materials costs and growing interest in offsite and rooftop installations. Moreover, in China — the world’s leading installer of solar panels and the largest producer of photovoltaic power — 1.84% of the total electricity generated in the country two years ago came from solar.
Above: Modeling solar panel installations with Aurora Solar’s software.
Stanford University graduates Samuel Adeyemo and Christopher Hopper teamed up to cofound Aurora in 2013 after a frustrating experience commissioning a solar project for a school in East Africa. While the panels themselves only took weeks to install, planning — conducting research, calculating financials, and designing the system — dragged on for six months.
So the pair devised SmartRoof, which allows solar installers to create 3D CAD models of construction sites and forecast not only how many panels will fit on the properties, but the amount of power they will produce and the potential energy savings. It’s a bit like Google’s Project Sunroof , a geographic search engine anyone can use to discover the potential for solar energy collection in their home, albeit more sophisticated.
SmartRoof launched in 2015 following two years of development and validation tests with the National Renewable Energy Laboratory and the U.S. Department of Energy, according to Hopper. The process begins with modeling, using Aurora’s CAD software. Designers trace roof outlines over satellite images augmented with lidar data and use built-in edge detection tools to ensure they’re up to spec. From there, designers are able to simulate obstructions (like trees) impacting the panels’ sunlight exposure or sun paths and use that data to extrapolate power production at various times of the year.
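Aurora's actual models are proprietary, but the shape of such an estimate can be sketched. The toy Python function below is our own illustration, not Aurora's algorithm, and every input value is an assumption:

def estimate_annual_kwh(panel_area_m2: float, panel_efficiency: float,
                        annual_irradiance_kwh_m2: float, shading_loss: float,
                        performance_ratio: float = 0.80) -> float:
    """Annual energy = area x efficiency x irradiance x (1 - shading) x PR."""
    return (panel_area_m2 * panel_efficiency * annual_irradiance_kwh_m2
            * (1.0 - shading_loss) * performance_ratio)

# 20 panels (~1.7 m^2 each) at 20% efficiency, 1,800 kWh/m^2/yr of sun,
# and 10% of production lost to tree shading -- all assumed values:
print(round(estimate_annual_kwh(34.0, 0.20, 1800.0, 0.10)))  # 8813 kWh/yr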
In September, Aurora launched a range of tools to size battery backup systems and make recommendations based on homeowners’ needs. The software dynamically factors in how much power will be produced by a solar system, the amount of load to be backed up, and the number of batteries and inverters selected. Aurora VP Justin Durack said the product team spent months researching and designing the toolset from the ground up.
Aurora’s platform also allows planners to plot out solar panel sites manually, using drag-and-drop components for things like modules, wiring, connections, combiner boxes, and ground mounts, or to generate site designs on the fly algorithmically. No matter the method, Aurora converts the resulting 3D measurements into 2D single line and layout diagrams while performing hundreds of checks for National Electric Code (NEC) compliance.
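To make "checks for NEC compliance" concrete, here is one illustrative example based on a simplified reading of NEC 690.8, which sizes PV source-circuit conductors at 125% of the maximum circuit current, itself 125% of the modules' rated short-circuit current. This is a sketch of one such check, not Aurora's implementation:

def source_circuit_ok(isc_amps: float, conductor_ampacity: float) -> bool:
    """True if the conductor is rated for 156% of module short-circuit current."""
    max_circuit_current = isc_amps * 1.25           # 690.8(A): irradiance factor
    required_ampacity = max_circuit_current * 1.25  # 690.8(B): continuous duty
    return conductor_ampacity >= required_ampacity

# A string with 10.2 A short-circuit current needs ~15.9 A of ampacity,
# so a 20 A conductor passes this particular check:
print(source_circuit_ok(isc_amps=10.2, conductor_ampacity=20.0))  # True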
Above: Generating an irradiance report with Aurora Solar’s tools.
There’s a sales piece, too. Aurora’s solution uses system performance to model loans, leases, and cash payments and boasts proposal-building tools that let companies import calculations and other information from the project into polished, presentation-ready decks.
Aurora claims the irradiance reports it produces are statistically equivalent to onsite measurements for any location in the world and are certified compliant with the National Electric Code (NEC). Moreover, the company says it is accepted by rebate authorities, including the New York State Energy Research and Development Authority (NYSERDA), Massachusetts Clean Energy Center (MassCEC), Energy Trust of Oregon, New Jersey Clean Energy Fund, Oncor, and Connecticut Green Bank. In March, the California Energy Commission approved Aurora for solar panel installation on new homes.
Aurora operates on a subscription model and offers several options: a $135 per user per month basic tier; a $220 premium tier that adds things like lidar-assisted modeling, NEC validation, and single line diagrams; and a variably priced enterprise tier for “organizations that quote thousands of systems per month.” As of October 2020, over 4 million commercial and residential solar installations had been designed with Aurora’s technology, a number that has grown at a rate of 200,000 projects every month. The company notched 1 million project installations in the last six months alone, coinciding with the start of the pandemic.
“While hardware costs have come down significantly over the last decade, the soft costs of installing solar have remained stubbornly sticky, and this is where Aurora comes in,” Hopper told VentureBeat via email. “Aurora’s technology is helping solar professionals accurately design solar projects at scale and eliminating site visits, thus driving down the cost of solar energy systems for homeowners across the U.S. and internationally.” Aurora’s series B round announced today was led by Iconiq Growth, with participation from existing investors Energize Ventures, Fifth Wall, and Pear VC. This brings the company’s total raised to $70 million.
"
|
3,996 | 2,020 |
"Are the hyper-platforms going to kill your business? | VentureBeat"
|
"https://venturebeat.com/business/are-the-hyper-platforms-going-to-kill-your-business"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Are the hyper-platforms going to kill your business? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Tech companies come in two flavors: those that are platforms and those that want to be platforms. A platform is one of those ideas that is easy to see yet hard to define, and therein lies the challenge for regulators. By the time regulators figure out that a given company is a platform and prone to tipping over into a predatory monopolist, it’s often too late to do anything meaningful to rein the company in. And by the time any remedies are implemented, the fast-moving world of tech innovation has already moved on, rendering those solutions almost meaningless.
One important concept to understand about true platforms is that they have no boundaries. All tech companies want their platform to be all things to all people. Consider Amazon’s desire to own all of ecommerce, which has only been intensified by the COVID-19 pandemic. This includes logistics, brick-and-mortar (or what’s left of it), enterprise/cloud computing, and content creation/distribution. Google/Alphabet has a similarly ambitious purview with its expansion from core search and targeted advertising into everything from consumer devices to Google Health, and a slew of other non-core initiatives.
In addition, tech companies that are still establishing their dominance or feel their core business may be threatened are willing to pay extraordinary premiums to acquire companies they deem a threat. Examples abound from the well-understood acquisitions of Instagram and WhatsApp by Facebook to the less obvious acquisition of YouTube by Google/Alphabet. In the case of YouTube, a major motivation was the preservation of Google’s leadership in all things search. The phenomenal growth of YouTube’s video corpus created a new pool of content that needed to be included in any search service, and it was unclear if Google would have unfettered access to index this valuable content. In all of these cases and many others, the boundary of what is within the platform was substantially extended with little concern, or even awareness, shown by regulators or the public in general.
Given the boundaryless nature of these hyper-platforms, it’s no wonder that tech giants find it almost impossible to be good partners. Everyone they partner with is at risk of becoming roadkill. The current behemoths — Apple, Amazon, Microsoft, Alphabet, and Facebook — are qualitatively different from the previous generation of tech monopolies such as IBM, Cisco, AT&T, and Intel. The difference is that those monopolies really did have well-defined boundaries and efforts to extend their platforms were half-hearted and typically failed. Take Cisco’s end-user initiatives, which included the acquisitions of Flip Video and WebEx. The former was written off within a year and the latter, while not a failure, is redundant in an increasingly crowded space.
In a sense, it was easier to regulate monopolies in the past simply because you could clearly see when they extended themselves into adjacent markets. Regulating something you can see and touch requires less of a leap of imagination than the new world made of bits. You can regulate the acquisition of one airline by another because you only need to look at the routes and hubs to see if competition and/or consumers will be hurt. But what about the acquisition of a corpus of user-generated videos, what or who exactly is being hurt or helped? So, given the static/slow nature of regulation and the fast-evolving nature of tech, how then should businesses approach hyper-platforms? For startups, the imperative is to stay far away. To that end, it’s critical to analyze the potential overlap in core technology and expertise, primary customers, channel, and talent. Founders owe it to themselves to drill into this overlap analysis and demonstrate the proverbial 10x benefit promised by the startup opportunity.
As a sobering example, it’s worth considering the notion of investing in the AWS (Amazon Web Services) ecosystem. For a startup, the benefit of building on top of AWS is clear — immediate scalability and an enormous, addressable customer base. And yet, AWS itself has demonstrated an insatiable appetite to ingest as much of the ecosystem around it as possible with its 165, and counting, distinct compute services. It’s not difficult to imagine that many of these services could have been standalone companies in a world with more robust pro-competition regulatory regimes. However, it would be almost irresponsible to invest in the AWS ecosystem given Amazon’s existential need to vacuum up as much of the value as possible around its so-called “core” cloud infrastructure. Why so-called core? Because, as should be clear by now, the whole notion of core has no meaning other than as a misnomer for limitless or unbounded market ambition.
Possibly the most useful approach to avoiding a premature death at the hands of the hyper-platforms is to pursue opportunities that require deep domain expertise and address a distinct base of customers. Nevertheless, at some point, every new entrant that has meaningful scale will become a target for one or more of the hyper-platforms, as something to be either challenged or acquired. In either case, provided it has built a big enough competitive moat, it should be in a position to deliver an attractive return for its investors and entrepreneurs.
For established businesses, both public and private, hyper-platforms present a different challenge. In this situation, it’s the hyper-platforms that play the role of startups. Take the example of Alphabet’s Waymo autonomous driving division. What are we to make of an initiative that is “making it safe and easy for everyone to get around, without the need for anyone in the driver’s seat”? For the incumbent car companies and their ecosystem there is only one response: innovate. And many of them are doing just that with their “startup” autonomous driving initiatives.
The broader lesson is that all industries, whether they are inherently information based, such as financial services, or more firmly centered in the physical world, like agriculture and manufacturing, must wholeheartedly adopt the mindset of digital innovation in order to survive the inevitable attack from hyper-platforms.
Salman Ullah is managing director of Merus Capital , an early-stage VC firm based in Palo Alto.
"
|
3,997 | 2,020 |
"A virtual companywide hackathon is worth the investment | VentureBeat"
|
"https://venturebeat.com/business/a-virtual-company-wide-hackathon-is-worth-the-investment"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest A virtual companywide hackathon is worth the investment Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
At our annual company retreat, we usually run a hackathon for our dev team. But this year (at our virtual retreat), we decided to expand the hackathon framework to the entire company. We took the basic premise of the hackathon — deep work on a single project — and used it to accelerate our progress toward company priorities. The entire company was split into teams that dedicated their full brain power to projects that feed our company-wide goals. The results were powerful.
The hackathon model outside of development

Running a hackathon outside of a software development team takes a little planning, but the basic premise is the same. People get together and do concentrated work on a single topic or project to make more progress than they would when distracted by their everyday workload.
To make this work, we broke the company into departmental and interdepartmental teams. Each of the teams would take on a single, ambitious project over the course of the day with the goal of completing as much work as possible on the project in the limited amount of time we had. Here’s how we accomplished that:

1. Plan plan plan: structure for work and for fun

We set aside one day of our 3-day virtual retreat for a full-day hack. The following day, we combined a half-day hack in the morning with a half day of presentations and social events.
For the two working days of the hackathon, we planned events that support our company: project work time, presentations on the projects, and culture events.
The executive leadership also did a lot of work before the hackathon to ensure that everyone could hit the ground running on the morning of the full-day hack. They assigned each team leader a high-impact project, defined the project outcomes, and chose the team members who would work on each part of the project.
2. Provide the mental space to work

A hackathon requires intense work without distractions. We made participation in the hackathon mandatory and gave clients and outside stakeholders plenty of notice that we would be largely unavailable during those days. In addition to turning on email away messages, we asked customer-facing employees to inform their accounts ahead of time that they would be slow to respond during the hackathon.
Then we sent out company-wide calendar invitations to reserve the time on our employees’ calendars.
We built in blocks of uninterrupted time for teams to work, but made sure to schedule breaks and recreational activities to break up the day. All of this planning allowed us to point the employees in the right direction and get out of their way.
3. Work on high-priority projects that impact the whole company

To decide on the projects for each team, the executive leadership team identified the projects that would make the most impact across the company. They then defined the outcomes that they needed from the hackathon based on company goals and existing company-wide priorities. Some examples of projects we worked on included replacing manual workloads with automated processes and documenting updated customer profiles and buying cycles.
The projects we chose to work on needed to meet two criteria: they were large projects that would take significant resources to complete, and the completed project would make an impact on the speed of overall work or the completion of company priorities.
4. Tap emerging leaders

One of the ways that we invest in our internal leadership resources is by giving emerging leaders the chance to show their skills through projects that raise their visibility in the company. We asked managers to choose emerging leaders — those new people managers or individual contributors that show management promise — from their teams to facilitate their portion of the hackathon. These leaders were in charge of rolling the plan out to their peers, facilitating the sessions, and presenting the findings to the company at large.
This gave us the chance to support emerging leaders, having them take a more visible role in the company at large. The managers then held a couple of alignment sessions in the week prior to ensure the emerging leaders were prepared and had a detailed plan for facilitating the project. Having emerging leaders run the sessions and present the outcomes added a needed component of employee growth that also aligns well with our values.
Takeaways

As a quickly growing company, we often have more work to do than people to do it. But we owe it to ourselves to prioritize difficult projects that move us toward our goals. A hackathon provides the opportunity to get over that initial hump of work that often slows us down. We often procrastinate on the most important work because we think it’s going to be difficult or take a long time to complete. The all-hands mentality of a hackathon gives everyone the excuse to stop and work on the business instead of working so hard in it.
When we schedule working breaks like this, we allow ourselves the luxury of stepping back from the daily work to create something bigger, to streamline processes, or to align around goals that really matter for the growth of the company.
We also took advantage of this time to work with people we may have minimal contact with in our day-to-day work, which helps to build connections across the team. The hackathon served the dual purpose of jumpstarting projects and strengthening our team culture and connections.
Rob Bellenfant is a serial entrepreneur, investor, and founder and CEO of TechnologyAdvice.
"
|
3,998 | 2,020 |
"3 tech execs on how advanced degrees changed their lives | VentureBeat"
|
"https://venturebeat.com/business/3-tech-execs-on-how-advanced-degrees-changed-their-lives"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored 3 tech execs on how advanced degrees changed their lives Share on Facebook Share on X Share on LinkedIn Presented by the University of Missouri If 2020 had a word of the year, it would be “pivot.” Thanks to a pandemic and the resulting supply chain disruptions, travel restrictions, health concerns, and a competing desire for normalcy, business owners worldwide had to be flexible, adaptable, and when that didn’t work, they had to change course completely.
For a rising number of business owners, pivoting has meant reskilling or upskilling — gaining new capabilities to make growth and progress possible, despite external shifts beyond their control. One increasingly popular way to build skills is through earning an advanced degree , often entirely online.
More than 6.9 million students were earning undergraduate and graduate degrees through online programs as of 2018, reported the National Center for Education Statistics.
As of 2019, 77% of students had enrolled in online courses to improve their career prospects, according to Best Colleges’ Online Education Trend Report 2020.
Of that total, 37% of students were looking for training to pivot their careers and 37% were trying to accelerate their career trajectory. More than twice as many students were pursuing a graduate designation versus an undergraduate degree.
The appeal of upskilling through an advanced degree continues to rise within the tech business community.
Recognizing and filling market gaps

Jerry Ting, co-founder and CEO of Evisort, Inc., couldn’t decide whether to pursue a career in law or business, so he took both the LSAT and GMAT while finishing his undergraduate degree. On admission to law school, he left his job, temporarily setting aside his business aspirations to pursue a legal degree.
Just a few months into his legal education, however, Ting was bemoaning the amount of legal reading required. Professors, advisors, and mentors all told Ting he needed to learn to read faster, which led him to wonder why the whole process couldn’t be automated. Why, Ting asked, couldn’t a program using artificial intelligence (AI) be written to locate those vital kernels of information within legal and business documents? Ting joined an entrepreneurship program and was paired with a local entrepreneur, Amine Anoun, and a team of fellow law students who would act as legal advisors to the venture. Anoun was a PhD student working on an AI-related concept, so Ting proposed the idea of AI automating legal documents for feedback. Together they built a program to do just that.
For the remaining two-and-a-half years of school, Ting, fellow student Jake Sussman, and Anoun worked together to build the business.
Evisort exists because, while immersed in earning an advanced degree, Ting recognized an opportunity to apply AI capabilities to a new industry. He was also able to leverage his education to earn credibility that attracted clients. His advanced degree made Evisort’s success possible.
Applying education to solve real-world challenges

Kao Yang enrolled in the University of Missouri-Columbia’s master’s in data science and analytics program in August 2018 in the hopes of improving her career prospects. Assessing future hiring needs, Yang determined that there was likely a future in data analytics.
“I did my research and found that data science was becoming very popular,” she says. Indeed, the industry is enjoying an 11% employment growth rate. Having a master’s in data science would make her more competitive in the job market, she believed at the start of the program, a belief she has confirmed since graduating in May of 2020 and landing a role as a marketing analyst with Veterans United Home Loans.
As a marketing analyst, Yang’s job is completely data-centered, she explains, and “what I learned at Mizzou helped me a lot in my new job.” Through the University of Missouri’s comprehensive program, Yang learned how to use the latest data tools and techniques to solve business challenges.
“The program laid a great foundation for analyzing results,” Yang says, which opened doors for her in mortgage finance.
Bringing a fashion venture to fruition

Jonnette Oakes launched her business, SHADED by Jonnette, in the summer of 2020, right before beginning her two-year master’s in journalism at the University of Missouri (a.k.a. Mizzou). She had initially enrolled in the program to build on her undergraduate journalism degree, but quickly realized she could hone her business skills through her courses, too. In particular, the visual design course in her first semester led her to work on making her t-shirts more commercially appealing, she says.
SHADED by Jonnette is an Etsy-based apparel venture that relies heavily on tech tools for product development and production, ranging from an iPad to Procreate studio software to Cricut stencils and a heat press. Having benefited from the beautiful clothing her mother sewed as she was growing up, Oakes says, “Watching her inspired me to pursue my own creative passion.” Although Oakes began her master’s at Mizzou to accelerate her journalism career, she sees now that being exposed to business studies in her journalism classes, “…was an added benefit for me,” and has enabled her to pursue two career paths simultaneously.
Whether business owners are looking to build new skills, fortify existing ones to remain competitive, or a combination of the two, many are finding that enrolling in online graduate programs provides convenience and credibility — crucial to succeeding as a student and benefitting from your degree upon graduation.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected].
"
|
3,999 | 2,020 |
"Sphero spinout Company Six launches throwable, video-streaming wheeled drone for first responders | VentureBeat"
|
"https://venturebeat.com/ai/sphero-spinout-company-six-launches-throwable-video-streaming-wheeled-drone-for-first-responders"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sphero spinout Company Six launches throwable, video-streaming wheeled drone for first responders Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Sphero, the Colorado-based company best known for its programmable robots, in May announced the spinoff of Company Six.
CO6 is focused on commercializing intelligence robots and AI-based apps for military, EMT, and fire personnel and others who work in challenging situations. Today the startup took the wraps off ReadySight , a one-pound, throwable robot built for “dangerous and difficult” jobs.
Robots make sense for first responder scenarios, as novel research and commercial products continue to demonstrate. Machines like those from RedZone can autonomously inspect sewage pipes for corrosion, deformation, and debris in order to prevent leaks that could pose health hazards. And drones like the newly unveiled DJI M300 RTK and Parrot Anafi Thermal have been tapped by companies like AT&T and government agencies for maintenance inspections and assistance in disaster zones. CO6 appears poised to carve out a niche in this market, which is estimated to be worth in excess of $3.7 billion.
According to CO6, ReadySight streams video over dedicated first responder and commercial LTE networks. Controlled by a smartphone, the robot integrates technologies that allow for “day and zero light usage,” as well as autonomous and semi-autonomous driving and patrolling modes, two-way audio communication, and unlimited range and usage over cellular. In addition to a speaker and microphone, a white light headset and infrared illuminator, a foldable “tail,” and a time-of-flight distance sensor, ReadySight sports a Sony camera sensor with a 120-degree wide-angle lens and lens shield and a motion sensor paired with a front indicator LED.
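Of those components, the time-of-flight sensor has the simplest principle: it times a light pulse's round trip to a surface. A minimal sketch of the underlying arithmetic (illustrative only, not CO6's firmware):

SPEED_OF_LIGHT_M_S = 299_792_458

def tof_distance_m(round_trip_seconds: float) -> float:
    """Distance = speed of light x round-trip time / 2 (out and back)."""
    return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2

print(f"{tof_distance_m(13.3e-9):.2f} m")  # a ~13.3 ns echo puts the wall ~2 m away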
ReadySight can stream to viewers on the web, with streaming plans starting at $99 per month and first responder plans starting at $149 per month. Both subscriptions include unlimited streaming via priority networks and a free replacement robot if ReadySight is lost in the line of duty.
CO6 envisions ReadySight being deployed in the course of accident investigation, exploring tight or unknown spaces before someone enters, and acting as a sentry to keep eyes on critical areas like crime scenes. The company says ReadySight is expected to ship in Q3 2021.
CO6 began as Sphero’s Public Safety Division, the brainchild of former Sphero CEO Paul Berberian and Jim Booth, both of whom have backgrounds in military service. The products and services it hopes to deliver — which will include a cloud-based analytics and monitoring platform — will be designed to maintain safety and situational awareness and improve decision-making in the field for critical incidents and everyday operating environments.
To fund the productization and market entry of its initial products, CO6 raised a $3 million seed investment from Spider Capital and others, with participation from existing Sphero investors, including Foundry Group, Techstars, and GAN Ventures.
"
|
4,000 | 2,020 |
"Kamua's AI-powered editor helps marketers embrace vertical video | VentureBeat"
|
"https://venturebeat.com/ai/kamuas-ai-powered-editor-helps-marketers-embrace-vertical-video"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Kamua’s AI-powered editor helps marketers embrace vertical video Share on Facebook Share on X Share on LinkedIn Kamua uses AI to autocrop landscape videos into vertical formats Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
A new AI-powered video-editing platform is preparing for launch, designed to help businesses, marketers, and creators automatically transform landscape-shot videos into a vertical format suitable for TikTok, Instagram, Snapchat, and all the rest.
Founded out of London in 2019, Kamua positions itself alongside tools such as Figma, a software design and prototyping tool for product managers who lack certain technical skills. For Kamua, the goal is democratizing the creative and technical processes in video editing.
“Kamua makes it possible for non-editors to directly control how their videos look in any format, on any screen, in multiple durations and sizes, without the steep and long learning curves, hardware expense, and legacy workflows associated with editing software suites,” Kamua CEO and cofounder Paul Robert Cary told VentureBeat.
Kamua, which had been available as an alpha release since last year before launching in invite-only beta back in September, is now preparing for a more extensive rollout on December 1, when a limited free version will be made available to anyone, without any formal application process.
Above: Kamua CEO and cofounder Paul Robert Cary

Reformat

Reformatting videos for different-sized screens is an age-old problem, one that movie studios have contended with for years as they shoehorned productions created in one aspect ratio onto displays built for another. In the modern digital era, businesses and freelance creators also have to contend with a wide array of screens and evolving consumption habits — the viewer could potentially be watching the end-product on any number of displays, ranging from a PC monitor, to a smart TV, to a tablet, or, most likely, a smartphone.
Editing a video that was filmed in landscape so that it plays nice with the much-maligned (but increasingly popular) vertical video format is no easy feat; it’s a problem that can consume considerable marketing and IT resources. And for businesses that want to tailor their advertisements or showreels for vertical-screen configurations without having to film multiple versions, Kamua hopes to fill that niche.
“Kamua obviates the need to shoot multiple orientations, which can often increase costs and time, double the editing workload, and result in missed opportunities,” Cary said.
Driving this demand is the simple fact that more than half of all internet users only ever use a smartphone to access the internet, a figure that’s expected to grow to nearly three-quarters by 2025.
This trend translates into digital video views, too, which are also now driven chiefly by smartphones.
Visionary

Using computer vision and machine learning techniques, Kamua tracks on-screen subjects (e.g., people or animals) to convert landscape videos into organic-looking portrait videos. So when the time comes to port a YouTube video to Instagram’s longform IGTV, for example, Kamua can auto-crop the videos into vertical incarnations, focusing entirely on the action to ensure that the context is preserved.
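Kamua's tracking models are proprietary, but the cropping geometry they feed is easy to illustrate. The toy function below is our own simplification, not Kamua's code: given the tracked subject's horizontal center in a landscape frame, it picks a 9:16 window that keeps the subject centered while staying inside the frame:

def vertical_crop_window(frame_w: int, frame_h: int,
                         subject_cx: float) -> tuple[int, int, int, int]:
    """Return an (x, y, width, height) 9:16 crop centered on the subject."""
    crop_h = frame_h                        # keep the full frame height
    crop_w = int(frame_h * 9 / 16)          # width for a 9:16 portrait window
    x0 = int(subject_cx - crop_w / 2)       # center the window on the subject...
    x0 = max(0, min(x0, frame_w - crop_w))  # ...but clamp it inside the frame
    return x0, 0, crop_w, crop_h

# Subject tracked near the right edge of a 1920x1080 landscape frame:
print(vertical_crop_window(1920, 1080, subject_cx=1800))  # (1313, 0, 607, 1080)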
As Kamua puts it, it’s all about “automating the un-fun parts of video editing,” bypassing the need for software downloads, file syncs, or specially skilled personnel.
In this clip, for example, you can see how the subject of the footage changes mid-scene, with Kamua correctly deciding to switch focus from the cyclist to the skateboarder. Auto-crop can also be manually overridden if it makes a mistake, with any operator able to retarget the focus of the edit in a couple of clicks.
Above: Kamua auto-cropping an action video, where the subject switches mid-scene

Kamua also offers a feature it calls auto-cut, which again uses AI to analyze videos to identify where the editor initially included cuts and transitions between scenes. Kamua displays these in a gridlike format separated by each cut point, making it easier for editors to choose which shots or scenes they wish to use in a final edit (and convert to vertical video, if required).
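Kamua's cut detector is a trained model; a crude, classical stand-in conveys the idea. The sketch below (our own illustration; "demo.mp4" is a placeholder file name) flags a cut wherever the color histogram changes sharply between consecutive frames, using OpenCV:

import cv2  # OpenCV; pip install opencv-python

def detect_cuts(path: str, threshold: float = 0.5) -> list[int]:
    """Return frame indices where a new shot appears to begin."""
    cap = cv2.VideoCapture(path)
    cuts, prev_hist, index = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Coarse 8x8x8 BGR color histogram, normalized for comparison.
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        # Correlation near 1.0 means similar frames; a sharp drop suggests a cut.
        if prev_hist is not None and \
                cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) < threshold:
            cuts.append(index)
        prev_hist, index = hist, index + 1
    cap.release()
    return cuts

print(detect_cuts("demo.mp4"))  # frame indices where new shots begin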
Above: Kamua: Auto-cuts

Elsewhere, Kamua can also generate subtitles using speech-to-text technology in more than 60 source languages, similar to other video platforms such as YouTube. However, Kamua brings its own twist to the mix, as it automatically resizes the captions for the screen format on which it will be displayed.
Above: Kamua: Captions

There are other similar tools on the market already. Last year, Adobe launched a new auto-reframe tool, though it’s only available as part of Adobe Premiere Pro on the desktop. Apple also recently debuted a similar new feature for Final Cut Pro called Smart Conform, though of course that’s only available for Macs. Elsewhere, Cloudinary offers something akin to all of this, but it’s bundled as part of its broader media-management platform.
Earlier this year, Google debuted a new open source framework called AutoFlip, for “intelligent video reframing,” though that does of course require proper technical know-how to implement it into an actual usable product.
What’s clear in all of this, however, is that there is a growing demand for automated video-editing tools that address the myriad screens people use to consume content today.
Vid in the cloud

Kamua, for its part, is an entirely browser-based service, deployed on Google Cloud with all its video processing and AI processing taking place on Nvidia GPUs. According to Cary, Kamua uses proprietary machine learning algorithms that are more than 95% accurate in terms of determining the exact frames where videos can be cut into clips, and neural networks that identify the “most interesting” action to track in a given scene. This is all combined with “highly customized” open source computer vision tools and frameworks, including Google’s TensorFlow, alongside off-the-shelf solutions such as Nvidia NGX and CUDA.
Although Kamua is planning offline support in the future, Cary is adamant that one of its core selling points — to businesses, at least — is its ties to the cloud. And this is perhaps more pertinent as companies rapidly embrace remote working.
“Cloud-based creative software that is automation-centric ticks a lot of boxes for IT departments,” Cary said. “The onus is on us to provide faster and cheaper servers and to ensure 100% up-times.” Looking to the future, Cary said that Kamua plans to offer analytics to its customers, and its roadmap includes a mobile app that can automatically resize videos from the device camera roll. Plans are also afoot to raise a seed round of funding in early 2021 — up until now, Kamua has been funded through a combination of bootstrapping and some angel funding stretching back to a couple of products the team developed before pivoting entirely to Kamua in 2019.
In terms of pricing, the company officially opens its basic free tier next week, which will allow only a limited number of watermarked videos each month, limit video processing and cloud bandwidth, and leave out automated captions. The company’s paid plans, which will launch at a later date, will start at around $25 per month, going up to the $100-a-month “premium” plan that will offer more cloud storage, video processing, and other add-ons.
"
|
4,001 | 2,020 |
"How Mark Kelly used conversational AI to help win a Senate seat | VentureBeat"
|
"https://venturebeat.com/ai/how-mark-kelly-used-conversational-ai-to-help-win-a-senate-seat"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How Mark Kelly used conversational AI to help win a Senate seat Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Conversational artificial intelligence has rapidly grown smarter and scaled since chatbots first entered mainstream social media in 2016. The first few iterations of chatbots on Facebook Messenger were simple, enabling restaurant reservations, flower deliveries, and other structured calls to action. Now, roughly one U.S. presidential term later, conversational experiences are increasingly intuitive. The AI technologies behind them can manage additional individual complexity, contextualize language more readily, and better simulate human reality — even when talking about politics.
Amplify.ai, an enterprise-level conversational AI platform, has helped 2020 senatorial campaigns drive engagement with local constituents. It uses natural language processing (NLP) and machine learning to attach to existing social media pages, analyze public sentiment and intent, and field individual questions through humanlike interactions with AI chatbots.
Multiple Democratic candidate campaigns incorporated the platform into their existing digital strategy, though Amplify.ai CEO Mahi de Silva told VentureBeat that contractual obligations prevented him from sharing a full client list. Senators-elect Mark Kelly (D-AZ) and John Hickenlooper (D-CO), however, have publicly partnered with the company.
Jaime Harrison (D-SC), associate chair of the Democratic National Committee and recent challenger to incumbent Senator Lindsey Graham (R-SC), has too.
Mark Kelly was not a typical candidate: Before running for office, he served as a United States Navy captain, completed missions as a NASA astronaut, and launched a political action committee with his wife, former Congresswoman Gabrielle Giffords. The election also proved atypical. After John McCain passed away in 2018, his senate seat was held by two different Arizonian Republican appointees — Jon Kyl, then Martha McSally — in a two-year period. Kelly challenged incumbent McSally for McCain’s remaining term. Kelly’s campaign also needed to account for factors such as COVID-19 restrictions and Arizona’s history as a swing state.
In a statement to VentureBeat, Justin Jenkins, the digital director for Kelly’s campaign, commented on his team’s adoption of conversational AI. “When the pandemic hit, the campaign quickly began exploring new and creative ways to replicate the in-person conversations that we traditionally had at the doors. We chose to test Amplify’s conversational AI because of its ability to scale and customize the user experience based on the user’s history with the campaign.” Kelly’s campaign couldn’t risk spreading COVID-19 by visiting constituents. But, in some ways, chatbots’ accessibility and slight personalization parallel the door-to-door, in-person canvassing that campaigns relied on for voter education and engagement before the pandemic.
Older chatbots were primarily based on strict inputs and outputs. For example, if a user typed “what is the capital of Arizona” into a bot on Facebook Messenger four years ago, the bot might have replied “Phoenix.” The conversational AI used in the most recent election goes further, working to interpret the user’s intent, or the different phrases people may use to ask about one topic. It then seeks to assemble a helpful, friendly, relevant response — and mirror the back-and-forth exchange of a human conversation.
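To make that contrast concrete, here is a minimal sketch of the two styles: a strict lookup table versus fuzzy intent matching. The intents, example phrasings, and threshold are invented for this illustration; this is not Amplify.ai's implementation.

```python
from difflib import SequenceMatcher

# Old style: exact input -> output lookup
FAQ_LOOKUP = {"what is the capital of arizona": "Phoenix."}

# Newer style: each intent carries several example phrasings
INTENTS = {
    "volunteer": ["how do i volunteer", "can i help the campaign", "sign me up"],
    "voting_info": ["where do i vote", "how do i register to vote", "polling place"],
}

def similarity(a, b):
    # Ratio of matching characters between the two lowercased strings
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def classify_intent(message, threshold=0.6):
    # Score the message against every example phrasing; keep the best intent
    best_intent, best_score = None, 0.0
    for intent, examples in INTENTS.items():
        for example in examples:
            score = similarity(message, example)
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent if best_score >= threshold else None

print(FAQ_LOOKUP.get("what is the capital of arizona"))          # "Phoenix."
print(classify_intent("How can I help out with the campaign?"))  # "volunteer"
```

The lookup bot only answers questions it has seen verbatim; the intent matcher tolerates rephrasings, which is the behavioral difference the paragraph above describes.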
Like a house call, individual messages from a chatbot could more strongly connect users to a candidate’s platform and allow a campaign to recruit them as donors, volunteers, and voters. The Mark Kelly campaign reported that it engaged with over 180,000 voters via Facebook Messenger in the first month of its conversational AI program.
De Silva differentiates these chatbot conversations from those people have with virtual voice assistants like Siri or Alexa, in which the user receives precise information the product has amassed from a database. In an interview with VentureBeat, de Silva said AI messaging creates “a consumer- or citizen-to-brand organization conversation … so it’s highly dynamic, it’s not trying to put all the resources into one system.” These dynamics are amplified by machine learning that tracks user behavior.
For example, the platform might decipher a comment along the lines of “Mark Kelly rocks” on the campaign’s public Facebook page and autonomously reach out to the poster on Messenger. The chatbot would thank the user for engaging with Kelly’s campaign and lean into the positive sentiment by asking if they are open to talking. If the platform analyzes a less positive comment, it may express interest in understanding different points of view.
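A toy version of that sentiment-triggered outreach might look like the sketch below. The wordlists and reply templates are hypothetical stand-ins; a production platform would use a trained sentiment model rather than keyword counts.

```python
# Hypothetical wordlists; real platforms use trained sentiment models.
POSITIVE = {"rocks", "love", "great", "awesome"}
NEGATIVE = {"hate", "terrible", "awful", "worst"}

def sentiment_score(comment):
    # Positive minus negative keyword hits after basic normalization
    words = set(comment.lower().replace("!", "").replace(".", "").split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def outreach_message(comment):
    score = sentiment_score(comment)
    if score > 0:
        return "Thanks for supporting the campaign! Open to chatting about getting involved?"
    if score < 0:
        return "We'd like to understand your point of view. What issues matter most to you?"
    return None  # neutral comments trigger no automated outreach

print(outreach_message("Mark Kelly rocks!"))  # takes the positive branch
```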
Amplify.ai also tabulates comments and reactions that flow to a campaign’s Facebook account. The platform then performs sentiment and intent analysis on each data point and visualizes it on a dashboard so team members can closely track the audience’s interactions as a campaign unfolds. “You could imagine that a smiley face is pretty easy to find, you know, to associate positive sentiment with,” de Silva said. “But if you get a comment, we actually have to process that in context of what the post was trying to achieve.” In addition to the insights campaign workers gained through Amplify.ai’s analysis, conversational AI can engage with people at a speed and scale unmatched by human teams. If, for example, a campaign received over 100,000 written engagements in one month, that would translate to over 3,000 individual messages or comments per day, which would require at least six full-time volunteers or staff members to reply to an average of a message a minute. The right AI could, in theory, reduce and manage this task while engaging constituents and inspiring them to volunteer, vote, and donate. According to de Silva, Amplify.ai has created over 10 billion engagements with over 500 million consumers since its launch.
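The staffing arithmetic in that example checks out under simple assumptions (a 30-day month and 8-hour shifts are our assumptions, not figures from the campaign):

```python
# Back-of-envelope check of the staffing math (assumed workday figures)
engagements_per_month = 100_000
days_per_month = 30
replies_per_person_per_day = 60 * 8  # one reply per minute over an 8-hour shift

messages_per_day = engagements_per_month / days_per_month      # ~3,333 per day
staff_needed = messages_per_day / replies_per_person_per_day   # ~6.9 people

print(round(messages_per_day), round(staff_needed, 1))  # 3333 6.9 -> "at least six"
```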
Conversational AI will likely see gains in intelligence, credibility, adoption, and deployment speed in upcoming years. Startups such as Hyro, Pypestream, and Orbita are also working to provide businesses with conversational AI solutions for customer engagement. Hyro’s clients include government agencies, and marketing head Aaron Bours told VentureBeat that “if the AI is smart, fast, and human enough, you might actually have a discussion with it over key issues [such as] tax programs or foreign policy.”
"
|
4,002 | 2,020 |
"Facebook acquires messaging marketing automation startup Kustomer | VentureBeat"
|
"https://venturebeat.com/ai/facebook-acquires-marketing-messaging-automation-startup-kustomer"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Facebook acquires messaging marketing automation startup Kustomer Share on Facebook Share on X Share on LinkedIn A mousepad with the Facebook logo is seen at Facebook's London headquarters Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Facebook today announced it will acquire Kustomer, a New York-based customer relationship management startup, for an undisclosed amount. When the deal closes, Facebook says it will natively integrate Kustomer’s tools with its messaging platforms, including WhatsApp and Messenger, to allow businesses and partners to better manage their communications with users.
For most brands, guiding and tracking customers through every step of their journeys is of critical operational importance. According to a recent PricewaterhouseCoopers report , the number of companies investing in omnichannel experiences has jumped from 20% to 80%, and an Adobe study found that those with the strongest omnichannel customer engagement strategies enjoy 10% year-over-year growth on average and a 25% increase in close rates.
“We’ve witnessed this shift firsthand as every day more than 175 million people contact businesses via WhatsApp. This number is growing because messaging provides a better overall customer experience and drives sales for businesses,” Facebook VP of ads and business products Dan Levy and WhatsApp COO Matt Idema wrote in a blog post. “As businesses adjust to an evolving digital environment, they’re seeking solutions that place people at the center, especially when it comes to communication. Any business knows that when the phone rings, they need to answer it. Increasingly, texts and messages have become just as important as that phone call — and businesses need to adapt.” AOL and Salesforce veterans Brad Birnbaum and Jeremy Suriel founded Kustomer in 2015, which went on to attract customers including Sweetgreen, Ring, Glossier, Rent the Runway, Away, and Glovo. The company’s platform let clients search, display, and report out-of-the-box on objects like “customers” and “companies,” with tweakable attributes such as orders, feedback scores, shipping, tracking, web events, and more. On the AI side of the equation, Kustomer offered a conversational assistant that collects customer information for human agents and auto-routes conversations.
Kustomer’s workflow and business logic engines supported the creation of conditional, multi-branch flows that enabled each step to use the output of any previous step and to trigger responses based on defined events from internal or third-party systems. From a dashboard, managers could view which agents were working in real time and launch customer satisfaction surveys (or view the results of recent surveys). The dashboard also exposed sentiment to provide a metric for overall customer service effectiveness, and it enabled admins to customize Kustomer’s self-service, customer-facing knowledge base with articles, tutorials, and rich media including videos, PDFs, and other formats.
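As a rough illustration of the workflow engines described above (a generic sketch, not Kustomer's actual engine), a conditional multi-branch flow can be modeled as named steps that each read the accumulated context and return the next step to run:

```python
# Each step receives the accumulated context (outputs of previous steps)
# and returns the name of the next step, or None to stop.

def classify(ctx):
    ctx["topic"] = "shipping" if "track" in ctx["message"].lower() else "general"
    return "route"

def route(ctx):
    return "send_tracking" if ctx["topic"] == "shipping" else "assign_agent"

def send_tracking(ctx):
    ctx["reply"] = f"Your order status: {ctx['order_status']}"
    return None

def assign_agent(ctx):
    ctx["reply"] = "Connecting you with an agent."
    return None

STEPS = {"classify": classify, "route": route,
         "send_tracking": send_tracking, "assign_agent": assign_agent}

def run_workflow(ctx, start="classify"):
    step = start
    while step is not None:  # follow whichever branch each step selects
        step = STEPS[step](ctx)
    return ctx

ctx = run_workflow({"message": "Where can I track my package?",
                    "order_status": "out for delivery"})
print(ctx["reply"])  # -> Your order status: out for delivery
```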
Last year saw the launch of KustomerIQ, which allowed companies to train AI models to address their unique business needs. The models in question could automatically classify conversations and customer attributes, reading messages between customers and agents using natural language processing techniques.
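The pattern KustomerIQ describes, training a model on labeled conversations and then auto-tagging new ones, is standard text classification. A minimal sketch with scikit-learn, using made-up training data, might look like this:

```python
# Illustrative only: the labels, messages, and model choice are stand-ins,
# not KustomerIQ's proprietary pipeline. Assumes scikit-learn is installed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "My package never arrived",       # shipping
    "Where is my order",              # shipping
    "I was charged twice",            # billing
    "Please refund my last payment",  # billing
]
labels = ["shipping", "shipping", "billing", "billing"]

# TF-IDF features feeding a logistic regression classifier
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

print(model.predict(["I need a refund for a duplicate charge"]))  # likely ['billing']
```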
Prior to the Facebook acquisition, Kustomer raised $173.5 million across six fundraising rounds. Earlier this morning, The Wall Street Journal reported that the deal announced today could value the startup at more than $1 billion.
Birnbaum, Suriel, and the rest of the Kustomer team will join Facebook once the transaction is approved. Facebook says that Kustomer businesses will continue to own the data that comes from interactions with their customers, but that it eventually expects to host Kustomer data on its infrastructure.
“Once the acquisition closes, we look forward to working closely with Facebook, where we will continue to serve our customers and work with our partners as part of the Facebook family,” Birnbaum wrote in a blog post.
“With our complementary capabilities, we will be able to help more people benefit from customer service that is faster, richer and available whenever and however they need it — via phone, email, text, web chat or messaging. In particular, we look forward to enhancing the messaging experience which is one of the fastest growing ways for people and businesses to engage.”
"
|
4,003 | 2,020 |
"Ethical AI isn't the same as trustworthy AI, and that matters | VentureBeat"
|
"https://venturebeat.com/ai/ethical-ai-isnt-the-same-as-trustworthy-ai-and-that-matters"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Ethical AI isn’t the same as trustworthy AI, and that matters Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Artificial intelligence (AI) solutions are facing increased scrutiny due to their aptitude for amplifying both good and bad decisions. More specifically, for their propensity to expose and heighten existing societal biases and inequalities. It is only right, then, that discussions of ethics are taking center stage as AI adoption increases.
In lockstep with ethics comes the topic of trust. Ethics are the guiding rules for the decisions we make and actions we take. These rules of conduct reflect our core beliefs about what is right and fair. Trust, on the other hand, reflects our belief that another person — or company — is reliable, has integrity and will behave in the manner we expect. Ethics and trust are discrete, but often mutually reinforcing, concepts.
So is an ethical AI solution inherently trustworthy?
Context as a trust determinant
Certainly, unethical systems create mistrust. It does not follow, however, that an ethical system will be categorically trusted. To further complicate things, not trusting a system doesn’t mean it won’t get used.
The capabilities that underpin AI solutions – machine learning, deep learning, computer vision, and natural language processing – are not ethical or unethical, trustworthy or untrustworthy. It is the context in which they are applied that matters.
For example, OpenAI’s recently released GPT-3 text generator can be used to pen social commentary or recipes. The specter of AI algorithms generating propaganda raises immediate concerns. The scale at which an AI pundit can be deployed to spread disinformation or simply influence the opinions of human readers who may not realize the content’s origin makes this both unethical and unworthy of trust. This is true even if (and this is a big if) the AI pundit manages to not fall prey to and adopt the racist, sexist, and other untoward perspectives rife in social media today.
On the other side of the spectrum, I suspect the enterprising cook conducting this AI experiment, which resulted in a watermelon cookie, wasn’t overly concerned about the ethical implications of a machine-generated recipe — but also entered the kitchen with a healthy skepticism. Trust, in this case, comes after verification.
Consumer trust is intentional
Several years ago, SAS (where I’m an advisor) asked survey participants to rate their level of comfort with AI in various applications from health care to retail. No information was provided about how the AI algorithm would be trained or how it was expected to perform, etc. Interestingly, respondents indicated they trusted AI to perform robotic surgery more than AI to check their credit. The results initially seemed counterintuitive. After all, surgery is a life-or-death matter.
However, it is not just the proposed application but the perceived intent that influences trust. In medical applications there is an implicit belief (hope?) that all involved are motivated to preserve life. With credit or insurance, it’s understood that the process is as much about weeding people out as welcoming them in. From the consumer’s perspective, the potential and incentive for the solution to create a negative outcome is pivotal. An AI application that disproportionately denies minorities favorable credit terms is unethical and untrustworthy. But a perfectly unbiased application that dispenses unfavorable credit terms equally will also garner suspicion, ethical or not.
Similarly, an AI algorithm to determine the disposition of aging non-perishable inventory is unlikely to ring any ethical alarms. But will the store manager follow the algorithm’s recommendations? The answer to that question lies in how closely the system’s outcomes align with the human’s objectives. What happens when the AI application recommends an action (e.g., throw stock away) at odds with the employee’s incentive (e.g., maximize sales — even at a discount)? In this case, trust requires more than just ethical AI; it also requires adjusting the manager’s compensation plan, amongst other things.
Delineating ethics from trust
Ultimately, ethics can determine whether a given AI solution sees the light of day. Trust will determine its adoption and realized value.
All that said, people are strangely willing to trust with relatively little incentive. This is true even when the risks are higher than a gelatinous watermelon cookie. But regardless of the stakes, trust, once lost, is hard to regain. No more trying a recipe without seeing positive reviews — preferably from someone whose taste buds you trust. Not to mention, disappointed chefs will tell people who trust them not to trust you, sometimes in the news. Which is why I won’t be trying any AI-authored recipes anytime soon.
Watermelon cookies aside, what are the stakes for organizations looking to adopt AI? According to a 2019 Capgemini study, a vast majority of consumers, employees, and citizens want more transparency when a service is powered by AI (75%) and to know if AI is treating them fairly (73%). They will share positive experiences (61%), be more loyal (59%) and purchase more (55%) from companies they trust to operate AI ethically and fairly. On the flip side, 34% will stop interacting with a company they view as untrustworthy. Couple this with a May 2020 study in which less than a third (30%) of respondents felt comfortable with businesses using AI to interact with them at all, and the stakes are clear. Leaders must build AI systems – and companies – that are trustworthy and trusted. There’s more to that than an ethics checklist. Successful companies will have a strategy to achieve both.
Kimberly Nevala is AI Strategic Advisor at SAS , where her role encompasses market and industry research, content development, and providing counsel to F500 SAS customers and prospects.
"
|
4,004 | 2,020 |
"Cloud and AI: The biggest trends in personal and SMB video surveillance | VentureBeat"
|
"https://venturebeat.com/ai/cloud-and-ai-the-biggest-trends-in-personal-and-smb-video-surveillance"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Cloud and AI: The biggest trends in personal and SMB video surveillance Share on Facebook Share on X Share on LinkedIn Presented by SpotCam The global pandemic has put a spotlight on personal safety and security , so it’s unsurprising that the video surveillance market is surging as well. Globally, it hit $45.5 billion this year , while AI technology, which is being integrated into video surveillance products at every price point, will hit $100 billion by the year 2025.
Both consumers and small- and medium-size businesses are increasingly looking for solutions to manage the safety and security of homes, businesses and assets. More importantly, they’re in search of solutions that incorporate sophisticated new video analytics, AI and cloud-based storage technology. Manufacturers are racing to meet the demand, according to a report by IFSEC Global.
This trend will grow as the impact of the global pandemic continues to make itself felt for both employees working alone at home and the companies with empty offices.
The growth of AI-powered surveillance
Video surveillance system capabilities are increasing in power and value as new ways to gather, analyze, share, and store digital data are developed. Democratization of the technology means these systems have become affordable for both residential and small- and medium-size business (SMB) customers.
Systems backed by AI have a number of important new features and abilities. That includes lowering the incidence of false alarms, a major priority for most security companies. Standard systems are sometimes unable to distinguish between people, account for environmental changes, or recognize a harmless animal visitor.
AI has vastly improved detection accuracy: analytics software is less likely to raise false alarms, and algorithms are increasingly able to identify age group, gender, and even clothing colors. AI software can also detect loitering and identify patterns of suspicious behavior.
For SMBs, especially in retail environments, algorithms can analyze footfalls and collect data around patterns of customer browsing behavior as well, including how store layout can improve or discourage browsing and buying behavior.
Smart cameras are adding facial recognition technology as well, which is a popular option among businesses where security and access control is a high priority. The technology is under some scrutiny, with privacy and data security issues becoming increasingly prominent, as well as raising questions around the underlying bias these algorithms are often built on.
The increase in cloud solutions
Traditional video surveillance relies on network video recorders (NVR) for storage, but cloud solutions are on the rise, increasing by 13% from last year’s IFSEC report as more businesses and people start to adopt cloud-hosted services in a variety of other arenas.
The vast majority of cloud adopters are using it for storage — over 70%, the report says — while the rest use it for analytics. Cloud means that businesses no longer have to host physical storage servers, which can be a challenge for larger companies.
For smaller companies and residential services, cloud is opening up major opportunities for video surveillance as a service (VSaaS).
With cloud, vendors can offer an end-to-end solution that is automatically maintained and updated on their customers’ behalf. VSaaS dramatically reduces or even eliminates upfront costs for a system, which is particularly attractive to smaller businesses and residential customers.
Companies like SpotCam offer services like human tracking and cloud recording for free for residential customers. The company is launching its latest series, SpotCam Eva 2 and SpotCam FHD 2, in late November, and will remain the only brand in the market that provides continuous cloud recording “for free forever,” it says. It’s also the first company to offer AI services such as face recognition, virtual fence, and fall detection by monthly or yearly subscription.
SpotCam Eva 2 is a smart pan/tilt camera, while SpotCam FHD 2 is a neat fixed-type cube camera, and both provide full HD 1080P high resolution video, plus mobile and web alert and schedule functions. The cameras integrate into most popular smart home platforms including Amazon Alexa, Google Home, IFTTT, and Conrad Connect. The company plans on launching an SMB solution in the near future.
Data in the cloud is often safer than on-premises data as well, less vulnerable to cyber security attacks by being harder to trace, and cloud servers are able to add more robust enterprise-grade security precautions for smaller customers.
The cloud also opens up the possibility of edge analytics, where the raw data processing is done on the camera side, significantly reducing bandwidth and storage size requirements.
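A back-of-envelope comparison shows why that matters for bandwidth. The numbers below are hypothetical (a roughly 2Mbps 1080p stream, 10MB event clips, 20 events a day), not SpotCam's specifications:

```python
# Continuous streaming vs. edge analytics that uploads only event clips
STREAM_KBPS = 2_000   # continuous 1080p stream, ~2 Mbps (assumed)
EVENT_CLIP_MB = 10    # size of one uploaded event clip (assumed)
EVENTS_PER_DAY = 20   # detections per day (assumed)

raw_gb_per_day = STREAM_KBPS / 8 / 1024 * 86_400 / 1024   # stream everything
edge_gb_per_day = EVENT_CLIP_MB * EVENTS_PER_DAY / 1024   # upload events only

print(f"continuous upload: {raw_gb_per_day:.1f} GB/day")   # ~20.6 GB/day
print(f"edge analytics:    {edge_gb_per_day:.2f} GB/day")  # ~0.20 GB/day
```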
Dig deeper: Learn more about SpotCam and how cloud and AI are changing the way video surveillance is done.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected].
"
|
4,005 | 2,020 |
"AI Weekly: The state of machine learning in 2020 | VentureBeat"
|
"https://venturebeat.com/ai/ai-weekly-the-state-of-machine-learning-in-2020"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: The state of machine learning in 2020 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
It’s hard to believe, but a year in which the unprecedented seemed to happen every day is just weeks from being over. In AI circles, the end of the calendar year means the rollout of annual reports aimed at defining progress, impact, and areas for improvement.
The AI Index is due out in the coming weeks, as is CB Insights’ assessment of global AI startup activity, but two reports — both called The State of AI — have already been released.
Last week, McKinsey released its global survey on the state of AI, a report now in its third year. Interviews with executives and a survey of business respondents found a potential widening of the gap between businesses that apply AI and those that do not.
The survey reports that AI adoption is more common in tech and telecommunications than in other industries, followed by automotive and manufacturing. More than two-thirds of respondents with such use cases say adoption increased revenue, but fewer than 25% saw significant bottom-line impact.
Along with questions about AI adoption and implementation, the McKinsey State of AI report examines companies whose AI applications led to EBIT growth of 20% or more in 2019. Among the report’s findings: Respondents from those companies were more likely to rate C-suite executives as very effective, and the companies were more likely to employ data scientists than other businesses were.
At rates 20% to 30% or more higher than those of other companies, high performers were also more likely to have a strategic vision and AI initiative road map, use frameworks for AI model deployment, or use synthetic data when they encountered an insufficient amount of real-world data. These results seem consistent with a Microsoft-funded Altimeter Group survey conducted in early 2019 that found half of high-growth businesses planned to implement AI in the year ahead.
If there was anything surprising in the report, it’s that only 16% of respondents said their companies have moved deep learning projects beyond a pilot stage. (This is the first year McKinsey asked about deep learning deployments.) Also surprising: The report showed that businesses made little progress toward mounting a response to risks associated with AI deployment. Compared with responses submitted last year, companies taking steps to mitigate such risks saw an average 3% increase in response to 10 different kinds of risk — from national security and physical safety to regulatory compliance and fairness. Cybersecurity was the only risk that a majority of respondents said their companies are working to address. The percentage of those surveyed who consider AI risks relevant to their company actually dropped in a number of categories, including in the area of equity and fairness, which declined from 26% in 2019 to 24% in 2020.
McKinsey partner Roger Burkhardt called the survey’s risk results concerning.
“While some risks, such as physical safety, apply to only particular industries, it’s difficult to understand why universal risks aren’t recognized by a much higher proportion of respondents,” he said in the report. “It’s particularly surprising to see little improvement in the recognition and mitigation of this risk, given the attention to racial bias and other examples of discriminatory treatment, such as age-based targeting in job advertisements on social media.” Less surprisingly, the survey found an uptick in automation in some industries during the pandemic. VentureBeat reporters have found this to be true across industries like agriculture , construction , meatpacking , and shipping.
“Most respondents at high performers say their organizations have increased investment in AI in each major business function in response to the pandemic, while less than 30% of other respondents say the same,” the report reads.
The McKinsey State of AI in 2020 global survey was conducted online from June 9 to June 19 and garnered nearly 2,400 responses, with 48% reporting that their companies use some form of AI. A 2019 McKinsey survey of roughly the same number of business leaders found that while nearly two-thirds of companies reported revenue increases due to the use of AI, many still struggled to scale its use.
The other State of AI
A month before McKinsey published its business survey, Air Street Capital released its State of AI report, which is now in its third year. The London-based venture capital firm found the AI industry to be strong when it comes to company funding rounds, but its report calls centralization of AI talent and compute “a huge problem.” Other serious problems Air Street Capital identified include ongoing brain drain from academia to industry and issues with reproducibility of models created by private companies. A team of 40 Google researchers also recently identified underspecification as a major hurdle for machine learning.
A number of conclusions found in the Air Street Capital report are in line with a recent analysis of AI research papers that found the concentration of deep learning activity among Big Tech companies, industry leaders, and elite universities is increasing inequality. The team behind this analysis says a growing “compute divide” could be addressed in part by the implementation of a national research cloud.
As we inch toward the end of the year, we can expect more reports on the state of machine learning. The state of AI reports released in the past two months demonstrate a variety of challenges but suggest AI can help businesses save money, generate revenue, and follow proven best practices for success. At the same time, researchers are identifying big opportunities to address the various risks associated with deploying AI.
For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.
Thanks for reading, Khari Johnson Senior AI Staff Writer VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
"
|
4,006 | 2,020 |
"T-Mobile's mid band 5G is too rare, but fairly fast if you find it | VentureBeat"
|
"https://venturebeat.com/mobile/t-mobiles-mid-band-5g-is-too-rare-but-fairly-fast-if-you-find-it"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages T-Mobile’s mid band 5G is too rare, but fairly fast if you find it Share on Facebook Share on X Share on LinkedIn T-Mobile's mid band 5G now delivers download speeds over 500Mbps and sub-15-millisecond latency, but you'll need a map to find service -- and there isn't one.
Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Regardless of whether you’ve been closely following the last two years of 5G cellular rollouts or are just catching up on them, you should know that there are three flavors of 5G connectivity: low band 5G, which is largely similar to 4G in speed but far-reaching; high band 5G, which is up to 100 times faster than 4G but limited in range; and mid band 5G, which compromises on speed and distance. Cellular carriers in Asia, Australia, and Europe have focused largely on mid band 5G, but for a variety of reasons, most United States carriers launched low and high band 5G, leading to highly problematic, whipsaw-like performance.
One U.S. carrier, Sprint, deployed a relatively small but powerful 5G network using mid band 2.5GHz wireless spectrum before being formally acquired by T-Mobile and shutting the network down. This week, T-Mobile said it has redeployed that spectrum in over 400 cities and towns as part of its plan to offer mid band 5G to 100 million people by the end of 2020. If that happens, T-Mobile will be well on the road to the “transformative 5G network” it has promised — not the fastest at peak, but rather one that delivers the best overall experience to the highest number of users. This should enable a large number of businesses and consumers to actually enjoy 5G’s long-touted performance benefits this year.
Having tested Sprint’s 5G network in Los Angeles, California last year, we’ve waited a long time to see how it would perform at the center of T-Mobile’s 5G layer cake.
So as soon as T-Mobile announced mid band 5G availability in a nearby city, Garden Grove, we went hunting for the faster signal with a brand new iPhone 12 Pro in hand. Here’s what we found.
There’s no map to T-Mobile mid band 5G, and it’s harder to find than Verizon’s high band 5G
In September 2019, T-Mobile blasted larger rival Verizon as “VerHIDEzon” for launching a high band 5G network that offers super high speeds at ranges measured only in city blocks, then “refusing to show customers exactly where their 5G is!” One year later, T-Mobile is facing its own 5G mapping issues.
Our tests of Verizon’s high band “5G UW” have certainly been frustrating: Whether we’ve walked around Providence, Rhode Island or driven through liquor store parking lots outside Disneyland, finding a Verizon 5G signal has been only slightly easier than catching a leprechaun, and though Verizon’s 5G UW maps are sad — individual streets spread across nearly 60 cities, some not actually delivering 5G, in our experience — at least there are maps.
Above: T-Mobile’s current 5G maps don’t differentiate between low and mid band 5G coverage, though they’re fundamentally different in speed and latency, and there are no guides to where mid band 5G service can be found.
Since Sprint’s mid band 5G worked well in its promised parts of Los Angeles, offering clearly differentiated speeds on city streets and stretches of highway, we never would have expected T-Mobile’s implementation to be problematic. But we had a lot of trouble locating and maintaining mid band 5G signals on T-Mobile’s network in Garden Grove. Since T-Mobile’s website doesn’t offer any mid band-specific map, we picked some of the city’s major streets and ran Ookla’s Speedtest repeatedly as we drove around. After an hour of fruitless searching on our own and several hours of waiting, we were ultimately able to get two sets of mid band 5G coordinates directly from T-Mobile.
The coordinates were less than 1,000 feet from one another, and neither location was particularly noteworthy — it wasn’t as if T-Mobile had launched mid band broadly across Orange County’s famed Vietnamese community, Little Saigon. When we asked whether the “Garden Grove” rollout was limited to the tiny area indicated by the map points above, T-Mobile didn’t respond.
T-Mobile’s mid band 5G in Garden Grove had a short range but plenty of power
During our initial testing on self-selected streets such as Garden Grove Boulevard, Main Street, and Brookhurst, we saw zero evidence of mid band connectivity or speeds. Despite a “5G” badge on the iPhone’s screen, downloads ranged from 3Mbps to 87Mbps, all within 4G range and matching the generally uninspired performance we’ve previously seen with T-Mobile’s low band 5G “blanket.” On average, these low band speeds were in the 40-60Mbps range, but could go higher or lower at any moment. We actually saw higher 5G speeds when we exited Garden Grove, hitting over 140Mbps on the freeway in Santa Ana.
Armed with T-Mobile’s specific coordinates, we saw dramatically better results — in line with what Sprint was delivering in mid-2019. As we approached the two Garden Grove locations, our 5G speeds finally jumped over the 100Mbps mark, surging from 101Mbps to 556Mbps — a 5.5x improvement — within just five minutes as we covered the last mile or two before reaching the first location. Almost all of the mid band speed benefit was within about a mile of that first location: We clocked speeds in the 200, 300, 400, and 500Mbps ranges as we drove around, eventually peaking at 567Mbps directly in front of the suggested testing address.
Just to quantify that speed, we re-ran a test we had conducted with Verizon’s high band 5G in Anaheim, downloading the 1.25GB movie Borat from the iTunes Store. The one hour and 23 minute film took just over two minutes to download in 1080p HD, virtually identical to the transfer time we experienced with a Verizon millimeter wave connection (at 732-828Mbps speeds) earlier this month. We also saw super-responsive sub-15-millisecond latencies on T-Mobile mid band 5G, including multiple 12-14ms pings, and even one at 11ms, roughly one-third to one-half the ping times we’ve recorded with low band 5G.
At the second T-Mobile testing location, we couldn’t get much faster than 400Mbps speeds even when we started walking around the only cellular tower we could spot in the area. Even so, that number was higher than the “around 300Mbps” typical download speed T-Mobile is advertising for mid band 5G, but well below the “up to 1Gbps” peak it’s promising, and our numbers came from specified testing sites — presumably with relatively few people sharing the towers.
Mid band upload speeds were fairly consistent across both locations, averaging 70-80Mbps across multiple tests. That’s around two to four times faster than upload speeds we’ve seen with low band 5G, and up to 10 times faster than recent T-Mobile LTE uploads, but those numbers have varied a lot between locations in recent weeks and months.
Will T-Mobile’s mid band 5G be as sketchy as Verizon’s high band 5G?
While there’s certainly some potential for T-Mobile’s mid band 5G to be as obscure and short-range as Verizon’s high band 5G, we don’t think that’s the case right now, or likely to be so in the future. Having experienced Sprint’s 5G network prior to its shutdown, we’ve seen firsthand that this flavor of 5G can indeed work over miles — not just city blocks — and deliver multi-hundred megabit per second download speeds, albeit with less impressive (but still reasonable) upload speeds. We predicted last year that Sprint’s 2019 performance would be akin to “2020’s typical 5G,” and it is — across most of the world’s 5G-adopting countries, just not the United States.
If you can find it, T-Mobile’s mid band 5G comes much closer to delivering on the 5G standard’s transformative promise than anything we’ve yet seen from rival carriers Verizon or AT&T. For now, the problem is precisely in the finding, as we certainly wouldn’t characterize Garden Grove as a “mid-band 5G city” in the way T-Mobile did, and the absence of maps means that users will only stumble upon mid band 5G accidentally rather than enjoying it wherever they go. Your results, of course, may vary based on the cities and towns you live in and visit.
Given T-Mobile’s track record, we’re pretty confident that this will change for the better. Unfortunately, no one really knows when and where, so we’ll have to remain vigilant until true 5G becomes as pervasive as it is powerful.
"
|
4,007 | 2,020 |
"Why .tech domains are hot, and .com is on its way out | VentureBeat"
|
"https://venturebeat.com/commerce/why-tech-domains-are-hot-and-com-is-on-its-way-out"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Deals Why .tech domains are hot, and .com is on its way out Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
For those trying to get their new tech-based project or brand off the ground, we feel for you. Tech has been a major economic driving force for over half a century, so even if you have a revolutionary idea, carving out your own place in that uber-crowded industry is an uphill climb.
That branding doesn’t get any easier when you try to register a domain name for your new website either. After decades of companies, organizations, and even trolling cybersquatters gobbling up domains like Skittles, your project is going to have to pick through whatever random names happen to be left if you want a heavily trafficked .com or .net address extension.
In fact, entrepreneurs have even been known to completely change their product name just because they couldn’t find a web address available for their preferred name choices.
But if you’re running a forward-thinking new brand like LightYears Industries, would you rather spend $381,000 buying the currently for-sale LightYears.com — or wear your futuristic bent on your sleeve with a cool, available, and highly brandable LightYears.tech domain instead? Right now, you can stake out your place in the new .tech world with your own .tech web domain, now 80 percent off for 1-year and 5-year domain rights. That gets you 12 months at your new home on the web for as little as $4.99.
More and more organizations are brushing off the old-school .com or .net addresses in favor of bright, shiny new extensions like .tech, additions that bring instant identification to a new site.
Not only do you avoid creating a long-tail Frankenstein URL because it’s all that’s left in the .com space, but your short, snappy web address also gets a relevant, logical and evocative .tech extension that supplies instant context to your project or brand.
Whether you’re a retailer or a developer, a corporation, an entrepreneur, or even a student, your .tech extension immediately includes you as part of the global tech cohort. It even offers you improved branding opportunities in the search engine rankings.
So far, over 315,000 entities are currently enjoying the .tech name, including major brands like Viacom (viacom.tech) as well as the mecca of tech itself, the annual world-famous Consumer Electronics Show in Las Vegas (CES.tech).
Right now, you can join the ranks of the futuristic vanguard with your own .tech domain name at a big savings off the regular price. Just head to the .tech domain website and enter the code TECHNOW to get 80 percent off the cost of your one-year or five-year domain name.
Prices subject to change.
VentureBeat Deals is a partnership between VentureBeat and StackCommerce. This post does not constitute editorial endorsement. If you have any questions about the products you see here or previous purchases, please contact StackCommerce support here.
"
|
4,008 | 2,020 |
"The SyncPen might be the coolest, smartest digital handwriting system ever devised. | VentureBeat"
|
"https://venturebeat.com/commerce/the-syncpen-might-be-the-coolest-smartest-digital-handwriting-system-ever-devised"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Deals The SyncPen might be the coolest, smartest digital handwriting system ever devised.
Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Smartpen technology is still in its shakedown phase as manufacturers try all sorts of different ways to best replicate and translate the actual motions of a hand using a real pen and actual paper in the digital space.
The SyncPen by NEWYES might be one of the best representations of the technology yet, featuring methods to recreate handwritten notes and drawings both on a hard-surface tablet and with actual ink-and-paper linework.
Successfully funded via Kickstarter and Indiegogo campaigns, this second-generation SyncPen combines the tactile world of handwriting, note-taking, and doodling and converts it seamlessly into editable digital recreations. It’s really not as much a smartpen as an AI-driven writing system.
The SyncPen comes with a pair of interchangeable tips, including a hard tip that can be used with the accompanying 10-inch LCD writing pad. In this mode, the SyncPen acts like a digital stylus, allowing you to go paperless as you take notes, draw, or basically write anything onto your pad’s surface, then save it as a digital file.
But the SyncPen might be even more impressive when it’s rocking its ink-filled tip and the special notebook paper with microdots. As you write, the hidden micro-camera inside the pen is capturing more than 200 frames per second, then cross-checking that against the dot matrix coordinates scanned from the paper to faithfully recreate everything you do in ink on paper right into the SyncPen app. It uses up to 1,024 different pressure levels, which allows an absolutely perfect reproduction while keeping the writing experience fluid, precise, and easy.
And unlike other smartpens, the SyncPen system doesn’t require an unnatural pen holding position, capturing the precise motions while allowing you to hold and move the pen any way you like.
From there, everything you do gets saved to the pen and the cloud, so all your notes are accessible when needed. You can even search your files by keyword so you can easily find any of your past work.
If you’re a messy writer, the SyncPen actually knows 66 languages and can translate your handwriting back into legible text so you — or anyone else — can actually read it. It can also convert a photo of text into machine-readable text via OCR.
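SyncPen's recognition pipeline is proprietary, but the generic step described here, turning an image of writing into machine-readable text, can be sketched with the open-source Tesseract engine via the pytesseract wrapper (the file path is a placeholder):

```python
# Assumes Tesseract plus the pytesseract and Pillow packages are installed.
from PIL import Image
import pytesseract

image = Image.open("notes.png")  # placeholder path to a photographed page
text = pytesseract.image_to_string(image, lang="eng")  # OCR the page
print(text)
```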
Regularly $199, you can save almost $50 off the price of the SyncPen by NEWYES with this current offer, cutting your total to just $149.99.
Prices subject to change.
VentureBeat Deals is a partnership between VentureBeat and StackCommerce. This post does not constitute editorial endorsement. If you have any questions about the products you see here or previous purchases, please contact StackCommerce support here.
"
|
4,009 | 2,020 |
"Onfleet raises $14 million to power last-mile deliveries for ecommerce companies | VentureBeat"
|
"https://venturebeat.com/commerce/onfleet-raises-14-million-to-power-last-mile-deliveries-for-ecommerce-companies"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Onfleet raises $14 million to power last-mile deliveries for ecommerce companies Share on Facebook Share on X Share on LinkedIn Onfleet Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Onfleet, a platform that provides last-mile delivery management tools to third parties, has raised $14 million in a series A round of funding led by Kennet Partners.
The raise comes as demand for ecommerce and associated delivery services has gone through the roof due to the global pandemic, opening the door for companies such as Onfleet to serve as the logistics backbone for retailers wanting to capitalize on the rapid acceleration of online sales.
Dispatch
Founded in 2012, San Francisco-based Onfleet has built a routing and dispatch platform to manage the entire delivery process between a store, restaurant, or warehouse and the customer. A web-based dashboard provides access to Onfleet’s “auto-dispatch” engine, which assigns delivery jobs to drivers based on their availability and proximity, for example, while routes are automatically optimized based on time, location, capacity, and traffic conditions.
Above: Onfleet’s “auto-dispatch” engine
Moreover, an integrated chat platform allows dispatchers to communicate with drivers directly from their dashboard, negating the need to use separate chat apps, while predictive ETAs show when each delivery should arrive based on all the variables. This also generates real-time alerts, allowing businesses to tackle potential delays and let customers know.
Above: Onfleet: Chats and predictive ETAs
Other features spanning proof of delivery, automated status updates, real-time driver tracking, customer feedback solicitation, and more are also part of Onfleet’s various plans. Prices range from $149 a month on the starter plan all the way through to the professional plan, which starts at $1,999 for the full suite of features — this includes a white-label offering that lets companies host the tracking links on their own domain and remove Onfleet’s branding.
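As a rough sketch of the proximity side of the auto-dispatch engine described earlier (hypothetical coordinates and a deliberately simplified model; a real engine also weighs capacity, traffic, and time windows):

```python
# Toy nearest-available-driver assignment with made-up driver data
import math

drivers = [
    {"name": "Ana",  "lat": 37.77, "lon": -122.42, "available": True},
    {"name": "Ben",  "lat": 37.80, "lon": -122.27, "available": False},
    {"name": "Caro", "lat": 37.69, "lon": -122.47, "available": True},
]

def distance_km(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance between two points, in kilometers
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 6371 * 2 * math.asin(math.sqrt(a))

def assign(job_lat, job_lon):
    # Only consider available drivers; pick the one closest to the job
    candidates = [d for d in drivers if d["available"]]
    return min(candidates,
               key=lambda d: distance_km(d["lat"], d["lon"], job_lat, job_lon))

print(assign(37.76, -122.44)["name"])  # -> Ana (closest available driver)
```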
Logistical

Logistics management software, including dynamic and intelligent routing, is hardly a new concept, with delivery giants such as UPS investing heavily in their own versions of the technology. At the other end of the spectrum, a host of fledgling startups are vying for a piece of the $1.6 billion delivery management software market, including Tel Aviv-based Bringg, which raised $30 million during the heart of the global lockdown this year. Meanwhile, San Francisco's DispatchTrack recently raised a whopping $144 million in what was its first ever outside investment in its nine-year history.
Earlier in the year, before the COVID-19 crisis had gripped the world, New York-based Bond secured $15 million for digital delivery service tools and distribution center infrastructure that helps smaller companies keep apace with Amazon.
Onfleet had previously raised around $6 million in its eight-year history, and it claims that it has been profitable for several years. Moreover, it has already amassed some big-name clients from the retail world, including Kroger and Gap, while newer “native” ecommerce players such as booze delivery startup Drizly also use Onfleet’s infrastructure.
With another $14 million in the bank, the startup says it plans to "meet surging customer demand" that has driven "triple-digit" revenue growth and doubled its delivery volume over the past year.
"
|
4,010 | 2,020 |
"Amazon reports $96.1 billion in Q3 2020 revenue: AWS up 29%, subscriptions up 33%, and 'other' up 51% | VentureBeat"
|
"https://venturebeat.com/commerce/amazon-earnings-q3-2020"
|
Amazon reports $96.1 billion in Q3 2020 revenue: AWS up 29%, subscriptions up 33%, and 'other' up 51%

The logo of Amazon is seen at the company logistics center in Boves, France, September 18, 2019.
Amazon today reported earnings for its third fiscal quarter of 2020, including revenue up 37% to $96.1 billion, net income of $6.3 billion, and earnings per share of $12.37 (compared to revenue of $70.0 billion, net income of $2.1 billion, and earnings per share of $4.23 in Q3 2019 ). North American sales were up 39% to $59.4 billion, while international sales grew 37% to $25.2 billion.
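As a quick, illustrative sanity check on the headline figure, the year-over-year growth works out as follows:

```python
q3_2020_revenue = 96.1  # billions of USD, as reported
q3_2019_revenue = 70.0
growth = (q3_2020_revenue - q3_2019_revenue) / q3_2019_revenue
print(f"{growth:.1%}")  # -> 37.3%, consistent with the reported "up 37%"
```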
This is Amazon's second full quarter during the coronavirus pandemic. Given the company's leadership position in online retail and the cloud, its results are a bellwether for the industry. In Q2, Amazon set aside "$4.0 billion in costs related to COVID-19," followed by $2.0 billion in Q3. For Q4, Amazon set aside another "$4.0 billion of costs related to COVID-19." The company does not want to be seen as benefiting too much from the pandemic — its $5.2 billion in quarterly profit in Q2 was the largest ever in its 26-year history. It broke that record again in Q3 with $6.3 billion in quarterly profit, up 200% year-over-year.
In a statement, Amazon CEO Jeff Bezos highlighted that Amazon had “created over 400,000 jobs this year alone.” Indeed, Amazon’s headcount jumped 28% from 876,800 employees in Q2 to 1,125,300 in Q3 (up 50% year-over-year).
Analysts had expected Amazon to earn $92.7 billion in revenue and report earnings per share of $7.41. The retail giant thus easily beat on both. The company's stock was up 1.5% in regular trading and down 2% in after-hours trading. Amazon gave fourth quarter revenue guidance in the range of $112.0 billion to $121.0 billion, compared to a consensus of $112.3 billion from analysts. Bezos noted "more customers than ever shopping early for their holiday gifts, which is just one of the signs that this is going to be an unprecedented holiday season."

AWS settles at sub-30% growth

In Q1, Amazon Web Services (AWS) passed the $10 billion milestone, even as growth continued to slow. In Q2, AWS growth fell to 29% — the first sub-30% growth rate since Amazon started breaking out AWS numbers. It stayed there, at 29%, this past quarter. The growth rate has been falling steadily for the past two years, and while COVID-19 accelerated the trend, at least Q3 wasn't worse than Q2.
$AMZN AWS revenue growth
– Q1 2017: 43%
– Q2 2017: 42%
– Q3 2017: 42%
– Q4 2017: 45%
– Q1 2018: 49%
– Q2 2018: 49%
– Q3 2018: 48%
– Q4 2018: 45%
– Q1 2019: 41%
– Q2 2019: 37%
– Q3 2019: 35%
– Q4 2019: 34%
– Q1 2020: 33%
– Q2 2020: 29%
– Q3 2020: 29%
https://t.co/r5eFZdPWD9 — Emil Protalinski (@EPro) October 29, 2020

AWS is the cloud computing market leader, ahead of Microsoft Azure and Google Cloud. High-percentage growth cannot continue unabated, but for a market leader, sales growth of 29% to $11.6 billion is still impressive.
AWS accounted for about 12.1% of Amazon’s total revenue for the quarter, which is on the lower end but in line with Q2. “We’re seeing a lot of customers who are now moving to the cloud at a faster pace,” CFO Brian Olsavsky said on the Q3 earnings call.
Subscriptions and "other" (ads)

Subscription services were up 33% to $6.58 billion. This segment mainly constitutes Amazon Prime and its 150 million paid members.
Amazon highlighted Prime Day, which took place this year on October 13-14 (and doesn’t fall into Q3 results). Instead of saying this year’s event was its “biggest in history,” as Amazon has said in past years, the company described it as “the two biggest days ever for small and medium businesses in Amazon’s stores.” A company spokesperson declined to comment on how Prime Day 2020 compared to Prime Day 2019.
Amazon's "other" category, which mostly covers the company's advertising business, was up 51% to $5.4 billion in revenue. The company knows plenty about what its customers want to buy, or don't want to buy, and so its advertising business continues to grow. As the company rakes in online shopping gains from the pandemic, its advertising business benefits as well.
As always, Alexa was mentioned many times (19, to be exact) in the company’s press release. Amazon still won’t break out the voice assistant in its earnings reports. In Q1, the company noted that Alexa “can now answer tens of thousands of questions related to COVID-19.” It didn’t say anything similar for Q2 or Q3. Amazon did, however, highlight new Echo devices unveiled during its September showcase. That event reminded us that the company is also in the business of surveillance-as-a-service.
"
|
4,011 | 2,020 |
"What small game studios need to know about security -- before they launch (VB Live) | VentureBeat"
|
"https://venturebeat.com/business/what-small-game-studios-need-to-know-about-security-before-they-launch-vb-live"
|
VB Live: What small game studios need to know about security — before they launch

Presented by Akamai

Launching a new game studio means building the infrastructure of your dreams from scratch. Join our panel of tech and operational experts for help on how to start strong and prepare to scale, plus hear from developers at other independent studios in this VB Live event! Register here for free.
When it comes to launching a brand-new game studio, the most important bit of advice from Jonathan Singer, senior manager of global games industry at Akamai, is: "Plan for success, but design for scale." That means designing your game assuming that you're going to do really well, because although not every game succeeds at a massive scale, you want to be ready when it does.
This is essential to keep top of mind for new studios because founders tend to be the creatives at the heart of the game, rather than network architects. The most important infrastructure from the perspective of the creatives is the game engine, and the technology that allows you to build the type of player experience you want to deliver to your audience.
Then comes monetization: implementing a storefront or partnering with the right publisher to get on the right platforms, and so on from there, working outward and adding new technology as each consideration comes into play.
"When you finally get toward some of what are arguably the most important pieces of the overall player experience outside of the game, it becomes farther and farther away from the core expertise of the people founding the studios," Singer says. "As studios start up, they're thinking, how are we going to make the most awesome game, rather than, how am I going to scale this globally? What happens if I end up with a runaway hit?"

These technology decisions should be integrated from the start, baked into the development of the game. Because otherwise, things get overlooked — and the thing that gets overlooked most often is security. When designing the game you might be thinking about how players could cheat the game economy, but it's the criminals who target your game and your players that directly impact the quality, reputation, and success of your game.
This is also important because a new studio won’t have a traditional in-house network perimeter and its own data center, but instead use cloud services and infrastructure as a service. Decentralized infrastructure means your data is available in the cloud from the start, whether that’s your confidential projects in development, where a leak would have financial repercussions, or player data when you launch your game.
And if you’re starting a business where you’ll be taking people’s personally identifiable information, or taking their credit cards so that they can make in-game purchases, you’re responsible for that data, and you’re also a target. That means you should be thinking about protecting access, protecting against threats, and protecting your applications from a distributed standpoint and very determined antagonists from the beginning.
“What you need to understand, if you’re starting a new studio, is that criminals are highly organized,” Singer explains. “They operate like an enterprise. They have product development that goes into QA. They make feature requests. They have marketing and PR. They spread misinformation through Reddit.” Cybercriminals’ main focus tends to be account takeovers, and they target your players hard with phishing attacks. And they gain the most amount of success targeting your most vulnerable players — the ones who know the least about hackers, the people who haven’t taken corporate training on how to spot a phishing email.
The thing about criminals is they want to make money quickly and don’t want to spend a lot of effort, so setting up multi-factor authentication by default is a big deterrent. A technically able person could spend time and break MFA, or they could put that in their bucket of MFA-protected accounts and sell it to someone who’s interested in doing the work — but most thieves will simply ignore it and move on to the next account they can easily crack with a password cracking application.
“You want to make sure that your company, your game, is not low-hanging fruit for criminals,” he says.
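As a minimal sketch of the TOTP-style multi-factor authentication Singer recommends enabling by default (this is illustrative, not any particular studio's implementation; it assumes the open-source pyotp library, and the account names are hypothetical):

```python
import pyotp

# Enrollment: generate a per-user secret and store it server-side.
secret = pyotp.random_base32()

# Render this URI as a QR code so players can scan it with an authenticator app.
uri = pyotp.TOTP(secret).provisioning_uri(
    name="player@example.com", issuer_name="ExampleGameStudio"
)

def verify_second_factor(user_secret: str, submitted_code: str) -> bool:
    """Check the 6-digit code submitted alongside the password."""
    # valid_window=1 tolerates one 30-second step of clock drift.
    return pyotp.TOTP(user_secret).verify(submitted_code, valid_window=1)
```

Even this small step raises the cost of an account takeover enough that most credential-stuffing attackers move on to easier targets.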
You also want to ensure your community knows exactly how you’ll communicate with them so they don’t fall for phishing attacks, including what kind of information official communications will request. Security should also be an ongoing conversation with your users. And while they know that security is partly on them, they’re still expecting publishers and studios to fix the problems.
There’s always going to be a tradeoff between user experience and security, and you need to work with your users and your technology to find a balance. But the balance can’t be, we leave you to the wolves, and then we clean up the mess.
"'Data breach reveals 14 million gamer accounts' is not the article you want to appear at the top of your Google search results," Singer says. "So you need to think about security ahead of time."

To learn more about setting up a game studio infrastructure to succeed from the start, the basic security protocols that will keep your players safe from the get-go, how small studios can set themselves up to scale large, and more, don't miss this VB Live event.
Don’t miss out! Register here for free.
You will learn about:

– Optimizing Time to Play: removing obstacles from the player experience
– Basic account security practices: building a more secure player base from day 1
– Zero Trust: starting with modern enterprise security practices

Speakers:

– Glen Schofield, Chief Executive Officer, Striking Distance Studios
– Emily Greer, Co-founder & CEO, Double Loop Games
– James Dobrowski, Managing Director, Sharkmob
– Jonathan Singer, Senior Manager – Global Games Industry, Akamai
– Dean Takahashi, Lead Writer, GamesBeat (moderator)
"
|
4,012 | 2,020 |
"Waymo's driverless cars were involved in 18 accidents over 20 months | VentureBeat"
|
"https://venturebeat.com/business/waymos-driverless-cars-were-involved-in-18-accidents-over-20-month"
|
Waymo's driverless cars were involved in 18 accidents over 20 months

Above: Waymo's fully self-driving Jaguar I-PACE electric SUV
Waymo’s driverless cars have driven 6.1 million autonomous miles in Phoenix, Arizona, including 65,000 miles without a human behind the wheel from 2019 through the first nine months of 2020. That’s according to data from a new internal report Waymo published today that analyzed a portion of collisions involving the robo-taxi service Waymo One, which launched in 2018. In total, Waymo’s vehicles were involved in 18 accidents with a pedestrian, cyclist, driver, or other object and experienced 29 disengagements — times human drivers were forced to take control — that likely would have otherwise resulted in an accident.
Three independent studies in 2018 — by the Brookings Institution , the think tank HNTB, and the Advocates for Highway and Auto Safety (AHAS) — found that a majority of people aren’t convinced of driverless cars’ safety. And Partners for Automated Vehicle Education (PAVE) reports a majority of Americans don’t think the technology is “ready for prime time.” These concerns are not without reason. In March 2018, Uber suspended testing of its autonomous Volvo XC90 fleet after one of its cars struck and killed a pedestrian in Tempe, Arizona. Separately, Tesla’s Autopilot driver-assistance system has been blamed for a number of fender benders, including one in which a Tesla Model S collided with a parked fire truck. Now the automaker’s Full Self Driving Beta program is raising new concerns.
Waymo has so far declined to sign onto efforts like Safety First For Automated Driving , a group of companies that includes Fiat Chrysler, Intel, and Volkswagen and is dedicated to a common framework for the development, testing, and validation of autonomous vehicles. However, Waymo is a member of the Self-Driving Coalition for Safer Streets, which launched in April 2016 with the stated goal of working “with lawmakers, regulators, and the public to realize the safety and societal benefits of self-driving vehicles.” Since October 2017, Waymo has released a self-driving report each year, ostensibly highlighting how its vehicles work and the technology it uses to ensure safety, albeit in a format some advocates say resembles marketing materials rather than regulatory filings.
Waymo says its Chrysler Pacificas and Jaguar I-Pace electric SUVs — which have driven tens of billions of miles through computer simulations and 20 million miles (74,000 driverless) on public roads in 25 cities — were providing a combined 1,000 to 2,000 rides per week in the East Valley portion of the Phoenix metropolitan region by early 2020. (Waymo One reached 100,000 rides served in December 2019.) Between 5% and 10% of these trips were driverless — without a human behind the wheel. Prior to early October, when Waymo made fully driverless rides available to the public through Waymo One, contracted safety drivers rode in most cars to note anomalies and take over in the event of an emergency.
Waymo One, which initially transitioned to driverless pickups with a group of riders from Waymo's Early Rider program, delivers rides with a fleet of over 600 autonomous cars from Phoenix-area locations 24 hours a day, seven days a week. It prompts customers to specify pickup and drop-off points before estimating the time to arrival and cost of the ride. As with a typical ride-hailing app, users can enter payment information and rate the quality of rides using a five-star scale.
Using its cloud simulation platform, Carcraft, Waymo says it predicts what might have transpired had a driver not taken over to avert a near-accident — what the company calls a counterfactual. Waymo leverages the outcomes of these counterfactual disengagement simulations individually and in aggregate. Engineers evaluate each counterfactual to identify potential collisions, near-misses, and other metrics. If the simulation outcome reveals an opportunity to improve the system’s behavior, the engineers use it to develop and test changes to software. The counterfactual is also added to a library of scenarios used to test future software.
At an aggregate level, Waymo uses results from counterfactuals to produce metrics relevant to a vehicle’s on-road performance.
While conceding that counterfactuals can’t predict exactly what would have occurred, Waymo asserts they can be more realistic than simulations because they use the actual behavior of the vehicles and objects up to the point of disengagement. Where counterfactuals aren’t involved, Waymo synthesizes sensor data for cars and models scenes in digitized versions of real-world environments. As virtual cars drive through the scenarios, engineers modify the scenes and evaluate possible situations by adding new obstacles (such as cyclists) or by modulating the speed of oncoming traffic to gauge how the vehicle would have reacted.
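Carcraft's internals aren't public, so the following is only a toy illustration of the counterfactual idea, replaying logged motion past the disengagement point and checking for contact; the types and one-dimensional dynamics are invented for clarity:

```python
from dataclasses import dataclass

@dataclass
class State:
    t: float  # seconds
    x: float  # position along the lane, meters
    v: float  # speed, meters/second

def rollout(initial: State, accel: float, horizon: float, dt: float = 0.1):
    """Integrate simple straight-line motion forward from the disengagement point."""
    states, s = [initial], initial
    while s.t < initial.t + horizon:
        s = State(s.t + dt, s.x + s.v * dt, max(0.0, s.v + accel * dt))
        states.append(s)
    return states

def counterfactual_collides(av: State, lead: State,
                            av_accel: float, lead_accel: float,
                            horizon: float = 5.0) -> bool:
    """Would the AV have contacted the lead vehicle had the driver not taken over?"""
    for a, b in zip(rollout(av, av_accel, horizon), rollout(lead, lead_accel, horizon)):
        if b.x - a.x <= 0.0:  # gap closed to zero in this 1-D model: contact
            return True
    return False

# Example: AV at 15 m/s, still accelerating toward a hard-braking lead car 20 m ahead.
print(counterfactual_collides(State(0, 0, 15), State(0, 20, 10), 0.5, -4.0))  # True
```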
As part of a collision avoidance testing program, Waymo also benchmarks the vehicles’ capabilities in thousands of scenarios where immediate braking or steering is required to avoid collisions. The company says these scenarios test competencies crucial to reducing the likelihood of collisions caused by other road users.
Waymo analyzes counterfactuals to determine their severity based on the likelihood of injury, collision object, impact velocity, and impact geometry — methods the company developed using national crash databases and periodically refines to reflect new data. Events are tallied using severity classes ranging from no injury expected (S0) through progressively more serious outcomes (S1 and S2) to possible critical injuries expected (S3). Waymo says it determines this rating using the change in velocity and direction of force estimated for each vehicle.
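Waymo doesn't publish the thresholds behind the S0-S3 scale, so the cutoffs in this sketch are invented; only the general idea, mapping estimated delta-V and impact geometry to a severity class, comes from the report:

```python
def severity_class(delta_v_mph: float, geometry: str) -> str:
    """Toy S0-S3 classifier; the numeric thresholds are made up for illustration."""
    # Treat angled and frontal impacts as riskier than rear-end ones.
    risky = geometry in ("angled", "frontal")
    if delta_v_mph < 5:
        return "S0"  # no injury expected
    if delta_v_mph < 15:
        return "S1" if risky else "S0"
    if delta_v_mph < 30:
        return "S2" if risky else "S1"
    return "S3"  # possible critical injuries expected

print(severity_class(3, "rear-end"))  # S0
print(severity_class(20, "angled"))   # S2
```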
Here is a breakdown of the car data from January 1, 2019 to September 30, 2020, which covers 65,000 miles in driverless mode. The disengagement data is from January 1 to December 31, 2019, which is when Waymo’s cars drove the aforementioned 6.1 million miles.
S0

Waymo cars were involved in one actual and two simulated events (i.e., events triggered by a disengagement) in which a pedestrian or cyclist struck stationary Waymo cars at low speeds.
Waymo vehicles had two “reversing collisions” (e.g., rear-to-front, rear-to-side, rear-to-rear) — one actual and one simulated — at speeds of less than three miles per hour.
Waymo cars were involved in one actual sideswipe and eight simulated sideswipes. A Waymo car made a lane change during one simulated sideswipe, while other cars made the lane change during the other simulated sideswipes and the actual sideswipe.
Waymo reported 11 actual rear-end collisions involving its cars and one simulated collision. In eight of the actual collisions, another car struck a Waymo car while it was stopped; in two of the actual collisions, another car struck a Waymo car moving at slow speeds; and in one of the actual collisions, another car struck a Waymo car while it was decelerating. The simulated collision modeled a Waymo car striking a decelerating car.
Waymo vehicles had four simulated angled collisions. Three of these collisions occurred when another car turned into a Waymo car while both were heading in the same direction. One of the collisions happened when a Waymo car turned into another car while heading in the same direction.
S1

While making a lane change, a Waymo vehicle was involved in a simulated sideswipe that didn't trigger airbag deployment.
Waymo cars were involved in one actual and one simulated rear-end collision that didn’t trigger airbag deployment. In the first instance, a Waymo car was struck while traveling slowly, while in the second instance a Waymo car was struck while decelerating.
There were two actual rear-end collisions involving Waymo cars that triggered airbag deployments inside either the Waymo vehicles or other cars, one during deceleration and the other at slow speeds.
There were six simulated angled accidents without airbag deployments, plus one actual angled accident with deployment and four simulated accidents with deployment.
Waymo points out that the sole incident in which a Waymo car rear-ended another car involved a passing vehicle that swerved into the lane and braked hard. The company also notes that one actual event triggered a Waymo car’s airbags and two events would have been more severe had drivers not disengaged. However, Waymo also says that the severities it ascribed to the simulated collisions don’t account for secondary collisions that might have occurred subsequent to the simulated event.
Falling short

Taken as a whole, Waymo's report, along with its newly released safety methodologies and readiness determinations, isn't likely to satisfy critics who advocate for industry-standard self-driving vehicle safety metrics. Tellingly, Waymo didn't detail accidents earlier in the Waymo One program or progress in the other cities where it's actively conducting car and semi-truck tests.
These cities include Michigan, Texas, Florida, Arizona, and Washington, some of which experience more challenging weather conditions than Phoenix. As mandated by law, Waymo was one of dozens of companies to release a California-specific disengagement report in February. This report showed that disengagement rates among Waymo’s 153 cars and 268 drivers in the state dropped from 0.09 per 1,000 self-driven miles (or one per 11,017 miles) to 0.076 per 1,000 self-driven miles (one per 13,219 miles). But Waymo has characterized disengagements as a flawed metric because they don’t adequately capture improvements or their impact over time.
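The two ways of quoting a disengagement rate convert directly into each other; as an illustrative check, the small mismatch below with the article's one-per-11,017-miles figure simply reflects rounding in the published 0.09:

```python
rate_per_1000_miles = 0.09
miles_per_disengagement = 1000 / rate_per_1000_miles
print(round(miles_per_disengagement))  # ~11111; Waymo's 11,017 implies an unrounded rate

# And back the other way, from the 2019 figure:
print(1000 / 13219)  # ~0.076 disengagements per 1,000 self-driven miles
```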
In 2018, the RAND Corporation published an Uber-commissioned report — “Measuring Automated Vehicle Safety: Forging a Framework” — that laid bare some of the challenges ahead. It suggested that local DMVs play a larger role in formalizing the demonstration process and proposed that companies and governments engage in data-sharing. A separate RAND report estimated it would take hundreds of millions to hundreds of billions of miles to demonstrate driverless vehicle reliability in terms of fatalities and injuries. And Waymo CEO John Krafcik admitted in a 2018 interview that he doesn’t think self-driving technology will ever be able to operate in all possible conditions without some human interaction.
In June, the U.S. National Highway Traffic Safety Administration (NHTSA) detailed the Automated Vehicle Transparency and Engagement for Safe Testing (AV TEST) program, which claims to be a robust source of information about autonomous vehicle testing. The program’s goal is to shed light on the breadth of vehicle testing taking place across the country. The federal government maintains no database of autonomous vehicle reliability records, and while states like California mandate that companies testing driverless cars disclose how often humans are forced to take control of the vehicles, critics assert that those are imperfect measures of safety.
Some of the AV TEST tool's stats are admittedly eye-catching, like the fact that program participants are reportedly conducting 34 shuttle, 24 autonomous car, and seven delivery robot trials in the U.S. But these stats aren't especially informative. Major stakeholders like Pony.ai, Baidu, Tesla, Argo.AI, Amazon, Postmates, and Motion apparently declined to provide data for the purposes of the tracking tool or have yet to make a decision. Moreover, several pilots don't list the road type (e.g., "street," "parking lot," "freeway") used in tests, and the entries for locations tend to be light on the details. Waymo reports it is conducting "Rain Testing" in Florida, for instance, but hasn't specified the number and models of vehicles involved.
Waymo says it evaluates its cars’ performance based on the avoidance of crashes, completion of trips in driverless mode, and adherence to applicable driving rules. But absent a vetting process, Waymo has wiggle room to underreport or misrepresent these metrics. And because programs like AV TEST are voluntary, there’s nothing to prevent a company from demurring as testing continues during and after the pandemic.
Other federal efforts to regulate autonomous vehicles largely remain stalled.
The Department of Transportation’s recently announced Automated Vehicles 4.0 (AV 4.0) guidelines request — but don’t mandate — regular assessments of self-driving vehicle safety. And they permit those assessments to be completed by the automakers rather than standards bodies. Advocates for Highway and Auto Safety have also criticized AV 4.0 for its vagueness. And while the House of Representatives unanimously passed a bill that would create a regulatory framework for autonomous vehicles, dubbed the SELF DRIVE Act , it has yet to be taken up by the Senate. In fact, the Senate two years ago tabled a separate bill (the AV START Act) that had made its way through committee in November 2017.
Coauthors of the RAND reports say it’s important to test the results of self-driving software with a broad, agreed-upon framework in place. The University of Michigan’s MCity in January released a white paper laying out safety test parameters it believes could work — an “ABC” test concept of accelerated evaluation (focusing on the riskiest driving situations), behavior competence (scenarios that correspond to major motor vehicle crashes), and corner cases (situations that test limits of performance and technology). In this framework, on-road testing of completely driverless cars is the last step — not the first.
"They don't want to tell you what's inside the black box," Matthew O'Kelly, who coauthored a recent report proposing a failure detection method for safety-critical machine learning, recently told VentureBeat. "We need to be able to look at these systems from afar without sort of dissecting them."
"