The DeanBeat: 12 predictable predictions on gaming in 2019 | VentureBeat
https://venturebeat.com/2018/12/28/the-deanbeat-12-predictable-predictions-for-gaming-in-2019
Here we go again with our annual game industry predictions.

I’m making my annual predictions for the game industry again, not because I’m good at it, but because it has become one of the annual rites of December for me. I’m usually wrong or predictable when I make my predictions, but one day I’ll get it right. I can’t be wrong all of the time. As the game industry grows beyond the $134.9 billion behemoth it was in 2018, I hope that I can at least help people plan for the future or take a guess about something that forecasters might not otherwise think about. We are what we pretend to be, and today I’ll pretend to be a seer. I have 22 years of experience as a beat writer on games, but I have never programmed or designed a game in my life. I do play them, though, and I talk to a lot of people about games.
I’ve written about 17,000 stories for VentureBeat over 11 years, mostly about what other people think about the future of tech and games. Once a year, I look back at last year’s predictions and try to get into prediction mode. You’ll see some familiar themes that we will explore in our upcoming GamesBeat Summit 2019 event in Los Angeles. First, I’ll give myself grades for last year’s predictions, and then I’ll make 12 new ones. For the usual comparison and embarrassment, here are my predictions for 2017, 2016, 2015, 2014, 2013, and 2012.

My 2018 scorecard

1. Red Dead Redemption 2 will be the Game of the Year

Above: Red Dead Redemption 2

I was right that the much-delayed game would debut in 2018. I felt this epic Western deserved Game of the Year, after I played through 105 missions for more than 50 hours. It was a rare beauty of a game, with more than 3,000 people contributing to it over the course of eight years. But it was edged out at The Game Awards by God of War, and GamesBeat’s own team outvoted me and gave that title to another game. Yet Rockstar Games may have the last laugh, as the game is a huge commercial success and could sell more than 20 million copies in its first two months. Let’s just say I was almost right.

Letter grade: B

2. The Nintendo Switch will be the bestselling console of the year

Above: Nintendo Switch

Nintendo has had good lifetime sales for the Switch so far, selling an estimated 25 million units since March 2017. It is more successful than its predecessor, the Wii U, by far. But it’s unclear which console sold the most units in 2018, as none of the parties have reported sales yet, and the all-important month of December still has to play out. In the second fiscal quarter ended September 30, sales were somewhat disappointing.

Letter grade: TBD

3. Gamer rage will take new and unpredictable forms

Above: The Monk in Diablo: Immortal.

I was right about this one, but I didn’t think Blizzard Entertainment would be the instigator. When Blizzard promised a new glimpse of a Diablo game for BlizzCon 2018, gamers got their hopes up. But they were very upset to find out that Diablo: Immortal would be a mobile game. They feared, wrongly, that Blizzard wasn’t working on a big PC title related to Diablo. Epic Games’ popular Fortnite title also took a hit from the hip-hop artist 2 Milly, who claimed Epic Games copied his Milly Rock dance as an emote in the battle royale game. Even the companies with the best intentions and the best hardcore lineups can run afoul of the internet haters. That’s something we’ll address at our GamesBeat Summit 2019 event, where the theme is building gaming communities.

Letter grade: A

4. VR will get cheaper and better, but it will continue to perform below expectations

Above: Oculus Go.

Facebook did indeed launch a price war in virtual reality with its $200 standalone Oculus Go VR headset in 2018. But that system didn’t ignite a boom in VR sales. The Sony PlayStation VR continued to sell, but it felt like the whole industry was pivoting to VR arcades throughout the year. Facebook’s Oculus will try again in the spring of 2019 with the launch of the Oculus Quest, a wireless headset with full hand controllers. But for now, the consumer reaction to VR is quite muted.

Letter grade: A

5. AR gaming will take off in a variety of ways

Above: Pokémon now appear at scale in augmented reality.

Augmented reality still hasn’t taken off the way anyone hoped or predicted. There’s a lot of it on mobile devices, like Pokémon Go and Jurassic World: Alive. But AR headsets are so expensive that they’re being targeted at enterprises. And there aren’t any AR apps in the top 100 grossing mobile games, unless you count CSR 2’s AR experience. But that was an add-on to a popular game.
Magic Leap began selling the developer version of its AR headset, dubbed the Magic Leap One Creator Edition. But it was priced at $2,300, and very few people were able to enjoy it as a consumer technology in 2018. Like VR, AR looks like it is going to be a slow-cook experience.

Letter grade: C

6. Influencers will grow in importance, but must be reined in as well

Above: PewDiePie doesn’t want to be associated with Nazis, but his language isn’t helping.

Influencers got more popular than ever. Dominique “SonicFox” McLean and Ninja are making lots of money showing off their skills and entertaining people at the same time. But it didn’t seem like the bad-boy influencers like PewDiePie lost any real audience, despite a history of making Nazi jokes. Plenty of misbehaving streamers were banned, and yet others became more popular because of their badness.

Letter grade: B

7. Esports will continue to score funding and growth

Above: Seoul Dynasty is a new Overwatch League team.

As predicted, investment stalled on VR, and a lot of attention shifted to esports investments. Venture capital firms such as Accel became bullish on esports, and traditional sports owners got further into making investments in esports teams. Millions of dollars went into companies like Fnatic, PlayVS, ESL, Newzoo, The Esports Observer, Team Vitality, Popdog, Complexity Gaming, and Gen.G. Market researcher Newzoo predicted that esports could hit $1.7 billion by 2021. Esports is still in its hype stage, but no one has popped the bubble yet.

Letter grade: A

8. The Leisure Economy will gather momentum

Above: Esports is on the rise.

Getting paid to play games was a big theme of our GamesBeat Summit 2018 event. It started as something I had heard from the craziest of forecasters, like Bing Gordon of Kleiner Perkins, and it slowly became something that I believed in as well. (It will come up again at GamesBeat Summit 2019.)
Now I hear other people talking about how streamers and other esports stars are creating new jobs that didn’t exist a generation ago. They include esports athletes, shoutcasters, esports agents, cosplayers, mod designers, user-generated content developers, influencers, livestreamers, and more. There isn’t a huge long tail yet. Mostly, celebrities like the aforementioned Ninja are getting rich playing games. But in the long run, I think this trend will get more exciting.

Letter grade: A

9. Big games will be delayed. Blockbusters will astound us.

Above: The Monitor leads the Dominion in Anthem.

Sadly, some of the most anticipated video games of the year were delayed from 2018 to 2019. One of those was Anthem, from Electronic Arts’ BioWare division. That title is now coming on February 22. We’ve got some big titles coming in 2019 (that we expected in 2018) like Days Gone. We have no idea when others are coming, like Ghost of Tsushima, Halo Infinite, or The Last of Us Part II. Meanwhile, yes, Red Dead Redemption 2, Spider-Man, and God of War astounded us.

Letter grade: A

10. Battle royale titles will multiply and peter out

Above: PlayerUnknown’s Battlegrounds has crossed 10 million daily active users.

I predicted a year ago that PlayerUnknown’s Battlegrounds and Fortnite might be the only games that really prosper in the battle royale genre. I was right about how these games would multiply, but that “peter out” guess was wildly inaccurate. We saw the successful launches of battle royale titles like Call of Duty: Blackout (I was right about that one), Ring of Elysium, and Realm Royale. But rather than peter out, it only seems like this game mode is picking up fans. Red Dead Online has one, and Battlefield V will add it as an update in 2019.

Letter grade: B
© 2023 VentureBeat. All rights reserved.
The DeanBeat: 13 humble predictions for gaming in 2020 | VentureBeat
https://venturebeat.com/2020/01/03/the-deanbeat-12-humble-predictions-for-gaming-in-2020
Above: The Last of Us Part II

I’ve never been good at making predictions about the game industry, but I enjoy it. I figure if I make enough, some of them will come true. So my guesses for the next year have become an annual rite of December for me. One of the easiest predictions to make is that gaming will continue to grow on all fronts. Market researcher Newzoo estimates that the industry will grow its annual revenues to $148.8 billion in 2019, up 7.2% from 2018. By 2022, Newzoo estimates the market will be $189.6 billion. (Nielsen’s SuperData predicts games will grow 4% in 2020 to $124.8 billion.) That’s why we’re seeing gaming dwarf other entertainment media, as gamers who grew up playing games are a bigger part of the older population, and millennials are all-in on games and esports.
I enjoy talking to the seers, and we put many of them on stage at our events, such as the upcoming GamesBeat Summit 2020 in Los Angeles in April. But once a year, I think it is good to go out on a limb myself. First, I’ll give myself grades for last year’s predictions, and then I’ll make 13 new ones. For the usual comparison and embarrassment, here are my predictions for 2018, 2017, 2016, 2015, 2014, 2013, and 2012. At the bottom of the story, I’ve also given myself grades for the predictions I gave last year for 2019.

Here are my predictions for 2020:

1) The Last of Us Part II will be my favorite game of 2020

Above: A kiss at a dance in The Last of Us Part II

It’s not about the zombies, or in this case, the Clickers or the Infected. It’s about the characters and the relationships between them. The original is my favorite game of all time, so I’m less than objective when it comes to judging the sequel. But I’ve played part of the new game, and I am impressed at its improvements. It is up to Naughty Dog to deliver, but the game is on track to be a more ambitious title than the last one, after seven years in the making. What made The Last of Us great is the relationship between Joel and Ellie, who are forced by circumstances to survive together in a dangerous world. Joel had tragedy in his life and shut himself down to human contact, making him one of the most dangerous killers in the postapocalyptic world. But he opened up to Ellie, and she pulled the humanity out of him. The acting was so good, and the bookend echoes of the beginning and the end of the game made it memorable. By the time of the second game, Ellie has become the expert killer, protecting a thin layer of humanity from the zombies and the evil humans.
Joel is still there, as more of a father figure for Ellie, who has found love in her relationship with Dina. But somewhere, something goes wrong, and the whole plot revolves around revenge. No one does a better job of making a game look and feel so real when it comes to narrative and emotion. 2020 is Naughty Dog’s year, and it only needs to deliver. And will I give other games the chance to beat out The Last of Us Part II? Yes, of course.

2) Sony’s PlayStation 5 will be a smashing success

Above: PlayStation 5 is coming holiday 2020.

I don’t yet know what big titles will make or break the PlayStation 5, which is coming in the holidays of 2020. But I’ve got enough confidence in Sony, even with its big management changes, to trust that it’s going to pull off a good launch. We’ve seen leaks that suggest it’s going to be a lot more powerful than the existing PlayStation 4 Pro, based on Sony’s technical decisions. And I like the fact that the SSD will make cutscenes load faster, rather than forcing us to endure long loading screens. Sony still has some of the best first-party studios in the world, such as the newly acquired Insomniac Games and stalwarts like Naughty Dog, Guerrilla Games, Japan Studio, London Studio, Santa Monica Studio, Media Molecule, and Sucker Punch. Those studios are dedicated to outstanding narrative games that will be exclusive to Sony’s PlayStation 5, as needed. Sony also has one of the smartest architects in Mark Cerny, who understands both the hardware and what the games demand of it. Every now and then, Sony has had mixed results because it priced its console too high (the PS3) or forced hardware on us (Blu-ray and the Cell processor) that folks didn’t need, but I don’t foresee such wrong-headed behavior continuing with the PS5. Sony’s reasonable decisions helped it win the PlayStation 4 generation. Some folks expect that Sony will say something about this as early as CES 2020, next week.
I think that both the Sony and Microsoft consoles could sell more in 2020 than they did during their previous launch year of 2013.

3) The Xbox Series X will also be a big success

Above: Xbox Series X

Microsoft has had ups and downs with each console generation. The original Xbox’s hardware was too expensive, leading to $4 billion in losses despite 20 million consoles sold. The Xbox 360 was an excellent machine, plagued by the Red Rings of Death. The Xbox One was well-designed but burdened with the expensive Kinect camera and an unwanted entertainment/media focus. That history doesn’t really bode well for the next machine, the Xbox Series X. But I have to say that the management under Phil Spencer has recently been running pretty smoothly and — perhaps crucially — has been very authentic when it comes to gamer cred. It’s a big machine, but it’s pretty much a PC. It will have an AMD-designed processor/graphics combo, close to the same as Sony’s but perhaps less powerful. But Microsoft has learned some important lessons. It has a good service in the combination of Xbox Live and the Game Pass subscription, and it has gone on an acquisition binge. That means it has finally learned that it takes first-party studios to win console generations. Microsoft now has a lot of studios to go up against Sony’s first-party teams. The big studios include 343 Industries, The Coalition, Playground Games, Turn 10 Studios, Rare, Mojang, InXile, Obsidian Entertainment, Double Fine, and various others. And Ninja Theory looks like it will produce an excellent title, Senua’s Saga: Hellblade II, for the Xbox Series X. This means Microsoft can take its time with each title and make sure they’re primed for the Xbox Series X. And the company isn’t distracted with virtual reality, which could produce headwinds for Sony if VR doesn’t take off in this generation.
4) Fry’s Electronics will shut down, and so will many video game stores

Above: Bare shelves at Fry’s Electronics in Campbell, California.

Fry’s Electronics is the kind of big-box retail store that should have survived and won. But, as numerous reports about bare shelves at all stores show, the big electronics chain looks like it is on its last legs, another victim of Amazon and the digital delivery of games and software. GameStop, the game-focused retailer, is also shutting hundreds of stores around the world. Gamers appear to have spoken, and they now have a lot of choices for online stores, such as Amazon, Steam, and the Epic Games Store. When you’re selling 1s and 0s, there’s really no need for a retail presence. Fry’s claims it isn’t shutting down, but why would the company have no inventory during the holiday season at many of its stores?

5) Nintendo may reveal new hardware, but won’t ship it in 2020

Above: The new Nintendo Switch model from August 2019.

Nintendo launched its successful console-handheld hybrid, the Switch, in March 2017, replacing its failed Wii U machine in the middle of its cycle. The Switch is a smash hit, with more than 34 million consoles sold and 187 million copies of software. It isn’t running out of steam anytime soon, so Nintendo does not need to launch a new console in 2020, as Microsoft and Sony are doing. Sony and Microsoft launched their big consoles in 2013, and they did midlife kickers. But the Switch is targeting a different market, including Nintendo fans who want to play games on the go. Still, the PlayStation 5 and the Xbox Series X could steal a lot of customers when they launch at the end of 2020. So it might be good for Nintendo to start talking about its next machine while Sony and Microsoft ship their new ones. That might stop Nintendo players from defecting. This could mean that Nintendo announces a new system in 2020 (risking a few lost sales) and ships it sometime in 2022.
This rumor about launch timing is speculation on my part, and not based on any inside tip. The Wall Street Journal, meanwhile, reported that Nintendo was working on the Switch 2 and had not yet decided when to announce it. I don’t think Nintendo or the Switch have much to worry about from Sony and Microsoft. After all, Nintendo has Miyamoto, Mario, and so many other secret weapons to use in the console wars.

6) Amazon, Facebook, and Microsoft will join Google in launching cloud gaming services

Above: Project xCloud is Microsoft’s approach to cloud gaming.

Startups like Blade, with its Shadow service, have moved into the cloud gaming market in hopes of overcoming the ghost of OnLive. But Google eased its way into cloud gaming with the go-slow launch of Stadia in November. The company will beef up the service in 2020, and it will have to do a lot more to get fans excited about cloud gaming before the next set of game consoles arrives in the holidays of 2020. Google will get some company in 2020, with a new cloud gaming service from Microsoft in the form of Project xCloud, and I expect Amazon won’t be far behind either. Cloud gaming is going to have a slow launch, but I believe it is here to stay. As for Facebook, last week it confirmed the acquisition of Spanish cloud gaming startup PlayGiga.

7) Big companies and VCs will continue to invest in game companies

Above: Tencent Cloud is on the horizon.

I was worried when I heard earlier this year that AT&T might put Warner Bros. Interactive Entertainment up for sale just to pay down debt. But I’ve since heard that this idea is off the table, and that’s a good thing in my view. WBIE is one of the treasures of the game industry, and it is the most successful example of a Hollywood studio exploiting both movies and games. But there will be plenty of other buying and selling and investing going on in games in 2020.
You can expect that Disney will sell FoxNext Studios, as its focus is on licensing games rather than building them. There are also other transactions in the works that I know about. And why not? Games are the biggest form of entertainment, but game companies aren’t always valued in recognition of that fact. Tencent continues to invest in games, and other Chinese companies like NetEase are doing the same. There are more than 20 game-focused venture capital funds, covering everything from seed rounds to larger investments. That’s more game-focused VCs than ever before. And I haven’t even mentioned esports companies yet. My prediction is that you can continue to expect investments in the smallest game companies and acquisitions of the biggest game companies as well. And what sort of prices would people pay? They could buy Take-Two Interactive (owner of the Grand Theft Auto, Red Dead Redemption, BioShock, NBA 2K, and Civilization franchises) for $13 billion. Or they could buy one aircraft carrier with that money.

8) Esports companies will continue to soar in viewers, valuations, and acquisitions — but not profits

Above: Women are changing esports and broadening its appeal.

Riot Games has had an amazing 10-year run with League of Legends, but the company acknowledges that the esports investment it has put into LoL isn’t yet profitable. That is a surprise to me, given that it has millions of players. And if Riot Games isn’t profitable yet with its esports efforts, then how can anyone else in this business hope to be profitable? I think it will come. And I know that many investors, such as traditional sports team owners, are betting heavily that esports will become as big as the NBA (the second-biggest pro sports league in the U.S.) someday. Esports has all the advantages. It is transforming society, as nerds now have a chance to become socially accepted by playing video games skillfully and getting rewarded for it.
Millennials are more easily reached by brands via esports. And brands such as AT&T and Anheuser-Busch are diving into esports sponsorships. Eventually, media rights revenues should flow all the way down to the teams, turning the big investments in esports organizations into moneymakers. We are definitely in a period of overhype now, and it could all come crashing down. But there are plenty of good reasons for that hype.

9) VR will have its biggest games yet, but will continue to struggle

Above: Medal of Honor: Above and Beyond is coming to VR.

I’m looking forward to some really big virtual reality games next year, such as Respawn Entertainment’s Medal of Honor: Above and Beyond, Facebook Horizon, and Valve’s Half-Life: Alyx. These games have been years in the making, and they’re showing that triple-A developers believe in VR. But it will be hard for these games to find huge audiences, as VR still has a small customer base. Sony has the best results so far with the PlayStation VR, and the Quest is having a strong debut, but VR has plenty of detractors, like Microsoft. It’s going to be an uphill climb for VR to get consumer traction. I think VR will survive, but it will be years before we see enough consumer interest. In the meantime, VR companies can survive by investing in the enterprise or making titles for VR arcades.

10) Augmented reality glasses will become more practical

Above: At 2019’s Snapdragon Tech Summit, Qualcomm demonstrated that its chips can power amazing experiences — but real products delivering those experiences will depend substantially on the execution of its partners.

Qualcomm unveiled its latest tech for augmented reality glasses this fall, and that means AR’s biggest supporters will use those chips to come up with a new generation of AR devices. One of the companies that has already committed to it is Niantic, the maker of Pokémon Go. We are not yet where we want to be with AR.
Facebook has promised us that AR/VR glasses will eventually be as lightweight as ordinary glasses. We aren’t there yet, but with this new generation, we should get closer. Apple is reportedly working on something cool in AR, and this new generation of AR devices is likely to be cool, as Niantic knows more about this field than a lot of game companies. I’m hopeful that we’ll see signs of better AR technology in 2020, with games coming soon to exploit it.

11) Regulatory forces will gather momentum

Above: Star Wars: Battlefront II had a loot box problem at launch.

I can foresee all sorts of regulatory troubles for games. The Federal Trade Commission could crack down on loot boxes and microtransactions for deceiving consumers. Regulators could conclude that loot boxes amount to illegal gambling. Or they could decide that games are addictive and should be controlled, with time limits such as those imposed by China for children. Addiction isn’t a thing to be messed with, and now that the World Health Organization has classified gaming addiction as a real condition, the door is open for regulators to take action. I don’t know how long it will take for worldwide restrictions to be imposed, but the game industry should move to get ahead of this problem, or it will face the consequences.

12) Subscription gaming will gather steam

Above: Microsoft wants to hook you up with a massive library for your Xbox One.

Somebody has to come up with the “Netflix of games.” The phrase has helped many a startup raise money, and Microsoft is moving full speed ahead with Xbox Game Pass, which is becoming an extraordinary value for gamers. Sony is also jumping on this bandwagon, as are companies such as Apple (with Apple Arcade) and Google (with Stadia). Consumers have adopted subscription models in so many other categories of entertainment that it seems like only a matter of time before gamers go for it. My money is on Microsoft in this particular battle.
13) Intellivision will be the wild card of 2020

Above: Intellivision Amico

Tommy Tallarico is dead serious about shipping the best game console of 2020. His Intellivision Amico will go up against the other big consoles as far as timing goes, as the retro video game console will ship on October 10, 2020, perhaps just before Sony and Microsoft ship their machines. But Tallarico is going after a very specific audience: the people who grew up playing with friends or family on video game systems where everybody could play together on the couch. He has convinced a lot of people to partner with his company, he promises some cool games, and he isn’t trying to do too much. Tallarico is targeting those who are nostalgic for non-violent, fun games that everyone can play. We don’t know how big this niche will be, but if Tallarico executes well, this retro revival could be a surprise hit.

My 2019 scorecard

1. The Hunger Games license will surface with a 1,000-player battle royale

Above: Katniss in The Hunger Games

Letter grade: F

I found out that Suzanne Collins, the author of The Hunger Games, is not a fan of violence in media, including video games. That’s pretty much going to stop any licensing of the books or movies for video game purposes. This tells me that it’s not always a good idea to be so precise about your predictions. Mavericks: Proving Grounds also bit the dust, so we don’t have 1,000-player battle royale games yet.

2. Cloud gaming slouches toward Bethlehem

Letter grade: A

Google launched its Stadia cloud gaming service in November. Blade also launched its Shadow cloud gaming service and raised another $33 million in October. Microsoft said it is testing as many as 50 games for its Project xCloud cloud gaming service. Amazon is also reportedly working on its own service. The era of cloud gaming is here. Now we’ll see how much of an impact it makes.

3. China will resume its growth in games

Letter grade: A

The Chinese government stopped approving game launches in China during 2018, but on January 2, the government approved 80 titles and then lifted its foot off the brakes. That was a pretty quick turnaround between prediction and the prediction coming true. A rarity for me.

4. Blockchain gaming startups will produce great ideas, but adoption depends on big companies

Letter grade: A

Blockchain and cryptocurrency startups produced a lot of bright ideas for improving video games during the year. But some of the most ambitious ideas are still waiting for backing from big companies. Robot Cache hasn’t launched its blockchain tech for an app store that is more efficient and allows developers to keep more of the proceeds. Forte cut a deal with Kongregate, and Animoca Brands embraced blockchain in a big way, but except for Ubisoft, the big companies are still holding back.

5. Esports will become a battle royale, with winners and losers

Above: Millennials love esports.

Letter grade: A

The battle royale took place among league owners and operators, as investors and traditional sports owners all scrambled to get a piece of the pie. We didn’t see any big craters, but plenty of pundits are predicting some fallout from over-investment and too much esports hype. Meanwhile, brands like Lexus, Anheuser-Busch, and Louis Vuitton are moving into esports sponsorships. At the close of the year, EA and Respawn launched a new esports league for Apex Legends. Full speed ahead still, as revenues and viewers are still growing.

6. Gamers will think of new ways to play games and make money doing it

Letter grade: A

I didn’t think that this Leisure Economy would move so fast. But during the year, even high schools began adopting esports in a big way. PlayVS raised $50 million for its high school esports league, and it had competition from multiple rivals. So the number of people making a living from esports is widening.
I still have high hopes that the “creator economy” jobs will employ many more people in the future. I’m up for a life of leisure.

7. The Metaverse will start to take shape

Above: Aech’s Basement in Ready Player One.

Letter grade: C

We didn’t see much material movement on this prediction during the year. Many companies are still espousing the idea, and maybe the HBO rendition of Snow Crash will inspire more people to move it further. Perhaps the most ambitious company is Japan’s Gumi, which is investing heavily in games, virtual reality, and blockchain to tie all the threads of the Metaverse, or nexus of connected virtual worlds, together. But it’s going to take time.

8. The lines will blur between games, esports, and gambling

Letter grade: B

The courts have cleared the way for betting on games of skill, and for other esports-related gambling. And companies like Unikrn and Skillz made progress during the year in launching spectator betting and skill-based betting on games. It’s not huge yet, but it is growing.

9. Regulators will catch up with gaming’s frontier and crack down where needed

Letter grade: B

The FTC held hearings on loot boxes and microtransactions to explore whether regulation is necessary in the games business. These in-game rewards that players buy with real money are feared to cause addiction and gambling-like behavior. The crackdown hasn’t happened yet, but loot boxes continued to draw a lot of negative vibes and press during the year.

10. E3 will hold together

Above: E3 2019.

Letter grade: D

When Electronic Arts decided to leave the show floor of the Electronic Entertainment Expo (E3) trade show, some predicted doom for U.S. gaming’s largest event. Sony also decided to skip last year, and prominent show host Geoff Keighley, who has been active at E3 in the past, has begun investing more time in Europe’s Gamescom event. And media coverage appeared to dip 39% from 2017 to 2019.
E3 didn’t do itself any favors by getting hacked and exposing the data of 2,000 journalists. We’ll see if the show recovers with the launch of new consoles from Sony and Microsoft, but there’s a lot of risk for E3 still. 11. Games will make their way to strange platforms Letter grade: A Drivetime gained strong momentum for its voice-based games and raised money during 2019. New platforms are emerging all the time, like the social games on Snapchat and audio games on Amazon Alexa. There’s no slowing of this trend. 12. Reboots continue to breathe life into old franchises Letter grade: A On both the hardware and the software front, reboots and remakes are going strong. We just finished the trilogy of rebooted games for Lara Croft with 2018’s Shadow of the Tomb Raider, and God of War’s reboot was highly successful. The next big one coming is Resident Evil 2. Red Dead Redemption 2 took the prequel route, and it made fans anticipate interesting twists in the storyline. And on hardware, we may see some news about the Atari retro game console, the Atari VCS, and the new Intellivision console reboot coming from Tommy Tallarico. Fans love the gameplay that they know with proven franchises. And sometimes it’s not good to just keep on going with a franchise forever, grinding it down into ever-smaller audiences. Reboots give developers a chance to re-imagine a familiar property from the ground up, create better graphics, and serve fans who want a return to great gameplay without some crazy new tortured plot. As the Tomb Raider reboot showed, a well-executed reboot can reignite an aging franchise. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games.
Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. Join the GamesBeat community! Enjoy access to special events, private newsletters and more. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,269
2,020
"GDC survey: Game devs favor unionization, PC, and next-gen consoles | VentureBeat"
"https://venturebeat.com/2020/01/24/gdc-survey-game-devs-favor-unionization-pc-and-next-gen-consoles"
"GDC survey: Game devs favor unionization, PC, and next-gen consoles The Epic Games event at GDC 2019. Game developers favor the unionization of their industry. They consider the PC to be the best platform to develop games for. And they are excited about the coming next-generation consoles from Microsoft and Sony. Those are the results of a new survey of 4,000 game industry professionals. One of the key results was that 54% of developers now favor unionization, compared to 47% in the survey from a year ago, said Katie Stern, general manager of GDC Events, in an interview with GamesBeat. The Game Developers Conference 2020 is coming up in March, and the event is releasing its annual survey today on the state of the game industry.
The eighth annual survey results suggest a heightened interest in development for next-generation game platforms including PlayStation 5 and Xbox Series X. “It’s not overly surprising with PC kind of being the front runner again,” Stern said. “But what is interesting is where everybody’s starting to think of the next generation of platforms and what they’re developing for next. The current projects already include 11% for the PS5 and 9% for the Xbox Series X. It’s a notable rise in interest in the next-gen platforms.” We also saw an uptick in interest in VR game development, which was potentially spurred by the release of Oculus Quest last year. Organized by Informa Tech, GDC 2020 takes place March 16 to March 20 at the Moscone Convention Center in San Francisco. The GDC drew 29,000 attendees last year, and about the same is expected this year, Stern said. Next-generation consoles Above: Game devs are developing for 16 platforms now. More than 10% of respondents are currently making games for next-gen consoles; interest in streaming services remains nascent. Continuing the trend from previous years, the survey asked respondents what platform their last game was released on, and the results reflect a consistent preference for PC and mobile platforms. But these results get more interesting when respondents detail their current projects. Developers felt that the PC offered unmatched direct access to customers, but they weren’t fond of Valve’s Steam digital distribution service (see below).
While the majority of respondents said their current projects are being developed for the PC (56%) and/or mobile (39%), 11% of survey respondents said their current project is being developed for the PlayStation 5, and 9% said they’re currently targeting the next generation Xbox Series X (still known as “Project Scarlett” at the time the survey was conducted). “We’re definitely in that transition period of people are moving off of the current platforms and into the next one, and trying to figure out what they’re most excited about,” Stern said. The survey also inquired into how many respondents were working on games for Google Stadia and Microsoft’s Project xCloud, two high-profile game streaming services which made waves in 2019. 6% of respondents said they’re currently targeting Google Stadia and just 3% said they’re aiming to put their current project on xCloud, suggesting that developer interest in making games for these services remains nascent. “The sentiment around [cloud gaming] is that it’s in its infancy,” Stern said. “It’s still a bit nascent.” Above: Where developers are focusing their energies next. Developer interest in targeting next-gen consoles increased a bit when respondents were asked what platform(s) they expect to launch their next project on. 23% said they expect their next game will launch on PlayStation 5, while 17% expected it would come to the next-gen Xbox. (Our own conference theme for GamesBeat Summit 2020 is “dawn of a new generation.”) This year it seems next-gen consoles are on developers’ minds; while 50% said they’re most interested in making games for PC, 38% said they’re interested in the PlayStation 5 and 25% said they’re interested in the next-gen Xbox. 37% said they’re interested in the Switch, suggesting that while the platform remains high up in developers’ estimation, the lure of new tech has taken some of the shine off Nintendo’s latest hardware, the GDC said.
72% said that they aren’t making any money on the Nintendo Switch. As for mobile platforms, 50% said they developed for Android, 48% developed for iOS. The VR faithful Above: Oculus Quest (left) connected to an Anker USB cable for Oculus Link, alongside the Oculus Rift S (right). Faith in VR is rising, and current VR devs are shifting focus from the HTC Vive to Oculus and its new Quest headset. For a few years the survey has been polling game industry professionals about which ‘immersive reality’ technology they think will be dominant in five years, and augmented reality has long been the most popular answer. That held true this year, but surprisingly, the survey showed a significant year-over-year uptick in the percentage of respondents who believe VR will be dominant; one in three respondents (32%) said they think AR will dominate, while one quarter (25%) said VR. 19% expect AR and VR to be equally popular, 16% think neither tech will be important in five years, and 6% admitted they just don’t know. Above: Katie Stern is general manager of GDC Events. Last year the survey reflected a similar pattern of responses, except back then (in 2018) only 19% threw in with VR as the eventual dominant immersive reality tech. It’s a small increase, one which may be influenced by the recent mainstream success of the wireless Oculus Quest VR headset. Looking ahead, AR/VR devs seem chiefly interested in the Oculus Quest as the target for their next VR/AR game; when asked which AR/VR platform(s) developers expected their next game to release on, the most popular answer (with 24%) proved to be the Quest, followed by the Rift (20%) and the HTC Vive (17%). Above: Which VR/AR platform will developers target next? However, 32% of respondents said they were yet undecided, leaving plenty of room for things to shift as the AR/VR market evolves.
When asked the same question in 2019’s survey, developers gave the HTC Vive 28% of the vote, the Rift received 25% and the Quest received 13%, marking an 11 and 5 percentage point drop for the Vive and Rift, respectively, and an 11 percentage point jump for Quest. The Oculus Quest appears to have picked up that slack to become the most popular answer in this year’s survey. Unionization Above: GDC 2019 attendees. The majority of game makers think game devs should unionize; few think they will Unionization and labor practices remain hot topics in the game industry, and this year the survey found that when asked whether game industry workers should unionize, the largest share of respondents (54%) said yes. 21% said maybe, 16% said no, and 9% said they weren’t sure. “Building on some of our conversations around unionization for our last couple of years, that continued to be a really hot topic that bubbled up in the survey,” Stern said. “We’ve noticed that a couple of studios in South Korea have already started to unionize.” However, when asked whether they thought game industry workers would unionize, only 23% said yes; 22% said no, and the largest share (43%) said maybe. 12% said they just didn’t know. When compared to last year’s survey, in which respondents were asked these questions for the first time, responses showed a small but significant uptick in confidence that the game industry should and will unionize. Last year, when asked the same pair of questions, just 47% of respondents said that yes, game industry workers should unionize; only 21% of respondents then predicted they would unionize, while 24% felt they wouldn’t unionize. As for crunch time, or unpaid and forced overtime work, the survey asked developers about that again in the 2020 survey. But while developers are working overtime hours similar to the past, much of it was “self-inflicted,” Stern said, “not the result of studio pressure,” at least in terms of the survey results. 
The bulk of developers said they work 36 to 70 hours per week. Nearly half (45%) are still working over 40 hours a week. 59% of them said self-pressure was the reason that they worked long hours. Only 1% said they worked more than 90 hours in a single week. Diversity Above: Speakers and attendees interact at GDC 2019. 21% of respondents said they identify as female, while 2% said they identify as other. In 2000, 17% of developers said they were female. “We have a long way to go, but we are headed in the right direction,” Stern said. About 55% of respondents were from North America, 25% were from Europe, and 9% were from Asia. 6% were from South America and 2% were from Australia or New Zealand. That’s not a surprise, given the GDC is in the U.S. 28% of developers said that studios don’t invest anything in staff inclusion and diversity issues. 16% said a little, and 24% said a moderate amount. 18% said a great deal, and 14% said a lot. 48% of developers said they aren’t incorporating accessibility features into their work, while 28% said they were. The results of the diversity investments were viewed for the most part as successful, Stern said. What was a big investment? Social media. 15% said they made a big investment in social media, 15% said they invested in word of mouth, and 13% invested in live events. 25% invested in YouTube videos. Staffing expansions Above: GDC 2019’s #1ReasonToBe panel. 49% said their company expanded staff in 2019, 36% said it stayed roughly the same size, and 10% said that their company diminished in staffing. 1% said their company closed down. That was roughly in line with past years, though 2018 saw a large number of closings, Stern said. Above: Funding sources for games in 2020 As for funding sources, 51% said they use existing company funds. 8% use angel investors, 17% use external publishers, and 31% said they used personal funds. 10% used venture capital, 11% use government funds, and 6% turned to crowdfunding.
7% relied on a platform owner, and 9% marked “other.” More than 25% of developers are working with publishers. Subscription gaming and new stores Above: Ann Thai “can’t wait” for Apple Arcade to launch. Loot boxes remain unpopular ways to monetize users, with only 8% saying they were using “paid item crates.” But subscription monetization drew mixed results. 27% said they are concerned that Netflix-style subscription services will devalue individual games. 28% said maybe, 26% said no, and 18% said they didn’t know. Only 21% believed that the Apple Arcade subscription service would be a long-term success, while 35% said maybe. 19% said no, and 26% said they didn’t know. One developer said it was less confusing than Stadia, and Apple was making good on its commitment to provide high-quality games without “predatory business practices.” Above: Developers say what business models they’re using. As for Google Stadia, 11% of respondents said it would succeed. 36% said maybe, 33% said no, and 20% said they weren’t sure. Meanwhile, 40% said that the Epic Games Store would be a long-term commercial success, while only 7% said no. 35% said maybe and 18% didn’t know. Developers saw it as competition for Valve’s Steam. Only 7% of developers believed that Steam deserves the full 30% cut that it takes from online store sales. Most thought that 20% or less was more reasonable. Main stage returns Above: Osama Dorias speaks on Muslim representation in games at GDC 2018. In other news this week, Stern said a main stage talk will return for the second year in a row (after a long hiatus in previous years) in the form of The Developer’s Impact, a series of talks about the cultural and social impact of gaming. The speakers include Osama Dorias, lead game designer at Warner Brothers Games Montreal; Ziba Scott, optimist at Popcannibal; and Lyndsay Pearson, lead producer and general manager for The Sims at Electronic Arts’ Maxis.
Dorias will talk about how his struggles growing up with xenophobia as a Muslim Arab in the west inspired the narratives of his games. Scott will talk about how Popcannibal convinced players to send a million uplifting messages. And Pearson will talk about valuing inclusive game design. Above: The spread of experience in the game industry for developers. “The point is bringing the community together in one place for like one really inspirational message that we can’t really accomplish in our very deep technical sessions,” Stern said. “So as long as we find a great story arc that makes sense for everybody, we’ll keep it up on the main stage.” The GDC also saw more interest in mental health issues and created more programming around that, Stern said. "
15,270
2,021
"The DeanBeat: Predictions for gaming in 2021 | VentureBeat"
"https://venturebeat.com/2021/01/01/the-deanbeat-predictions-for-gaming-in-2021"
"The DeanBeat: Predictions for gaming in 2021 Dave Baszucki, CEO of Roblox, talks about the metaverse. This past year was one of the most unpredictable in all of human history. The pandemic threw off our ability to predict what will happen in the game industry. It surely messed up my predictions about where the games industry would go. Game companies had a record year in 2020, but I never would have predicted that in March as the world seemed headed into an unprecedented global recession caused by the coronavirus. But people turned to gaming as a solace, and the whole industry not only survived. It prospered. Market researcher Newzoo is predicting that the entire game industry — PC, mobile, and console — will grow 19.6% to more than $174.9 billion in 2020.
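Newzoo’s figure implies a 2019 baseline that the article doesn’t state. As a back-of-envelope check (the 19.6% growth rate and $174.9 billion total are the article’s numbers; the derived baseline is my own arithmetic, not a sourced figure):

```python
# Sanity check on Newzoo's 2020 estimate: growing 19.6% to $174.9 billion
# implies a 2019 market of roughly $146 billion. The inputs are the
# article's figures; the baseline is derived, not sourced.
total_2020_billion = 174.9
growth_rate = 0.196  # 19.6%

implied_2019_base = total_2020_billion / (1 + growth_rate)
print(f"Implied 2019 market size: ${implied_2019_base:.1f}B")  # about $146.2B
```

That derived baseline is consistent with gaming growing by roughly $28 billion in a single pandemic year, which is the scale of growth the rest of the column assumes.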
With two new consoles launched in November, the industry will likely grow again in 2021, helping gaming stand apart from the crowd. This year, we saw a huge surge in venture capital investments in game studios, and a big wave of acquisitions as well, with Microsoft buying ZeniMax Media (Bethesda) for $7.5 billion and Embracer Group announcing it had bought 12 game studios in one day. A continuation of this growth is the easiest prediction to make. But the pandemic has changed the predictability of the industry in many ways. Esports will continue to struggle as it moves forward in a digital-only format, and it’s not clear when we will ever be able to go to live events again. Game conferences are one of the places where we can catch up on future trends, but so many of those events have been canceled or gone digital (such as our GamesBeat/Facebook Summit on Driving Game Growth and Into the Metaverse on January 26 to January 28). But we can expect games to continue to dwarf other forms of entertainment. Movie and TV production has been hobbled, and cinemas are all but shut down. With that limited horizon, I’m going to boldly make my bad predictions once again. For the usual comparison and embarrassment, here are my predictions for 2019, 2018, 2017, 2016, 2015, 2014, 2013, and 2012. At the bottom of the story, I’ve also given myself grades for the predictions I made a year ago for 2020. Lastly, I’ve been very publicly calling for ideas on social media about my predictions, and I appreciate all of the followers and readers who have pitched ideas to me. Thank you for your help, and Happy New Year! May it be a better one than the one we just endured. 2021 predictions Above: Tim Cook, CEO of Apple, is a big advocate for privacy.
1) Apple’s IDFA change will hobble targeted advertising for iOS games Apple is on a quest to put user privacy above all else. But that means it will no longer allow advertisers to extract user data to do targeted advertising. And that’s what Apple’s retirement of the obscure Identifier for Advertisers (IDFA) is all about, and the game industry is caught in the middle of this fight between Apple and advertising companies. Apple warned that the change in its opt-in rules for IDFA usage was coming and planned to launch it in mid-September. But Apple postponed the change after the ad, app, and game industries warned about the disruption it would cause. The reprieve was only temporary, though, and Apple is moving ahead in early 2021 with plans to require users to specifically opt in if they want to be tracked for advertising purposes. Without proper explanations for what it means for app pricing, most people are opting out. And that could cause a big disruption in iOS games, which generated perhaps a quarter of the industry’s $174.9 billion in 2020. Since the effect is so unpredictable, some mobile marketing companies are raising alarm bells, but game companies are saying it may not be a big deal. I predict it will have different effects on different players in the industry. Eric Seufert, monetization expert and the owner of Mobile Dev Memo, believes both Google and Facebook will be impacted. He thinks that those companies might better oppose Apple by noting how consumers could lose access to free apps and games that advertising allows them to enjoy. He thinks highly monetized strategy games, role-playing games, social casino games, and other titles that need to reach very specific customers will suffer, while casual games and games that naturally go viral on their own, without the need for targeted ads, should do well. He thinks there will be little impact on subscription apps and those that are only moderately dependent on ads or in-app purchases.
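Seufert’s argument, that ad-dependent titles suffer most when targeting disappears, can be sketched as a toy blended-eCPM model. Every number below (the targeted and contextual eCPMs, the 20% opt-in rate) is a hypothetical illustration of mine, not a figure from the article, from Apple, or from Seufert:

```python
# Toy model: blended ad revenue per 1,000 impressions when only an
# opted-in fraction of users can be served targeted (IDFA-based) ads.
# All eCPM values and opt-in rates here are hypothetical illustrations.

def blended_ecpm(opt_in_rate: float, targeted_ecpm: float, contextual_ecpm: float) -> float:
    """Average revenue per 1,000 impressions across opted-in and opted-out users."""
    return opt_in_rate * targeted_ecpm + (1 - opt_in_rate) * contextual_ecpm

# Before the IDFA change: assume every impression can be targeted.
before = blended_ecpm(1.00, targeted_ecpm=10.0, contextual_ecpm=3.0)
# After the change: assume only 20% of users opt in to tracking.
after = blended_ecpm(0.20, targeted_ecpm=10.0, contextual_ecpm=3.0)

print(f"eCPM before: ${before:.2f}, after: ${after:.2f}")
print(f"Illustrative revenue decline: {1 - after / before:.0%}")
```

Under these made-up inputs the blended eCPM falls from $10.00 to $4.40, a 56% decline, which illustrates why the genres that depend on reaching very specific customers are the ones Seufert flags as most exposed, while titles that never relied on targeting barely move.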
I worry it could trigger a recession in games and cause the fastest-growing part of the industry to stall. That said, I believe this is a very unpredictable but important issue that is far too opaque. For the opacity, I blame Apple. It might just come out and say it wants to change the way that games become successful on the app store, but that might mean more legal trouble for Apple. But one thing is clear. Ignore the IDFA change at your peril. 2) Epic Games may lose its legal case but win a wider war Above: Epic Games launched the Free Fortnite Cup with Apple as the villain. The sad thing about the IDFA is that Apple is judge and jury, and the industry can’t do much about it. And that reminds me of Epic Games’ quixotic antitrust case against Apple. Epic Games has assembled good evidence, and it is a bold strike to fight back against Apple’s control of mobile gaming. At the cost of getting its own Fortnite game booted off the App Store by Apple, Epic Games is doing a big favor for game developers in standing up to Apple and trying to get rid of its 30% royalty cut on all App Store sales. But antitrust law is antiquated, and it doesn’t necessarily protect a company like Epic Games when a platform owner like Apple decides to cut it off. If a judge decides that Epic has plenty of other choices where it can take Fortnite without much direct harm to consumers, then Epic Games could lose the legal case even though it has the moral high ground. But if Apple does everything it can to crush Epic Games as it has so far, Apple could lose the wider war. Regulators could change their policies or Congress could amend antitrust law and curtail Apple’s power. But the game industry could also aggressively seek to escape the platforms and the app stores that the tech giants run. They could support HTML5 games such as Facebook’s Instant Games or Snap’s messaging games or Nvidia’s GeForce Now that use the open web to circumvent the app stores. 
By creating downloadless game experiences with HTML5 or royalty-free cloud games, game companies could bypass the gatekeepers and escape the rules of the tech giants. The open web could be a viable path to an industry that doesn’t have to pay the platform tax. If regulators or the rest of the industry force Apple to become more open, then Epic will have accomplished its goals, even if it doesn’t reap benefits for itself. In the long run, the game industry and its platforms could become more open, and we could thank Epic’s Tim Sweeney for that. 3) Game IPOs will continue and change the game industry Above: Unity Technologies has 1.5 million monthly active users. Because gaming has done so well in the pandemic, more investors have noticed the industry and are moving money into it. One way is through initial public offerings (IPOs), and another is special purpose acquisition corporations (SPACs). Game engine maker Unity went public and is now valued at $40 billion, far more than the $17 billion value of the larger rival Epic Games at its last funding in 2020. Now Unity is too big to be acquired by most other game companies. Skillz went public via a SPAC, and Roblox and Playtika are expected to follow up with IPOs soon. These companies are exploiting a historic window of opportunity that will enable them to stay independent. And that means that they won’t be acquired anytime soon by tech giants or the biggest game companies. And from our first two predictions, we can understand some of the danger of companies becoming too big, either through their own great business ideas or by acquisitions. I don’t want to sound like a free-market-at-all-costs advocate. But if big game companies acquired a bunch of the big game developers, that could stifle innovation and creativity for a time.
With the IPO window open, there’s still a way for the public to get in on the action and reward the best game makers with a market value that is inflated in the public markets and makes it impractical for another big game company to try to take them over. That’s good, as I don’t want to see all the good game developers get acquired. IPOs are the market’s way of saying that if you create something great, you don’t have to sell it to a big corporation to make it pay off. You can sell it to all of us, and keep control of it. Don’t get me wrong. Money pouring into games instead of into other industries is a good thing. That’s happening on the level of game startups, and it’s good for the owners of mid-sized companies, and it’s good for the owners of the newly public companies. Hopefully, the markets will stay strong and it will be good for public stock investors as well. 4) Game streaming and movie streaming will get hitched Above: GamePass with xCloud. The big Hollywood companies — and their owners — are all pouring money into the streaming of movies and TV shows in a bid to ward off Netflix. But Netflix itself is moving into games, where engagement with an intellectual property can be far higher and more lucrative. We have seen Apple, Disney, NBCUniversal, HBO, and more move into movie streaming. At the same time, we’ve seen Google, Microsoft, Sony, Amazon, Nvidia, Shadow, and Facebook all move into the streaming of cloud-based games. Microsoft has launched its Xbox Game Pass subscription in the hope of becoming the Netflix of gaming. It may not make tactical sense, but big companies will see the strategy that they can pursue to become even bigger and lock up more users. In the words of former MIT Media Lab director Nicholas Negroponte, “bits are bits.” You can stream games or stream movies and make money from both.
Surely, someone in this vast marketplace will see that the convergence of technologies and the economies of scale could favor a company that brings game streaming and movie streaming under one roof. Disney could gain a lot of subscribers if it bought Electronic Arts and made its games available as part of the streamed Disney+ service. Strategically, such a service could be a way to aggregate consumers and concentrate media power into the hands of a single company with a single subscription. But this requires a skill that the biggest tech and streaming companies have not mastered: understanding gaming and allowing game companies to be their best. Let’s just hope that broadband technologies such as 5G networks will enable us to stream so much entertainment into homes. 5) The metaverse will begin to emerge as social gaming grows Above: Roblox is holding events related to Ready Player Two by Ernest Cline. Such a company as we’ve envisioned in the previous prediction could become so strong that it could launch the Oasis, a metaverse controlled by a single company, offering gaming, movie, TV, and other entertainment services so that you’ll never have to leave it. We desperately need a metaverse to escape the Zoomverse that we have all been stuck in during the pandemic. We need something that is more immersive and enthralling than video. Realistic or fantastic game worlds can deliver that. While Ready Player Two has been criticized by many observers, I would love to hang out in the worlds of J.R.R. Tolkien, as envisioned in Ernest Cline’s latest book. The metaverse should offer a rabbit hole of fun for everybody, whatever your particular preferences are. And there are many ways for it to emerge. Netflix could launch a vast game world full of its entertainment properties. Epic Games or Roblox or Microsoft’s Minecraft could create a metaverse for their fans. 
Every company that has amassed an audience has to make that audience more engaged and more social, and connecting fans in a world — preferably a game world — they never have to leave is my expectation for a real metaverse, not one that tries to trick us by being a metaverse in name only. A lot of companies will try and fail to create what author Neal Stephenson envisioned with Snow Crash back in 1992. I’d like to see it succeed soon (and that’s why we’re holding our own GamesBeat Summit: Into the Metaverse event on January 27-28). It will take years to build and perfect the metaverse, but let’s start it in 2021. I realize it will take time, but we need this. For our own mental wellness and every other reason as well. 6) God of War: Ragnarok will remind us of Sony’s greatness Above: God of War: Ragnarok is coming sometime. Hopefully in 2021. At The Game Awards, Sony showed a small teaser for the next big exclusive game for the PlayStation 5, and it will be God of War: Ragnarok. The sequel to 2018’s winner of many Game of the Year awards will hopefully debut in 2021. Cory Barlog, the game director at Sony Santa Monica, is busy at work trying to top his previous creation. But this game is much more than just a sequel. It’s a reminder that Sony believes in giant single-player games with a shitload of storytelling. Exclusives like God of War made the PlayStation 4 stand out and pull ahead of other consoles in the last generation, and Sony still has many studios working on such games for the PS5, which is off to a good start. Barlog took what might have been a weak God of War 4 and turned it into a father-son tale that was more widely appealing. This next God of War title will have a heavy burden. It has to show that big, exclusive single-player narrative games still make sense when triple-A titles are under attack from free-to-play games that last forever.
Sony has shown more than any other game company that it still believes in these narrative masterpieces in the face of competition from year-round franchises such as Call of Duty and FIFA.

7) Halo: Infinite will put Microsoft back in the game

Above: Halo Infinite is still getting ready for Master Chief.

We haven't seen shipment numbers yet, but it certainly feels like Sony had a more balanced launch for the PlayStation 5, with good exclusives such as Spider-Man: Miles Morales and Astro's Playroom to stir demand. Microsoft showed up with Xbox Game Pass and lots of compatible games, but the launch lineup was underwhelming. The missing piece of the console launch was Halo: Infinite, which got a poor reception in its preview. 343 Industries and Microsoft shook up the team's leadership and brought in former Bungie leader Joseph Staten. Now the game will ship in the fall of 2021, so long as there isn't another delay. Microsoft has always tried to align a good launch lineup with its console launches. It has also tried to launch new systems with new Halo games, but it has succeeded in doing that only once, with the launch of the original Xbox. With Xbox Game Pass available and a good strategy on backward compatibility, the company can focus on getting lots of units into the market even without a tent-pole title. By the fall of 2021, however, it will need a system seller to keep pace with the PS5. Titles from Microsoft's acquired studios will only begin to show up around that time, and the development job should become simpler, as making titles that run on both generations — Xbox One and Xbox Series X/S — gets easier with experience. I'm hoping Microsoft will use the time to double down on content for Halo: Infinite multiplayer, esports tournaments, and bigger marketing plans for what could be its biggest Halo yet.

8) Nintendo will unveil the Switch successor in 2021

Above: Nintendo Switch.
The Wall Street Journal reported that Nintendo was readying a successor to the Nintendo Switch in 2020. But Nintendo didn't announce the system, and it has focused on cranking up production of the Switch and the Switch Lite. At some point, however, sales of the PS5 and the Xbox Series X/S will start to eat away at potential Switch buyers. If we have something like an Electronic Entertainment Expo (E3) in 2021, that would be a good time for Nintendo to announce a next-generation system. Developers could get a head start on developing games for the system, and Nintendo could launch it in the spring of 2022, about five years after the launch of the original Switch. I'm not going on insider information, so this is speculation. But it would make sense for Nintendo to stay away from the launch cycles of its console rivals and pursue a strategy of being an alternative to Microsoft and Sony. Nintendo definitely found a broad niche with the Switch, a hybrid machine that is playable both on the TV and as a portable device. If Nintendo focuses on that niche and expands it further, it could withstand the forces around it such as cloud gaming, multiplayer universes, and mobile gaming.

9) Regulators will come after both games and game platforms

Above: Star Wars: Battlefront II.

Gaming has become front and center in the entertainment universe during the pandemic. But that means it will draw the attention of governments and regulators. China has cracked down on games with censorship and slowed the approval of new mobile games to a trickle. It is removing games that don't have proper registration. It has put limits on how much minors can play out of concerns about addiction. The rest of the world's regulators won't be as harsh, but they will pay more attention to games and their effects on society. I wouldn't be surprised if more countries ban loot boxes as illegal gambling or regulate them as entertainment for adults. The game industry is walking a delicate tightrope.
Campaigns such as #PlayApartTogether, aimed at getting people to social distance during the pandemic, are broadly appealing. But free-to-play games that have pay-to-win mechanics, aggressive monetization that can prey upon the young or people with addiction problems, privacy-invasive advertising, or gambling-like hooks could prompt regulators to crack down. That's all in the name of protecting people from game companies. But as we've seen with Apple and Epic's clash, regulators may also pay attention to the platforms that host games and whether they're enabling fair competition. And I think we would all like to see the platforms create an open metaverse to host the games of the future. If they don't, the crackdown will come. It's time for the game industry to get in front of this problem, aggressively.

10) Riot Games will establish Valorant as an esport, and other games will follow

Above: Valorant is a 5v5 shooter game.

The Counter-Strike franchise has been a staple of esports for two decades. But Valve hasn't invested much in the esports ecosystem, in contrast to Riot's efforts to establish a permanent esports ecosystem around League of Legends. Riot will now leverage that ecosystem to establish its second major esports game: Valorant. It still has a long way to go to catch on with the masses of gamers. But esports pros have been switching over to Valorant from CS:GO. Valve will have its hands full trying to reinvest in its game as a counterattack, but Riot is a far bigger company with 3,000 people. It can afford to invest in Valorant, but the key will be to bring new esports fans into the fold, rather than just stealing the audience from CS:GO. For the past few years, esports has grown dramatically in terms of its audience, but it still needs fans to spend money in order to generate profits the way that traditional sports teams can. That's hard to do while we're in a pandemic and physical events aren't possible.
But it is possible to grow a huge digital audience and ramp up the fan base for the day when physical events can happen again. I hope somebody knocks it out of the park, because we could sure use another billion-dollar esports game.

11) Game startups will continue to thrive and generate huge game ecosystems

Above: Griffin Gaming Partners' focus.

During 2020, more than 30 game-focused venture capital funds set up shop to invest in game companies. Game investment site InvestGame estimated that more than 100 game studios received funding in 2020. Combined with acquisitions, the deals led to more than $20.5 billion in transactions in the first nine months of 2020. When I started at VentureBeat 12 years ago and launched GamesBeat, there were no such venture capital funds. Traditional VCs slowly picked up game-savvy investors, and the specialty funds evolved out of that as game investors and entrepreneurs became successful and plowed the money back into new funds. March Capital is on its second game-oriented fund with a $60 million raise for its March Gaming Fund, and Griffin Gaming Partners has raised $235 million. That new capital has barely begun to work, even though it feels like a couple of fundings per week is a bit much. What I enjoy seeing is the economic benefit of the job creation that happens alongside these investments. If you look at Turkey, for instance, it had the core of a mobile game industry arise with the success of Peak Games and Gram Games. Zynga bought those companies for enormous sums, and some of the people who got their first jobs with those companies have now splintered off into their own startups. Game VCs are investing in those studios, and Turkey is now a hot spot for games, with a lot of economic goodness resulting from that. Countries such as the U.S., China, the United Kingdom, and Canada still have the strongest ecosystems, but there's no reason for them to monopolize all the jobs.
A strong game ecosystem can arise anywhere now, and the game VCs are the fertilizer for that growth. These small studios will grow, launch hits, and then get acquired by the big publishers over time, starting the cycle over again. Lastly, here is my scorecard for my 2020 predictions from a year ago.

2020 Scorecard

1) The Last of Us: Part II will be my favorite game of 2020

Letter grade: A+

This game didn't turn out anything like I had expected a sequel to be. Naughty Dog's The Last of Us was my favorite game of all time. But this title took what I liked about the 2013 hit — the characters and the special relationship between the father figure and the daughter figure of the previous game — and destroyed it. Then The Last of Us Part II proceeded with a script that was the logical consequence of Joel's decision in the first game to lie to Ellie about why she didn't need to be sacrificed to develop a plague vaccine. The game introduced us to new characters in the universe of the post-zombie apocalypse, and it told a story about revenge and redemption that I totally didn't expect. Even so, it was a moving story, with compelling characters, flawless execution on graphics and gameplay, and everything else I expect from a Naughty Dog production. It made a statement about violence through a story in an extremely violent video game. It was emotionally exhausting to play, and it wasn't what a lot of people consider to be fun. But I'm glad Naughty Dog created it and that I played it through with my daughter.

2) Sony's PlayStation 5 will be a smashing success

Letter grade: A

We don't yet know how many units Sony has sold of the PlayStation 5, which debuted on November 12. But we know that it likely outsold Microsoft's Xbox Series X/S. (It really is just a matter of which company did the best job lining up its supply chain.) If I were to gamble, I'd say that Sony won, with a better lineup thanks to the likes of Spider-Man: Miles Morales and Astro's Playroom.
While Microsoft made some big strides in matching Sony when it comes to first-party studios, Sony had its studios in place for a longer time. It managed to bring some big projects home at the same time as the launch, and that made a big difference in the eyes of gamers. Sony outsold Microsoft by more than 2-to-1 in the last generation, and it's going to be hard for Microsoft to steal away those gamers. This console war is Sony's to lose.

3) The Xbox Series X will also be a big success

Letter grade: B

Microsoft had everything going for it as it readied the launch of the Xbox Series X/S. It had two different consoles to target two different types of buyers: the hardcore spenders and the budget-conscious fans. It lined up a lot of new studios with acquisitions. But nothing made it across the finish line in terms of big games that could shine on the new console. The biggest game of all, 343 Industries' Halo Infinite, was delayed a year until the fall of 2021. Microsoft showed up without a major exclusive to make its console stand out. But it did show that its Xbox Game Pass subscription had grown quite valuable in the eyes of consumers, and it also made it easy for fans to upgrade to the new machine by making its Xbox One games compatible with the Xbox Series X/S. With those moves, Microsoft will hang on to its own hardcore base. Microsoft's fans will have to be patient as they await big titles and new games coming from Microsoft's acquisitions, however.

4) Fry's Electronics will shut down, and so will many video game stores

Letter grade: C

Fry's Electronics is definitely a dinosaur from another era. It should have become dominant in the age of big box retail, but the chain never expanded that aggressively. And yet somehow, the chain is holding on. The company closed another big store in Campbell, California, in addition to one in Palo Alto, California. But like other big box retailers, Fry's has been Amazon'd. It's like the walking dead.
But for some reason, it's still alive, prompting my C grade on this one. With the coronavirus still running rampant, big retail's days are numbered. Most shoppers report that Fry's stores are bereft of merchandise. They're big empty shells. It's sad, as Fry's Electronics, which grew up with groceries in one aisle and memory chips in another, is a Silicon Valley institution. I'm not expecting it to be around much longer. GameStop isn't faring much better, with 462 stores closed in 2020.

5) Nintendo may reveal new hardware, but won't ship it in 2020

Letter grade: C

I scored an F when it came to whether Nintendo would unveil new hardware to replace its successful Switch. But I scored an A in noting that Nintendo was not likely to launch said unannounced console in 2020. Nintendo should be in no rush. It launched its successful console-handheld hybrid Switch in March 2017. And now it can benefit from being off the cycle of Microsoft and Sony, which both launched new machines this year. While the PlayStation 5 and Xbox Series X/S were in short supply this holiday season, Nintendo probably cleaned up with plentiful supplies of the Nintendo Switch.

6) Amazon, Facebook, and Microsoft will join Google in launching cloud gaming services

Letter grade: A

Cloud gaming has come a long way since OnLive gave up the ghost. Google launched its cloud gaming service Stadia in November 2019. But it had a very slow launch, and that left the door open for rivals. In this prediction, all three of the rivals came through with their own cloud gaming service launches. Microsoft debuted Project xCloud; Facebook did a small launch of its cloud gaming service, which evolved from its acquisition of startup PlayGiga in Spain; and Amazon launched Luna. On top of that, Nvidia finally formally launched its GeForce Now cloud gaming service.
7) Big companies and VCs will continue to invest in game companies

Letter grade: A+

While the pandemic made 2020 miserable for many of us, gaming prospered. And game venture capital firms multiplied. March Capital launched a $60 million gaming fund, Griffin Gaming Partners launched a $235 million fund, and by the end of 2020 we had more than 30 game VC funds investing in games around the world. InvestGame, which tracks game investments, said more than 100 game studios were funded in the first nine months of 2020. And if you add the money raised together with the acquisitions, the total value of deals in 2020 exceeded $20.5 billion, according to InvestGame. I aced that prediction, but something else happened that I didn't expect. Gaming prospered in the pandemic as people turned to games as a distraction and for remote socializing. That opened a window for initial public offerings and SPACs (special purpose acquisition corporations) for game companies. Unity went public and saw its market value rise to $52 billion. Skillz went public via a SPAC, while Roblox and Playtika filed to go public.

8) Esports companies will continue to soar in viewers, valuations, and acquisitions — but not profits

Letter grade: A

This prediction proved accurate, but not in the way I expected. Esports companies were hit with a broadside when the pandemic arrived and led every company to cancel its physical events. But the esports industry recovered as the audiences shifted to watching matches online, in a digital-only format. Riot Games went ahead and launched Valorant in the pandemic, and it kicked off esports tournaments for the anti-Counter-Strike game by the end of 2020. Quantum Tech Partners still expects esports to generate a lot of deals and acquisitions going forward, and new esports holding companies have emerged to acquire esports properties. And yes, nobody is really bragging about the buckets of profits they're making from esports yet.
9) VR will have its biggest games yet, but will continue to struggle

Letter grade: A

We had some great games debut in 2020 on virtual reality platforms. Facebook launched its Oculus Quest 2 headset, and Valve came out with its Valve Index. Respawn Entertainment's Medal of Honor: Above and Beyond and Valve's Half-Life: Alyx were among the triple-A games that debuted for VR during the year. But the consumer market for VR continues to struggle. VR arcades were wiped out due to the pandemic. Facebook is doing its part by acquiring struggling VR studios and launching new hardware at lower prices. But the enterprise is keeping VR going, as full immersion is extremely valuable to companies that are trying to train and educate their personnel. Let's hope that VR hangs in there until the masses truly arrive.

10) Augmented reality glasses will become more practical

Letter grade: D

There was only modest progress on augmented reality this year, not enough to warrant a good grade on this prediction. Apple didn't announce its plans to go into this market. And while Facebook said it will launch AR glasses in 2021, it didn't actually introduce any new hardware in 2020. We still expect great things from companies that are investing in the tech, such as Qualcomm and Niantic. But 2020 wasn't the year for AR.

11) Regulatory forces will gather momentum

Letter grade: A

The Federal Trade Commission has been investigating game loot boxes and microtransactions for deceiving consumers and spawning addictive gambling-like behaviors. Regulators didn't act against game companies this year, but the concern about regulation is growing. Sheldon Evans, an assistant professor of law at St. John's University in New York, wrote a paper on how states should crack down on loot boxes as a form of gambling. The state of Washington has begun treating social casino games as leading to possible gambling addictions.
Congress went after the big tech companies for antitrust violations, but gaming could be the next target. Add to that China's own crackdown on making sure games are registered — and forcing Apple to delete tens of thousands of games that weren't in compliance — and you can see the effect that governments around the world are having on gaming.

12) Subscription gaming will gather steam

Letter grade: A

Microsoft made good headway toward its goal of creating the "Netflix of games." It has moved fast to add premium games to its Xbox Game Pass Ultimate subscription service. And Game Pass is turning into a juggernaut. In September, the company reported that subscriptions grew 50% to 15 million in six months. Apple Arcade now has a full year under its belt, and Stadia does as well. Consumers are going to have a lot of choices when it comes to subscription services, and that's a good thing.

13) Intellivision will be the wild card of 2020

Letter grade: F

Intellivision delayed the launch of its Amico family game console from October 2020 to April 2021. And with that, it skewered my prediction about a 2020 wild card launch. I'm still expecting good things from Tommy Tallarico's retro project, but it had better live up to its family focus more than ever, now that it is coming out after the PS5 and the Xbox Series X/S, which should be in plentiful supply by April.

GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. Join the GamesBeat community! Enjoy access to special events, private newsletters and more.
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,271
2,021
"Horizon: Forbidden West delay is official -- launches February 18 | VentureBeat"
"https://venturebeat.com/2021/08/25/horizon-forbidden-west-delay-is-official-launches-february-22"
"Horizon: Forbidden West delay is official — launches February 18

Horizon: Forbidden West.

Horizon: Forbidden West is coming to PlayStation 5 and PlayStation 4 on February 18. Sony confirmed the delay into next year during today's Gamescom Opening Night Live broadcast. You can begin preordering Guerrilla Games' open-world adventure on September 2. Horizon was only ever tentatively planned to launch in 2021. And when Sony debuted gameplay at a State of Play event over the summer, it was clear that the game could end up launching in early 2022. Then in July, I reported that the game was moving into 2022, and Bloomberg's Jason Schreier corroborated that with his own sources. With this new Horizon game, Guerrilla is picking up the ongoing adventures of its hero Aloy.
After exploring the post-apocalyptic world of Colorado and the wider Rocky Mountains in the original, the game is taking players to the West Coast. Also, today, Guerrilla launched an update for Horizon: Zero Dawn. This patch adds support for 60 frames per second. The game was previously locked to 30 on console. "
15,272
2,021
"Brendan 'PlayerUnknown' Greene interview -- Prologue is huge, but here's the vision for Artemis | VentureBeat"
"https://venturebeat.com/2021/09/04/brendan-playerunknown-greene-interview-prologue-is-huge-but-heres-the-vision-for-artemis"
"Exclusive: Brendan 'PlayerUnknown' Greene interview — Prologue is huge, but here's the vision for Artemis

An aerial view of Prologue's landscape.

Brendan "PlayerUnknown" Greene made news this week as he left Krafton, the company that published his tremendously successful PlayerUnknown's Battlegrounds (PUBG) battle royale game, and started a new studio in Amsterdam called PlayerUnknown Productions, funded by Krafton. He further revealed Prologue, a tech demo that his team will create in the coming years. Prologue will be a huge virtual world, with previously unfathomable dimensions of 64 kilometers on a side. That is as big as open world games get. But that's not all. In an exclusive interview with GamesBeat, Greene said that Prologue is just what its name implies. It is a single-player game where you can wander in the wilderness. But it is setting the stage for something bigger: Project Artemis. If Prologue is impossible to build with current game technology, then Artemis will be even more impossible, as an Earth-size virtual world. But Greene is still going to try.
Greene said that Artemis is a journey toward the sort of gaming experience that people have dreamed of for decades but have never been able to make before: a giant, deep world that exists not as a single experience but as a place where anything can happen. He said that half of this is technology: leveraging machine learning to create worlds and systems far too big and complex for humans to feasibly make. The other half is vision: finding ways to fill this colossal canvas with opportunities for interesting and emergent behavior.

"Prologue is really just a stepping stone," he said. "My biggest mission with PlayerUnknown Productions is to build an authentic and trustworthy studio. I want my team's name to mean something in a couple of years, and I think the only way to achieve that is to be open. To open the doors and say, 'This is what we're working on.' Prologue gives us that opportunity. We have funding to head toward Artemis and the big dreams, so we don't have to make money from Prologue. We have a chance to show off and bring people into the fold a lot earlier and build that relationship."

You could think of this world as a metaverse, the universe of virtual worlds that are all interconnected, like in novels such as Snow Crash and Ready Player One. But Greene doesn't really like to think of it that way. He calls it a world.

Emergence pic.twitter.com/dipZpUSeNY

— PLAYERUNKNOWN (@PLAYERUNKNOWN) September 3, 2021

The key word for Prologue is that it will be "emergent." It won't be pre-scripted, and it could very well be different every time you log into it. If players want to create a battle royale game in this world, they could certainly do so, Greene said. Prologue will draw on systems and concepts from existing survival games. But that's not all it's going to be.
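Emergent dynamics of this sort are often modeled with coupled population equations. The minimal Python sketch below is a discrete predator-prey loop in the spirit of the Lotka-Volterra model; the species, rates, and update rule are all invented for illustration and have nothing to do with PlayerUnknown Productions' actual simulation:

```python
# Toy predator-prey loop in the spirit of discrete Lotka-Volterra dynamics.
# Illustrative only: the species, rates, and update rule are invented for
# this sketch and are not taken from PlayerUnknown Productions' engine.

def step(deer, bears, growth=0.10, predation=0.002, death=0.05, gain=0.0004):
    """Advance the ecosystem one tick and return the new populations."""
    new_deer = deer + growth * deer - predation * deer * bears
    new_bears = bears - death * bears + gain * deer * bears
    return max(new_deer, 0.0), max(new_bears, 0.0)

def simulate(deer, bears, ticks):
    for _ in range(ticks):
        deer, bears = step(deer, bears)
    return deer, bears

# "Kill all the bears" by removing them: with no predation pressure,
# the deer trajectory ends far higher than it does with bears present.
deer_with_bears, _ = simulate(1000, 50, 100)
deer_no_bears, _ = simulate(1000, 0, 100)
print(deer_no_bears > deer_with_bears)  # True
```

Removing the top predator changes the herbivore trajectory without any scripting, which is exactly the kind of unplanned, systemic consequence emergent design is after.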
This dream of a giant world goes pretty far back for Greene.

"Since the first day I played DayZ, getting to the edge of the map and thinking, 'Fuck, man, why does it have to end here?'," he said. "Seeing some of the bigger worlds and thinking about what's possible, I loved the idea of making a space where a helicopter has real meaning. It's not just that it cuts the trip across the map down to a few minutes. It cuts it down from a few days."

He added, "This kind of desire to have a digital life is strong in a lot of gamers. Providing this space where it's a big enough world–I love Rust, but if you play on a busy server there are bases every few meters. I want a space where you don't discover a player's base for miles. Or when you do, it's a big settlement rather than a box. This stuff has always excited me, ever since I got back into gaming by discovering really open worlds. Red Dead Redemption is fantastic, but it's just a bunch of scripts. You go kill all the bears in a region, go away, come back, and they're all back again. I want to have meaningful life in the world. If you kill all the bears in a region, maybe the deer population explodes."

That last line is what he means by "emergent" gameplay. The world can have all kinds of unintended, unplanned consequences.

"We have the technology to do this. We can think about ecosystems. I want this world to have a life that isn't dependent upon the player," he said. "It exists without the player. It's a big ask. I know what I'm trying to do here is seemingly impossible, but it will be small steps. I think we'll get there. I've been thinking about this a long time. I want this open world. This space where you can just even go by yourself and discover places in it. Just go hiking. I've had this dream for quite some time."

Cultural capital

Above: Brendan Greene is the creator of PlayerUnknown's Battlegrounds.

Before we dismiss this as pure fantasy, it's important to remember what Greene has pulled off.
It's a bit sad for Krafton that Greene decided to leave the company, as Krafton just raised $3.75 billion in an initial public offering at a $20 billion valuation. But if Greene creates something valuable again, then at least Krafton will own a piece of it. Greene said he was thankful to fans and to Krafton. I asked him why he left.

"It was more that I just wanted a chance to do things by myself. I wanted the buck to stop here with me. What I'm doing is a little crazy," he said. "I love Krafton and the opportunity they've given me, but I wanted a chance to strike out on my own and leave a legacy. Ultimately, I want PlayerUnknown Productions to be in a state where I can leave it and they can still release games without me as a trusted development studio. I hope. But it was more that I just wanted to strike out on my own and make my own name."

Artemis is about the creation of something that isn't authored or controlled, but rather an organic collaboration between humanity and machines to make a world from scratch, Greene said. No other studio will get the chance to create something this crazy, he said. Greene's team has been working together for some time. Krafton agreed to make an unspecified investment in Greene's company.

Greene is famous as the creator of the battle royale mode for first-person shooter games, starting six years ago with a mod for the military shooter Arma. Greene's work has led to a huge change in the first-person shooter genre and generated more than $5.1 billion in revenues for the mobile version of PUBG alone, according to measurement firm Sensor Tower. The PC and console versions have generated billions more in revenue, with copies sold surpassing 70 million in mid-2020. Greene believes the industry has become risk-averse, with skyrocketing budgets, the elimination of many mid-sized studios, and lots of sequels. But he believes battle royale showed that people are hungry for genuinely new experiences.
It gave Greene the cultural and financial capital to take a bigger risk than battle royale and possibly transform the industry again. Greene said he has held "this deep fascination with sandbox-style, open-world games, and the freedoms that they give their players." But he said he always wished they were a bit bigger. At the new studio, his team wants to create realistic sandbox worlds on a huge scale, with thousands of players interacting, exploring, and creating. He said he is following in the footsteps of other great open-world developers, and his dream is not to create a game but a world. That's what he wants to do with Prologue.

"We are entering a new space race for the metaverse. Companies are taking positions but not yet knowing precisely where they want to go. But the interesting thing is that all these investments, and both the failures and successes of them, will get us closer to the dream (or the nightmare) of creating fully livable alternate realities," said Ivan Fernandez Lobo, organizer of the Gamelab conference in Barcelona. "I don't necessarily think of the metaverse as real-world scale worlds to explore (I am more excited about other narratives and possibilities, like the ability to share dreams of all sizes), but we are still in 2021, and I take Brendan's vision and ambition very seriously. He is young, brave and bright enough to make it happen, or at least to open amazing paths for other explorers to follow."

The team

Above: A birds-eye view of Prologue.

PlayerUnknown Productions has about 25 team members in Amsterdam. They have been working together for some time, and Greene negotiated with Krafton to take that team with him. He also gets to keep the PlayerUnknown name, while Krafton keeps PUBG.

"I take my intellectual property, Prologue, my work, all of that with me, and the team. That was very important to me," he said. "The last few years we have battled to find the right team, and now we have a good team that believes in what we're doing.
A lot of time was spent making sure we could take that away with us.” Some of the team is working on a proprietary game engine, as Greene did a lot of research on the technology required to make the game. He found that nothing commercially available was suitable. And so the team is building that game engine, which is based on an Entity Component System (ECS) architecture, itself. “We haven’t been able to be very open about what we’re doing, especially through recruitment channels,” Greene said. “Part of opening up, hopefully, is that we’ll start attracting the kind of people who are crazy enough to take on the challenge we’ve taken on ourselves.” Thanks to COVID, the team has to work remotely for now. “Maybe we’ll have to enable a bit of remote work. Right now everyone’s working from home, but even when we come back we’ll be doing 50-50,” he said. “We can do that. We’re in the right industry. It makes that relatively easy. But in Europe, it’s not so easy working from home because the homes are a lot smaller. A lot of people prefer being in the office.” Greene realizes he’ll have to hire more people, perhaps taking the team up to 50 people. “We don’t see the need for a big team because of the tech we’re building,” he said. Prologue Above: Prologue is Brendan Greene’s next project. I asked Greene how his team settled on a 64-kilometer by 64-kilometer world. The terrain of Prologue will resemble the modern forests of Europe. And you can see in some of these images that there are modern battle tanks and elegant sanitoriums. There’s a coal mine that should be the size and scope of a coal mine in the real world. The weather system should be realistic. “This was more to see how far we could push the tech. The game mechanics should work on any scale world, or at least that’s the idea,” he said. Prologue requires that the company build the technology required to generate vast worlds, and “in that sense Prologue is intended to serve as a simple introduction to an early iteration of our technology,” he said. 
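The Entity Component System mentioned above is a widely used engine architecture: entities are plain IDs, data lives in per-component tables, and systems operate on whichever entities carry the right components. This is a minimal illustrative sketch in Python; the names and structure are invented for illustration and are not taken from the studio's engine.

```python
# Minimal entity-component-system (ECS) sketch.
# Entities are just IDs; data lives in per-component tables;
# systems iterate over entities that have the components they need.

class World:
    def __init__(self):
        self.next_id = 0
        self.components = {}  # component name -> {entity_id: data}

    def spawn(self, **components):
        """Create an entity from keyword-named components and return its ID."""
        eid = self.next_id
        self.next_id += 1
        for name, data in components.items():
            self.components.setdefault(name, {})[eid] = data
        return eid

    def query(self, *names):
        """Yield (entity_id, data...) for entities holding all named components."""
        tables = [self.components.get(n, {}) for n in names]
        ids = set(tables[0]).intersection(*tables[1:]) if tables else set()
        for eid in sorted(ids):
            yield (eid, *[t[eid] for t in tables])

def movement_system(world, dt):
    """Advance every entity that has both a position and a velocity."""
    for eid, pos, vel in world.query("position", "velocity"):
        pos[0] += vel[0] * dt
        pos[1] += vel[1] * dt

world = World()
player = world.spawn(position=[0.0, 0.0], velocity=[1.0, 0.5])
tree = world.spawn(position=[10.0, 10.0])  # static scenery: no velocity
movement_system(world, dt=2.0)
print(world.components["position"][player])  # [2.0, 1.0]
```

The appeal for a game like Prologue is that systems touch tightly packed component data rather than deep object hierarchies, which tends to scale well when a world holds very large numbers of entities.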
In Prologue, players will find their way across a “runtime-generated wilderness” and use tools and gather resources to survive on a journey where harsh weather is constantly a problem. “There will be no guidance, no path for you to follow. Just a world, a spot on the map to reach, and the tools needed to get there,” he said. Because Prologue is a tech demo, Greene said the company will use a “pay what you want” model for it. If you like what you see and enjoy the experience, you can pay to support the team. “This is just the start, a small glimpse of the technology that will eventually power a much more extensive experience,” Greene said. “Prologue is our first step on a multi-year journey towards creating what we hope will be rich, interactive, open worlds. We’re thinking of truly massive worlds. That brings a whole load of questions alongside it. But the idea behind the technology is that it’s scalable. It shouldn’t matter what size of world you want to create. The technology should scale with it. That’s our main focus.” The technology Above: Prologue is going to be as much as 64 kilometers by 64 kilometers of game space. In a traditional game, Greene would need hundreds of artists to create the detailed art in a map this big. If you downloaded it, it would be ridiculously big. The team concentrated on this problem, thinking about how to bring AI into the solution alongside an easier programming model. This AI won’t replace human designers and artists, but it will make the task more manageable. The key to making the whole project work is to find a balance between game design, machine learning, and user-generated content. The process of creating this world is a team effort. And that’s why Greene is trying to be transparent about the process. AI is going to be key to automatically creating a lot of the content. “We figured out how to achieve this massive world. We broke it down into some pillars,” Greene said. 
“The first was building a terrain tool to generate and fill massive worlds with content. The second thing would be how you fill that with life, using AI, giving the player real things to do, and making the world feel alive. And then finally, how do you dump 100,000 players on top of that?” At first, Greene thought he would build a new game technology first, and then add a game on top of it. But he realized how hard it would be to do the game in addition to the tech. So he decided to build the tech first, make it free, and then transparently take feedback from players. “Building a game on top of new tech is very hard. Prologue was always meant to be an example of our terrain tech,” he said. “Here, we can generate these worlds. How do I leverage that into some kind of game to show it off? Prologue would get you across the map. We put in some equations there, put in the weather, and you just had to get across the map. A simple sandbox game, single-player, just leveraging the tech.” Then he realized, after talking with the team, that this wouldn’t turn out to be a good game. So the team stripped Prologue back to being a tech demo. “The more I got experience as a producer, the more I realized this wasn’t really a good plan,” Greene said. “After talking it over with the rest of the team, we stripped it back to now being a tech demo. We’ll release it for pay-as-you-want. If you like it, pay us some money, but either way you can try it out and enjoy it. That’s our plan.” Since the engine isn’t ready yet, Greene said the team is using Unity to create initial game assets. Graphics processing units (GPUs) will be needed not just for graphics but non-graphics processing for increased simulation detail. Many players should be able to share one world. And the functions in the game should be able to morph at any time, allowing for new gameplay and mods. “We don’t have our engine ready yet. 
We need to leverage Unity’s tools and our own tools to start testing our tech,” Greene said. “This is a big and a long project. The only way to achieve it is by iterating. That’s what we’re trying to do.” Greene admired Unity’s Megacity demo, and it inspired him to use its tools. The experience will run on Windows, but the team will attempt to add other platforms over time. The minimum hardware system would likely include Windows 10, a four-teraflop GPU, four gigabytes of video memory, a four-core CPU, eight gigabytes of RAM, and a variety of GPU vendor types. “Holy hell, we can really make these massive worlds. Now my dream can be made,” he said. Artemis Above: Can machine learning generate maps? Or an entire world? Artemis is an open world, undirected sandbox experience that uses machine learning to produce worlds and systems bigger than any other game ever made. It is an attempt to recreate the world as closely as the team can, Greene said. And it’s going to be far bigger than Prologue. You can think of it alongside other open world games like Rust, DayZ, Arma II, No Man’s Sky, The Legend of Zelda: Breath of the Wild, Eco, Valheim, Second Life, Eve Online, and World of Warcraft — all titles that have pushed the boundaries of sandbox-style gameplay over the years. Players will explore a procedurally generated world, gathering resources and building things. Rust, created by Garry Newman and Facepunch Studios, is perhaps the closest thing to this idea. It also sounds a lot like Nvidia’s Omniverse, which is targeted at engineers doing physically accurate simulations, as well as Improbable’s engine for massive simulations. The biggest difference should be the scale. In contrast to Prologue, Artemis will likely be full of different biomes, terrain types, players, animals, plants, and more. Players will be able to build not just camps, houses, or bases. They can build whole cities, societies, and civilizations. 
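Procedurally generated worlds of the kind described here are usually built on seeded noise: a deterministic function maps a seed plus world coordinates to terrain, so a vast map never has to be hand-authored or shipped as data. This is a toy sketch of that general technique using simple value noise; it is purely illustrative, as the studio's actual generation methods are not public.

```python
import math
import random

def value_noise(seed, x, y, scale=16.0):
    """Deterministic height in roughly [0, 1] from a seed and world coordinates.
    The same seed always regenerates the same terrain, so only the seed needs
    to be stored or sent, not gigabytes of authored map data."""
    def lattice(ix, iy):
        # Derive a repeatable pseudo-random value for each lattice point.
        return random.Random(hash((seed, ix, iy))).random()
    gx, gy = x / scale, y / scale
    ix, iy = math.floor(gx), math.floor(gy)
    fx, fy = gx - ix, gy - iy
    # Smoothstep weights for bilinear interpolation of the four corners.
    sx, sy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)
    top = lattice(ix, iy) * (1 - sx) + lattice(ix + 1, iy) * sx
    bottom = lattice(ix, iy + 1) * (1 - sx) + lattice(ix + 1, iy + 1) * sx
    return top * (1 - sy) + bottom * sy

# Any point of a huge map can be sampled on demand, in any order.
h1 = value_noise(seed=42, x=12_345.0, y=54_321.0)
h2 = value_noise(seed=42, x=12_345.0, y=54_321.0)
```

Real terrain pipelines layer many octaves of noise and feed the result through erosion and biome rules, but the core property is the same: the world is a function you evaluate, not an asset you download.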
“We want to give people a new place to live, because this one has some issues,” he said. Artemis represents the next leap forward in technology, and it could take five years to get there. Maybe longer, Greene said. I asked Greene what the difference was between Prologue and Artemis. Artemis should be a spherical place, like a planet, with a radius exceeding 6,000 kilometers. But that’s subject to change. “We’re using Prologue as a testbed for the game elements of the world. We can test out an electrical system,” Greene said. “We can put in a better animation system. All these things will be spec’d out first in Prologue, made to work, and then when we come to Artemis we at least have the logic figured out and we can start programming it into the engine. It’s like what ArmA was for battle royale. It was a place for me to test, iterate, get a final game mode, and then be able to say, ‘Okay, it works.’ That’s what we want to do with Prologue.” Prologue will be a single-player experience, while Artemis will be massively multiplayer. “Artemis probably won’t be worlds generated at runtime. Prologue will be; every time you press play you’ll get a new world,” Greene said. “It will hopefully be a different enough terrain that it should feel different every time. With Artemis we won’t have that. We’ll probably have static worlds that you can come and enter. Prologue is on a much smaller scale as well. It’s maybe 32 kilometers by 32 kilometers or 64 kilometers by 64 kilometers, whereas Artemis will be planet-scale, hopefully. A smaller planet, but that kind of scale.” While Prologue will be big, Greene said he didn’t think it would be a very interesting game for most people. “I think it’ll be quite boring. Light fires, board up windows, keep yourself warm against the constant storm where cold weather will knock you out,” he said. 
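The jump in scale between the two projects is easy to quantify. Taking the figures above at face value, a 64-kilometer by 64-kilometer Prologue map versus a sphere with a 6,000-kilometer radius:

```python
import math

prologue_area_km2 = 64 * 64  # flat 64 km x 64 km map
artemis_radius_km = 6_000    # "a radius exceeding 6,000 kilometers"
artemis_area_km2 = 4 * math.pi * artemis_radius_km ** 2  # sphere surface area

print(prologue_area_km2)  # 4096
print(round(artemis_area_km2))  # about 452 million square kilometers
print(round(artemis_area_km2 / prologue_area_km2))  # roughly 110,000 Prologue maps
```

In other words, even at the low end of "exceeding 6,000 kilometers," the Artemis surface would hold on the order of a hundred thousand Prologue-sized maps, which is why the terrain technology has to be generative rather than authored.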
“But again, it’s more to show a consistent world with logical points on it where you can do things, and this is systemic gameplay.” Artemis could include conflict, particularly over limited resources. But it’s not required for players who would rather be in a peaceful world. Combat will be onerous, and the designers intend to reward players for cooperation. And while it could take five years to build, Greene said all of this is subject to change, as it’s a joint project with players. Massive challenges Above: A military base in Prologue. Greene acknowledged that creating such worlds presents challenges. “One of the more significant is that we simply don’t have a way to fill these massive spaces with content, assets, game mechanics, locations, and similar things in a reasonable amount of time,” he said. “Realistic open worlds take a great deal of time and effort to produce. And so this was the first issue that we chose to tackle.” The key to making things bigger than humans can create on their own has always been to get machines to pitch in and help. And he plans to use deep learning neural networks, or artificial intelligence, to help generate the massive realistic open world. The game should be a new experience each and every time you press play, he said. “It’s this breakthrough that we hope will start pushing video game worlds to the sorts of scale that would lend weight to the idea of ‘you see that mountain, you can climb it,’” he said, a reference to a comment by game designer Todd Howard about The Elder Scrolls V: Skyrim that became a meme. He thinks that finding a hidden corner in a vast space will have real meaning when thousands of other players haven’t passed through it in the last hour. “This is what I and my team have been working on,” he said. “We’re developing the technology required to enable massive scale within open-world games. It’s been a fascinating project to date. 
And soon, we’ll be ready to show off some of what we’ve achieved.” Staffing will be a challenge too, as the early work will focus on proving out the technology with Prologue; many developers would rather start working on a game first. The multiplayer challenge Above: A sanitorium in Prologue. Multiplayer is also going to be a huge challenge, and the company has left that as its final problem to tackle. “We’re going to be leaving multiplayer until the end, because multiplayer tech is coming out every year,” he said. “The further we push that out, the more chance there will be to leverage some exciting new tech. Already there’s stuff out there being developed for high player counts, to get across that thousandth player. If we’re doing the worlds we’re doing, the size we’re doing, we have to have massive numbers of players.” It will be tough to find the right gameplay. The goal is to create non-player characters (NPCs) that behave more like real players than AI characters. “I’m making a sandbox, more like Rust than a single-player experience,” he said. “That’s hard for a lot of people in the game industry. They want to make games, and I don’t want to do that. I want to make a big world where you can make all the games.” If players just want to use a piece of the world for their own battle royale games, that would be fine. “I want to start with something basic. Battle royale, king of the hill, capture the flag, these very simple game modes where I need a weapons pack, some basic game mechanics, and then you can just put that anywhere on the map,” he said. “Even with the ArmA III battle royale code, there was one mission file at the start, and you could change the size of the map and all this. It would fit any size of world. I want to keep that mentality. You can give your friends a set of paintball guns and a location and you can figure out how to play.” And it shouldn’t be really hard to create games to play within Artemis. 
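A standard technique for the player-count problem Greene describes is interest management: the server buckets entities into spatial cells and only relays updates from cells near each player, so per-client bandwidth stays flat as the population grows. This is a simplified grid-based sketch of the general idea, not PlayerUnknown Productions' networking code; the cell size is an invented tuning value.

```python
from collections import defaultdict

CELL_SIZE = 500.0  # meters per grid cell (illustrative tuning value)

def cell_of(x, y):
    """Map a world position to its integer grid cell."""
    return (int(x // CELL_SIZE), int(y // CELL_SIZE))

def build_grid(positions):
    """Bucket entity IDs by grid cell. positions: {entity_id: (x, y)}."""
    grid = defaultdict(set)
    for eid, (x, y) in positions.items():
        grid[cell_of(x, y)].add(eid)
    return grid

def interest_set(grid, x, y):
    """Entities in the 3x3 block of cells around (x, y) -- the only ones
    whose updates need to be sent to a player standing there."""
    cx, cy = cell_of(x, y)
    nearby = set()
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            nearby |= grid.get((cx + dx, cy + dy), set())
    return nearby

# With 100,000 players spread over a 64 km x 64 km map, each client only
# hears about the handful of players in its neighborhood, not all 100,000.
positions = {1: (100.0, 100.0), 2: (600.0, 100.0), 3: (30_000.0, 30_000.0)}
grid = build_grid(positions)
print(interest_set(grid, 100.0, 100.0))  # {1, 2}; player 3 is far away
```

Production systems add hysteresis, priority tiers, and server-side sharding on top of this, but the core trick is the same: never simulate or transmit what a given player cannot see.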
“I want to give the player as much freedom as possible. I want to liberate people from coding,” Greene said. “I want them to be able to make a game mode without having to worry about modding.” Is Artemis the metaverse? Above: The metaverse is the next phase of the internet. Asked if Artemis is the metaverse, Greene said it could be confusing to think of it that way because the metaverse is so vague. Greene replied, “I don’t want to say that word. I’ve been thinking long and hard about this. I watched Ready Player One and I thought, ‘Holy shit, that’s what I want to make.'” That doesn’t mean he wanted to copy the movie, or the book by Ernest Cline. Rather, he said the world of the film was more like a validation of an idea that he had been thinking about for a long time, rather than a source of inspiration. “Ever since having these big open worlds, I thought, ‘But then how do you make the metaverse?’ I’m just building a layer. I’m building this one big open world that you can all come in and fuck around in. If that happens to be a layer — we’re doing it fully open and making sure the protocols for files and everything are all open.” Above: A scene from Steven Spielberg’s 2018 movie Ready Player One. Rather, Greene thinks this is more like the next version of the internet. (A lot of metaverse advocates say it will be the next version of the internet). “It won’t be Tim Sweeney’s metaverse or Unity’s metaverse or PUBG’s metaverse. They’re all separate universes. Even the blockchain stuff going on right now, I’m not sure it’s the metaverse,” Greene said. “Maybe it’s a part. They made a good point in that talk about how the metaverse has to be open. It has to be a protocol. Ultimately, with my world, I want you to be able to access it from a browser.” Greene considered whether Google Stadia’s technology was a way to scale up the players and get to 100,000 players in a single shard, or game space. 
“If you simulate everything on a server, you don’t have to worry about the stupid peer-to-peer stuff that holds up massive multiplayer,” he said. “I can simulate 100,000 people on a server — but then that restricts people through bandwidth. That’s not good. If you make an open world for everyone, everyone has to be able to enter regardless of device. It has to be accessible via web browsers. I’m building a world. If that world happens to be part of the metaverse that’s great. But it’s not my first thought.” I asked if it would have blockchain tech or nonfungible tokens (NFTs), which enable new business models like selling rare items or transferring digital assets from one world to another. He said it was possible he would use such technology, because if he’s going to make a world, he has to give people a way to earn a living in that world. He noted that EverQuest had one of the biggest economies in the world at one point, and he admired Axie Infinity, which lets many players earn a living by playing and earning rewards. But Greene is wary of scammers in the space and will only move forward with it as NFTs become tangible and scam-free. The story Above: Inside a sanitorium in Prologue. The story behind the world is still pretty much a secret. The world will be a generated world, and there’s a story to it, but Greene isn’t revealing anything about it yet. He wants people to think about the tech problem at hand first. “How do you build a digital massive open world and give people the freedom to do whatever the hell they want in it? You can give people a square chunk of land and let them manipulate that how they want,” he said. What level of user-generated content the world will have still has to be worked out. But Greene definitely believes in the creativity of users. Perhaps the users will build towns, and they can keep expanding the borders of the towns. You should be able to spin up your own private world and design it as you wish, he said. 
He wants it to be something like the Holodeck in Star Trek, where you can create any kind of lifelike simulation. “I want to build a Holodeck,” Greene said. “Anything should be possible on the Holodeck.”
The DeanBeat: Interpreting what Epic v. Apple means for games | VentureBeat
https://venturebeat.com/2021/09/17/the-deanbeat-interpreting-what-epic-vs-apple-means-for-games
Epic Games launched the Free Fortnite Cup with Apple as the villain. A federal judge last week issued a landmark ruling in Epic Games’ antitrust lawsuit against Apple over how Apple runs the App Store and charges a 30% fee for all developers. I offered my own interpretation of the ruling, in which Apple won nine major points under federal antitrust law and Epic won only one under California antitrust law in its struggle to get Fortnite back into the store and curb Apple’s power. I’ve solicited more opinions from developers, payment companies, antitrust attorneys, and other experts. These sources have helped identify key questions in the ruling, the depth of Apple’s legal victory, and opportunities for Epic to turn the case into a larger defeat for big tech companies. 
The results are more complicated than we first thought, but some important details have turned up that I haven’t mentioned yet. The 180-page order from U.S. District Court Judge Yvonne Gonzalez Rogers in Oakland, California, held that Apple violated California’s laws against unfair competition when it came to a narrow but important matter of “anti-steering rules.” The judge ruled Apple can’t force developers to be silent when it comes to telling consumers inside a game that there are better digital item deals outside the App Store. Still, she ruled in favor of Apple on all other important counts in the complicated antitrust lawsuit. While it stuck to the law, the judge’s ruling is full of observations that clearly showed she didn’t care much for either Apple or Epic Games, said Richard Hoeg, a partner at Hoeg Law in Michigan and a frequent commentator on YouTube about legal cases involving games, in an interview with me. The limits of antitrust After the ruling came out, plenty of commentators were saying that it’s time to change antitrust law to deal with big tech companies. In looking at the decision, lawyers offered the reminder that antitrust law doesn’t protect competitors. It protects consumers. If there isn’t harm to consumers, it’s hard to prove a tough competitor is guilty of monopolistic behavior. “What we focus on in antitrust is if they engaged in some conduct that enables them to charge some price that is above what would exist in a competitive marketplace,” Jonathan Lewis, an antitrust/competition partner at Lowenstein Sandler, said in an interview with GamesBeat. “This is about who is controlling the relationship and the money flow,” Lewis said. “I think the question is whether Epic bit off more than it could chew. 
Fairly often in antitrust cases, where you have somebody challenging the way somebody does business, people sometimes swing for the fences. That doesn’t mean they’re necessarily wrong. It’s just that’s the way they think it’s best to pitch the case.” Hoeg said antitrust cases are difficult to both prove and predict. “Antitrust law from its very inception has been very vague and amorphous,” Hoeg said. “It was designed to be a catch-all from really 1890 onward. I don’t blame anybody for thinking either side was going to win because ultimately, a lot of antitrust law comes down to what’s in the judge’s mind. What is the relevant market? Does this go too far? Does it not? Antitrust is one of those areas that I think is ambiguous enough in its application that it really does live in the minds of the judges.” Apple’s landslide victory? Above: Apple’s headquarters sure takes up a lot of space. The judge found that Epic overreached in its antitrust claims, and she held that Apple wasn’t an illegal monopoly. “Given the trial record, the court cannot ultimately conclude that Apple is a monopolist under either federal or state antitrust laws,” the judge said. She said Apple had earned its success, and that wasn’t illegal. Apple acknowledged in testimony that it tried to make it more attractive for consumers to stay on its platform, or making it “stickier.” But the judge said that is “not necessarily nefarious.” Its market power may flow from a good relationship with consumers who like Apple’s products. “This was a mixed decision in which there is no clear winner or loser,” said Jennifer Rie, senior litigation analyst at Bloomberg Intelligence, in an email. “My view is that the decision is better for Apple than for Epic Games. This is because the judge ruled that Apple was not a monopolist and didn’t violate federal antitrust laws. 
Therefore, she did not grant the primary remedies Epic was seeking, which were fairly drastic business model changes — to require Apple to allow other app stores on iOS devices, other than Apple’s own App Store, and to allow app developers to use their own payment processors within their apps.” Parker Miller, a partner and antitrust attorney at Alston & Bird, also said in an email that he didn’t see any winners. The court didn’t declare Apple a monopolist, but the one part of the case that it lost could lead to a loss of revenue and a loss of control over the monetization of apps. And he noted that the court might find Apple could be proven a monopolist, given different evidence than was presented by Epic. Today’s ruling isn't a win for developers or for consumers. Epic is fighting for fair competition among in-app payment methods and app stores for a billion consumers. https://t.co/cGTBxThnsP — Tim Sweeney (@TimSweeneyEpic) September 10, 2021 Tim Sweeney, the CEO of Epic Games, tweeted that Epic did not win. And Apple is certainly celebrating. “We are very pleased with the Court’s ruling and we consider this a huge win for Apple. This decision validates that Apple’s ‘success is not illegal,’ as the judge said. As the Court found ‘both Apple and third-party developers like Epic Games have symbiotically benefited from the ever-increasing innovation and growth in the iOS ecosystem,'” said Kate Adams, the general counsel of Apple, in a statement. “The Court has confirmed, after reviewing evidence from a 16-day trial, that Apple is not a monopolist in any relevant market and that its agreements with app developers are legal under the antitrust laws. Let me repeat that: The Court found that Apple is not a monopolist under “either federal or state antitrust laws.” The relevant market Above: The bulk of Apple’s App Store revenue comes from games. 
This was a narrow legal victory in some respects, as the judge noted that if Apple had a 65% market share, it would have been declared a monopoly on its face. It only has 55% of mobile game purchase revenues, the judge decided. Significantly, the judge decided the “relevant market” for antitrust evaluation was the mobile game in-app purchase market. Epic wanted the App Store itself to be declared the relevant market, in which case Apple would have automatically been declared a monopoly because there was so little choice in either Android or other platforms if you really wanted to reach Apple’s one billion lucrative users. Apple attorneys had wanted the judge to consider the wider PC and console game sectors to be part of the relevant market so that it could show that there was plenty of choice for both developers and consumers if they didn’t like Apple’s rules. The judge did consider the Nintendo Switch and the upcoming Steam Deck to be possible competitors in mobility devices, but she rejected Apple’s argument about the wider market, saying that mobile gamers were unique and behaved in different ways than the wider market of players. Gonzalez Rogers also found that, without evidence of excessive and illegal monopoly power, many of the allegations didn’t stick. She found that Apple’s reasons for disallowing sideloading of apps to be plausible and not “pretextual,” meaning Apple’s rules were not just a pretext for stopping developers like Epic from sidestepping Apple’s own pricing, payment systems, and commission. Epic argued that Apple had poor security and should have let developers tell players they could go off the App Store to get cheaper digital items elsewhere. Apple said it prohibited this for security reasons. Epic asked why Apple allowed sideloading and alternative payments on the Mac. 
The judge noted that Epic cannot just commandeer the App Store, which Apple invested heavily in during the early days of the iPhone, as a kind of public commons or a public utility (dubbed an essential facility in antitrust law, like a bridge that everyone must use to get across a river). That would be like the U.S. government taking away Elon Musk’s SpaceX rockets and giving them to NASA, some observers said. “What does it say to innovators, if we’re going to say now that you’ve been successful, you have to change. It’s not like they came up with this 30% fee commission from thin air,” Lewis said. “It’s the standard. Are we going to punish innovators for their success and require them to turn over access to what they’ve created because people don’t like to pay what they’re asking?” Companies should expect to gain rewards from their intellectual property investments, the judge said. “I believe that the most significant parts of the decision were No. 1, that the judge rejected the definition of the market set forth by Epic, which tried to prove that the market included only iOS devices; and No. 2, the judge accepted Apple’s procompetitive business justifications for maintaining a closed system [a walled garden], such as privacy, safety, and security,” Rie said. “Epic tried to show that these were a sham, but the judge disagreed. This may provide a solid defense for Apple in future matters.” The judge’s subversive messages Above: Tim Cook of Apple testified at the antitrust trial. Here’s where the judge went a bit rogue. Apple’s Craig Federighi said the security on the Mac was not good enough, and that the App Store was more secure because of the prohibitions. But the judge found this “late admission” — why didn’t Apple say Mac security wasn’t good enough before the trial? — to lack credibility. “There are a lot of comments from the judge about how she is uncomfortable with the way Apple does business,” Hoeg said. 
“You could look at this ruling as a roadmap for how Epic or the next person who sues Apple could win.” Still, the judge found that Apple’s security argument was a valid reason for keeping developers inside its walled garden, and not just a pretext to block competition. In another dig at Apple, she suggested there were possibly less draconian security methods that Apple could use that would sit better with developers, such as an “enterprise model” or “notarization model” of security where developers who were trusted could be allowed more freedom. The judge hinted that this was a common ground where developers and Apple could negotiate some kind of settlement. And the judge adopted Apple’s view that Epic overreached, that its PR campaign was premeditated, and that its surreptitious “hotfix,” where it triggered the confrontation by secretly modifying Fortnite to enable off-App-Store transactions, was underhanded. She ruled that Epic broke its contract, the Apple Developer Program License Agreement (DPLA) that every developer must sign, with Apple. Had Epic not done these things, Gonzalez Rogers might have looked more sympathetically on the fact that Epic did not seek damages and it was trying to make life better for all developers by getting rid of the Apple App Store tax. Gonzalez Rogers took a swipe at Apple, suggesting it should not rest easy, as it stood “near the precipice of substantial market power, or monopoly power, with its substantial market share.” The judge did not need to point this out; she did so as a warning to Apple to back off. Yet the judge noted that Apple offered no evidence that its 30% commission on all App Store in-app gaming purchases — which amounts to a $14.7 billion take of total mobile game spending of $47.6 billion in 2020, according to measurement firm Sensor Tower — was justified on the basis of costs. Above: Apple accounts for a very small share of Fortnite revenues. 
“As described, the commission rate driving the excessive margins has not been justified,” Gonzalez Rogers said. “Cross-reference to a historic gamble made over a decade ago is insufficient. Nor can Apple hide behind its self-created web of interlocking rules, regulations, and generic intellectual property claims; or the lack of transparency among various businesses to feign innocence.” But she noted her hands were tied on ordering Apple to cut those commissions, given the current evidence and given that she did not find Apple to be an illegal monopoly. She said the U.S. Supreme Court has recognized that judges are not well suited to micromanage businesses. “Clearly the judge doesn’t love 30%. But there’s some percentage that’s allowed,” Hoeg said. “The judge says that in about 10 different places in her judgment that Apple is owed some money for this. I’m not sure 30% is the right number, but you’re owed something.” She said that App Store profit margins of 75% were extraordinary but again noted that success was not evidence of an abuse of monopoly power. And she noted that Apple never raised prices — it even cut its 30% rate in a couple of instances, reducing it to 15% with subscriptions in their second year; and this year Apple cut the rate to 15% for app makers who make less than $1 million a year. That means Epic had to show egregious anti-competitive behavior and back it up with a lot of evidence. Part of the problem was self-inflicted, Hoeg said. Epic asked for an expedited trial, and so it had to limit its witnesses, exhibits, and trial time. But in many places, the judge asked for either more evidence or wondered why Epic didn’t address certain arguments, like the public utility argument. Lastly, the judge suggested someone might look into how much Epic relies on “impulse purchases” from consumers to generate revenues for Fortnite. That was outside the scope of the antitrust case, but the judge mentioned it anyway. 
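The commission tiers discussed in the ruling are easy to put in concrete terms. This is a quick sketch using a hypothetical developer's gross revenue and the rates reported above (30% standard, 15% for developers earning under $1 million a year):

```python
def developer_take(gross, commission_rate):
    """What a developer keeps after the platform's cut of gross revenue."""
    return gross * (1 - commission_rate)

gross = 800_000  # hypothetical developer: $800K in annual in-app purchases

at_standard_rate = developer_take(gross, 0.30)   # the standard 30% commission
at_small_biz_rate = developer_take(gross, 0.15)  # 15% under the sub-$1M tier
savings = at_small_biz_rate - at_standard_rate   # what the rate cut is worth
print(at_standard_rate, at_small_biz_rate, savings)
```

On those numbers the halved rate is worth $120,000 a year to an $800,000-gross developer, which is why the tier boundary at $1 million matters so much to small studios.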
At the same time, the judge virtually instructed Epic how to build a better anti-monopoly case, Hoeg said. Epic’s small victory Above: Tim Sweeney is the outspoken CEO of Epic Games. Under California antitrust law, Gonzalez Rogers found that marketplace owners such as Apple can set their own marketplace terms, but she directed Apple to end its rules that prohibit game companies from communicating with players and steering them to better deals elsewhere. Apple had put in place “anti-steering” policies that directed developers to use its payment system — which generates the 30% commission — in part because it reduced security and privacy risks for players. The judge pointed out this enables Apple to monetize its intellectual property, and she noted that the evidence supports the argument that consumers value the attributes of privacy, security, and trustworthiness. Apple had argued that Nordstrom does not advertise prices inside Macy’s stores for its goods. But the judge said Apple created a “black box” where it enforced silence around competitive pricing elsewhere. “Apple [is] hereby permanently restrained and enjoined from prohibiting developers from (i) including in their apps and their metadata buttons, external links, or other calls to action that direct customers to purchasing mechanisms, in addition to In-App Purchasing and (ii) communicating with customers through points of contact, obtained voluntarily from customers through account registration within the app,” the judge said. She found Apple had unreasonably restrained competition and harmed consumers with a lack of information and transparency about policies affecting consumers’ ability to find cheaper prices, get better customer service, and weigh options regarding their purchases. The anti-steering rules stop consumers from learning from developers that there may be lower prices on their websites, she said. I found at least one expert who considered Epic to be the big winner because of this small victory. 
“Most experts realize that Apple won the battles but Epic won the war. This seems lost on a lot of people in the media as well as on Epic itself. The most important part of this case is that developers don’t have to transact commerce on the App Store — they can now steer customers to their own sites,” said Aron Solomon, the head of strategy at Esquire Digital, in an email to GamesBeat. “Epic wanted to be able to collect money directly on the AppStore. Again, insiders knew this was absolutely not going to happen. But the win is massive. Now Epic and any other developer don’t need to fork over 30% to Apple. They can collect payment on their own sites and pay 3% to Stripe for so doing. Stripe is the big winner here.” We’ll examine later whether that view is correct. How Epic’s win could become a big victory Above: Epic Games wants to free Fortnite. I think Epic should press the point it made about friction: very small inconveniences, such as forcing users onto the web to buy virtual currency rather than letting them buy it with an alternative payment option directly in the App Store, can produce so much friction that almost no one follows through. It seemed like the judge’s suggestions were tantamount to telling Epic how it should file an appeal, enter more evidence, and gain a greater victory in the appeals court. But if Apple resists complying with this small loss, the backlash against Apple could be big. Apple faces other lawsuits, and it has tangled with the antitrust regulators in both Japan and South Korea, where restrictions will likely be tougher. It continues to face a regulatory investigation by the European Union as well. Epic could use these allies to come to its defense, and they are likely to support Epic in its parallel antitrust lawsuit against Google, which has also been accused in a separate lawsuit of doing the same thing as Apple to Epic and Fortnite. 
The PR war Above: Epic Games’ opening statement slides make its case against Apple. If there was a miscalculation, it was Epic deciding it had to do the hotfix in a surreptitious way, where it secretly updated its Fortnite app to enable links to discounted prices off the App Store on its own website. Epic clearly broke its developer agreement. “You can tell from her decision, the judge wasn’t particularly happy with the way that Epic originally presented its case with the breach and the marketing and everything else,” Hoeg said. “That’s always going to impact somebody. A judge is a human being.” Epic evidently thought it was the only way it could show there was real demand for buying discounted Fortnite virtual currency outside the App Store. But here the judge made Epic pay Apple its 30% commission and other fees that added up to $6 million. At least the judge did not order Epic to pay Apple’s legal bills, which were probably a lot more. But the judge viewed Epic’s deception as a reason for Apple to take retaliatory measures, such as kicking Fortnite out of the App Store and terminating the developer support for Epic’s Unreal game engine. This could be very dangerous for Epic’s customers, as we note below. However, if Apple presses its legal advantage against Epic, it could lose the PR war. Epic could say it is a victim, and that antitrust laws should be changed to stop Apple from carrying out such retaliation. Epic could lobby Congress, which has bipartisan support for reining in big tech, to change the antitrust laws to stop such behavior from companies that have become extremely powerful, even if they don’t hold monopolies as defined by laws and precedents that are more than 100 years old. Indeed, if Apple continues to flex power and act like a bully, and Epic calls it out, there could be consequences. Developers could decide to leave the platform for others such as Android. It’s no surprise that mobile gaming publisher Zynga is starting to make games for the PC. 
But the judge pointed out there was a flaw in Epic’s “opportunistic” PR campaign. Epic’s Sweeney admitted on the stand that he would have taken a special sweetheart deal from Apple for Fortnite to pay lower royalties if it had been offered. Epic had positioned itself as the good guy, fighting on behalf of the little developers, but the judge took note of Sweeney’s response. Why Epic is in trouble Above: Epic Games might need a fast car to avoid trouble. The judge did not order Apple to allow Fortnite back in the App Store, and she said Apple was right to terminate its developer contract with Epic because of the hotfix deception. Apple could legitimately argue it can never trust Epic. “That kind of surreptitiousness is something that the court was never going to like,” Hoeg said. “And so Apple always has the right to say, ‘Well, look, I don’t trust you anymore.'” Epic could cave to Apple’s demands in some kind of settlement, but that’s a lot of crow to eat. On top of that, the judge rescinded her temporary restraining order that stopped Apple from cutting off developer support for Epic’s Unreal Engine. That order was in effect pending a finding that Apple had violated antitrust laws. Since the judge did not find that was the case, Apple is free to cut off the developer support. “Apple has the contractual right to terminate its DPLA with any or all of Epic Games’ wholly owned subsidiaries, affiliates, and/or other entities under Epic Games’ control at any time and at Apple’s sole discretion,” the judge said. Epic had pointed out this could mean that Unreal Engine would not be able to access Apple’s code updates and could quickly become incompatible. That means that the games of Epic’s Unreal Engine customers — who generate $97 million a year or more for Epic — could break. That would be a customer disaster for Epic and could compel many to switch to the Unity engine. 
I very much doubt that Epic considered this outcome when it decided to go forward with its lawsuit and “Project Liberty” campaign. Hoeg noted the judge clearly did not like Epic’s PR gambit at all. Epic must not let that happen, and so its best bet is to get back into good standing with Apple through some kind of settlement. What might happen next Above: Epic Games says Apple’s 30% cut is unreasonable. The judge’s order takes effect in three months, on December 9, and Apple will have a chance to appeal. How the injunction is written and how the order is shaped will matter enormously. Apple will have to permit developers to advertise better deals and lower prices if they go off the App Store to buy their digital items. But Apple does not have to enable consumers to make those purchases directly through alternative payment providers within the App Store, as she ruled that the “payment systems” monopoly Epic alleged did not hold up. That was a big part of the case that Epic lost. All Apple has to do is let developers tell consumers about discounts elsewhere and link to those discounts. Even if consumers go off the store to take advantage of those discounted offers, there is nothing stopping Apple from demanding a 30% cut of those sales, though it would be tougher to collect, Hoeg said. “The judge very specifically finds that Apple’s control of distribution, which is the store itself, and in-app payment processing, is legal,” Hoeg said. “That’s the most important part.” Right now, Apple collects the payment and keeps its 30% cut on its own store transactions. That could change, with developers collecting the purchase fees first. Apple would have to get its money after that. Here, the friction for Apple may increase, and the shoe will be on the other foot: Apple will discover how painful friction is when it is the one waiting to be paid. 
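The collection-order change described above can be sketched as two settlement flows. This is an illustrative model, not anything prescribed in the ruling: the 30% off-store rate is an assumption (the judge fixed no number), and the 3% processor fee is drawn from the Stripe figure quoted earlier in this article.

```python
# Two hypothetical settlement flows for the same sale.
APPLE_RATE = 0.30       # assumed; the ruling sets no off-store rate
PROCESSOR_RATE = 0.03   # assumed third-party card-processing fee

def in_app_settlement(price):
    """Today's flow: Apple collects first and remits the rest."""
    apple_take = price * APPLE_RATE
    return {"developer_receives": price - apple_take,
            "apple_receives": apple_take}

def off_store_settlement(price):
    """Possible future flow: the developer collects via its own
    processor, and Apple would have to invoice its share afterward."""
    processor_fee = price * PROCESSOR_RATE
    return {"developer_collects": price - processor_fee,
            "apple_must_invoice": price * APPLE_RATE}
```

The money is the same in both flows; what changes is who holds it first, which is exactly where the friction shifts onto Apple.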
In the future, developers could collect the money directly, and Apple would have to trust them to pay it. The judge said Apple has a right to monetize its intellectual property, though she suggested the 30% rate is not justified, and she did not say Apple has to be paid first. But she did not explicitly stop Apple from getting its 30%. For instance, Apple could decide to rewrite its developer contract so developers would have to submit to an audit, disclose the exact revenues they generate off the App Store, and share 30% of that with Apple, Hoeg said. The judge is not likely to issue restrictive rules on Apple that would protect developers; that kind of order would probably overreach into micromanagement and be overturned on appeal. Here, developers would have to trust Apple to implement these rules in good faith. As for an appeals court, it would not likely dispute a lot of the facts that were entered into evidence in the case, unless it found some egregious errors. Rather, it would look at whether the judge applied the law correctly, Hoeg said. Payment provider impact It would have been a big win if alternative payment providers could set up in the App Store, but this is not the case, said Chris Hewish, the president of payments company Xsolla, in an email to GamesBeat. Still, there are big opportunities. “I really see this as an opportunity for developers to start earning more money while developing closer relationships with their players and customers,” Hewish said. “We’ve already seen companies have great success running commerce for their mobile games from the web, with some games making up to 40% more revenue than they were when driving monetization solely through the app store. 
It’s a real opportunity for developers to get creative and leverage the data they normally wouldn’t receive to do things like customize pricing and offers.” The opportunity is that mobile developers finally have some clearance to run webshops for their mobile commerce, without fear of running afoul of Apple, he said. “Some companies were already doing this as part of crossplay, with mobile and web versions of their game linked on the backend,” Hewish added. “But this was an investment that very few mobile devs were willing to make. Now that fear is removed and we’re seeing counsel for multiple companies finally giving the green light to run webshops and steer their players to them. We’ve seen with our own eyes just how lucrative this can be, so we know the opportunity is large. But again, anyone looking to run a webshop needs to have solutions for digital taxes, VAT, fraud, and customer service.” After the dust settles it will be clear that larger mobile developers and publishers won, Hewish said. “These are companies with enough revenue and marketing savvy to create webshops where they can redirect and monetize their mobile players while providing a sense of community or exclusivity that they aren’t able to do within the app store,” he said. Christian Owens, the CEO of managed payments firm Paddle, said in an email to GamesBeat that Epic is appealing the decision because its real objective was always to obtain a seamless in-app purchasing mechanism, as well as the external payment concession, that permits it to bypass Apple’s cut entirely. But as noted, that bypass of the 30% cut is not a given. Apple can discourage consumers from going off the App Store because they could face security risks, possible scams, and billing disputes. “For software companies that have existing relationships with their customers, this will create huge opportunities to tap into new revenue,” Owens said. 
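Hewish's "up to 40% more revenue" figure is roughly consistent with the fee arithmetic quoted in this piece. Assuming a 30% store commission against a roughly 3% card-processing fee (the Stripe number Solomon cites; real fees vary), the per-dollar uplift works out to about 39%:

```python
# Net revenue per dollar of sales under each channel,
# using the fee rates assumed above.
in_app_net = 1.00 * (1 - 0.30)   # developer keeps $0.70 of an in-app sale
webshop_net = 1.00 * (1 - 0.03)  # developer keeps $0.97 of a webshop sale

uplift = webshop_net / in_app_net - 1   # roughly 0.386, i.e. about 39% more
print(f"webshop uplift per dollar: {uplift:.1%}")
```

This ignores the costs Hewish flags later (digital taxes, VAT, fraud, customer service), which eat into the webshop side, so "up to 40%" is a ceiling rather than a guarantee.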
Honor Gunday, CEO of Paymentwall, said in an email to GamesBeat that app developers can now place text and images inside the app that advertise promotions, discounts, and alternative payment methods. App developers will now be able to place payment widgets inside the app in different formats, such as widgets bearing multiple payment options, credit card forms, links to alternative local payment options, buy buttons, or paylinks that take users outside of the experience or that process the payment within the app itself, he said. He noted that app developers can also advertise or paste links to their own monetization methods on the app’s download page in the App Store, before users even download the app. App developers can send notifications about promotions and the availability of alternative payment options inside and outside the app. Free trials, subscriptions, one-time purchases, in-app purchases, user registration, and user data collection will no longer be prohibited by Apple. During its App Store review process, Apple used to weed out any app that included these features and simply would not approve it for listing. Now, Apple is prohibited from doing so, Gunday said. “Can Apple try to control what payment methods or payment providers are available on the platform? Since the ruling says that Apple cannot control this process anymore, I believe any app developer can choose any payment provider and include these inside the app, similar to how ecommerce apps choose their own payment provider,” Gunday said. “Digital commerce will now be like e-commerce on the app stores. “Apple allowed ecommerce, gambling and skill gaming apps to monetize the way they wanted, because Apple deemed these verticals too risky for their payments business appetite. After all, with e-commerce there can be delivery and quality issues, and with gambling there is strict regulation for payment providers. So these verticals had a free for all policy. 
Now, I interpret the same will happen for games and digital content – digital commerce.” He also said he believes this sets a precedent for Google Play and other hardware manufacturers running app stores, and potentially for Steam, Xbox, Nintendo, and PlayStation as well. “Games on these platforms were prohibited from using their own monetization options, and now they also may need to open up to competition,” he said. “The ruling was by a California judge and the ruling is mainly for the United States market but this ruling will influence how Apple monetizes the app store all around the world in my opinion as most game developers and app developers build and launch games in all markets not just a specific market.” The Apple App Store mainly monetized through credit cards and the physical and digital gift cards sold at various outlets, but it did not support bank transfers, PayPal, or other emerging payment methods that consumers may want to use. For example, Paymentwall has more than 200 payment methods integrated, and Xsolla said it has more than 700. Owens predicted we would see a massive push from the payments sector as companies race to develop better, cheaper, and smarter alternatives for sellers who can finally choose how they sell their products. Developer impact Above: This Lara Croft-like character shows off Epic’s latest animation tech in Unreal. Developers may get a benefit along the path that Epic paved, but Epic is not able to enjoy that benefit itself so long as Fortnite isn’t in the App Store. “This opens the path for developers to communicate with their users and to include a link in the app that diverts a user out of the app and App Store, perhaps to the developers’ website, to pay for the app outside the App Store at a lower price than in the App Store,” Rie said. 
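Gunday's description of in-app payment widgets amounts to an app listing several purchase routes for the same item. A hypothetical illustration follows; all option types, provider names, and prices here are invented for the sketch, not drawn from any real storefront.

```python
# Hypothetical catalog entry offering one item through several routes.
payment_options = [
    {"type": "in_app_purchase", "provider": "platform IAP",      "price": 9.99},
    {"type": "external_link",   "provider": "developer webshop", "price": 7.99},
    {"type": "card_form",       "provider": "third-party PSP",   "price": 7.99},
]

# Under the anti-steering injunction, the app may now surface the cheaper
# off-store routes to users instead of hiding them.
cheapest = min(payment_options, key=lambda option: option["price"])
```

The off-store routes can be cheaper precisely because they avoid the 30% commission, which is the consumer-facing effect of the injunction.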
If Apple doesn’t seek to delay the order or is denied, then as of December 9 developers can include a link in their apps that takes users out of the App Store and allows them to buy apps, subscriptions, or in-app purchases outside the App Store, Rie said. But as noted above, Apple could aggressively demand that they pay Apple its full 30% cut. Rie believes that game developers themselves could turn up the antitrust heat on Apple by lobbying. Both the House and Senate are considering legislation that would regulate the app stores. Developers can push Congress to enact legislation that benefits them, changing the law from protecting just consumers to also protecting competition at the level where developers sit. “Some of what you’re seeing in antitrust reform legislation being proposed right now is the notion that we should get rid of the consumer welfare standard and move to a different standard that really analyzes things on the competitor level,” Hoeg said. “There is definitely room for reform in a law that is more than 100 years old.” Apple’s next move Above: Apple is enforcing tougher privacy rules. The case will likely move to appeals, as Epic has already said it will appeal the decision. “While anything could change on appeal, our view is that the ruling is more likely to stand than not. Only Epic has noticed an intention to appeal so far,” Rie of Bloomberg Intelligence said. “Apple has not yet done so, though we think they probably will. We think Epic will have a tough time getting the ruling reversed. If Apple decides to appeal, it will likely also ask the court to stay its injunction pending the appeal. If the court says no, Apple can appeal the denial of the stay. We give 50-50 odds on Apple getting a stay, if, in fact, it goes forward with an appeal. If this happened, it would mean nothing changes for at least another year and likely longer.” Rie expects the U.S. 
Supreme Court, with its new conservative majority, would likely uphold the judge’s ruling and even overturn the single victory that Epic had under state law. In my opinion, the smartest thing that Apple can do is to get developers back on its side. It can declare victory, and then it can give developers what they want by slashing its developer fees to 15%. That will generate better profits for the game industry without doing much harm to Apple’s bottom line. Cutting $7 billion in profit from Apple each year would reduce its $57 billion profit from 2020 to just $50 billion, a reduction of about 12%. That’s worth it if it generates goodwill among developers. Those developers are already upset about another move Apple made, favoring privacy over targeted advertising, by curtailing the use of the Identifier for Advertisers (IDFA). I asked Hoeg why Epic didn’t bring up IDFA as part of its case, as it showed how opaque Apple’s process is when dealing with complaints from developers. Hoeg believes that Epic didn’t bring up IDFA because Apple has a pretty good case that it is competing with other phone makers by emphasizing privacy, and that is the reason for the decisions that it makes. Even if these decisions impact developers, Apple is justified if its primary aim is better user privacy. “If Epic brought that up, I see that as a problematic argument for Epic,” Hoeg said. “It does mean that you are participating in a platform held and controlled by someone else.” Ironically, this anti-targeted advertising move might hurt Apple. Because of its privacy stance, Apple might not be able to demand that developers track users and purchases, and so neither developers nor Apple would know whether an off-store purchase originated from the App Store or from a developer’s advertisement elsewhere. Indeed, Apple is not winning any popularity contests, and many will remember that the judge’s ruling said Apple engaged in bad behavior. 
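The back-of-envelope arithmetic above is easy to verify. A sketch using the $7 billion estimated fee cut and the $57 billion 2020 profit figure cited in the text:

```python
# Rough check of the fee-cut math cited above (all figures in $ billions).
profit_2020 = 57.0   # Apple's 2020 profit, as cited in the article
fee_cut = 7.0        # estimated annual cost of halving the commission

remaining_profit = profit_2020 - fee_cut   # 50.0
reduction = fee_cut / profit_2020          # about 0.123, i.e. roughly 12%
```

Both the $50 billion remainder and the "mere 12%" reduction hold up, which is the basis for the argument that the goodwill would be cheap.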
Rather than appeal the case, Apple might want to accept this decision as the least harmful, and it may still hope that all of this will blow over and Epic will cave. However, if I know Sweeney just a little bit, I’m pretty sure he’s not going to cave. So we may be in for years of litigation.
Why Chris Akhavan left EA to join blockchain gaming platform Forte
https://venturebeat.com/2021/10/23/why-chris-akhavan-left-ea-to-join-blockchain-gaming-platform-forte
Forte has raised a whopping $725 million. Yesterday, I wrote about my rule of game journalism: follow the money. That was all about the flow of money into blockchain games. And there’s another rule as well: follow the people. In yesterday’s story, I said I expected to see an exodus of people from traditional game companies to blockchain game companies. Chris Akhavan, who was a senior executive at Glu Mobile and then at Electronic Arts (which acquired Glu for $2.4 billion), has jumped ship to join Forte, a company that specializes in handling the technical infrastructure for blockchain games. That’s a big deal, as Akhavan spent nine years at Glu, and now he will become the chief business officer at Forte, which is making an end-to-end blockchain tech platform for games and other content. 
Akhavan left one of the biggest video game companies to make sure he wasn’t too late to catch the latest trend. You can expect to see more of this happen as an enormous amount of money goes into making blockchain games the next big thing. Blockchain is a transparent and secure digital ledger. It enables nonfungible tokens, or NFTs, to authenticate unique digital items. And that enables a new kind of business model in video games where players can own their own digital items in games. They can receive these as rewards, and they can resell them for a profit. Such NFTs can transform players from spenders into investors in games, and that could change the whole game industry. You can read more about that in my column from yesterday about the true believers in NFT games. Akhavan is one of those true believers as well, and we walked through the decision he made to jump into something new. It reminded him of jumping into social games, mobile games, and free-to-play. “As a lifelong gamer who has spent countless hours and money playing games, I quickly started to believe with full conviction that this technology will be transformative for gaming in the most positive of ways by aligning the interests of game developers with gamers, and enabling incredibly rich and vibrant communities and game economies,” Akhavan said. “I’m pumped about the creative and business opportunities this technology will create for our industry.” Here’s an edited transcript of our interview. Above: Chris Akhavan is the chief business officer at Forte. Chris Akhavan: My role at Forte, I’m joining as the chief business officer. I’ll run all things biz dev and corp dev, primarily responsible for our partnerships with game developers and other types of content creators. 
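The "transparent and secure digital ledger" idea mentioned above boils down to hash-linking: each record commits to the previous one, so altering history is detectable. A minimal sketch of that mechanism (real blockchains add consensus and digital signatures on top; the item and owner names here are invented):

```python
import hashlib
import json

def make_block(data, prev_hash):
    """Create a ledger entry whose hash commits to its own contents
    and to the previous block's hash."""
    body = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return {"data": data, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

# A toy item-ownership ledger: mint an item, then transfer it.
genesis = make_block({"item": "sword#1", "owner": "alice"}, "0" * 64)
transfer = make_block({"item": "sword#1", "owner": "bob"}, genesis["hash"])

# Tampering with the first record changes its hash, breaking the link
# that the second record committed to.
tampered = make_block({"item": "sword#1", "owner": "mallory"}, "0" * 64)
assert tampered["hash"] != transfer["prev"]
```

This tamper-evidence is what lets an NFT function as a verifiable record of who owns a unique digital item.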
Personally I found myself — probably like a lot of people — spending a lot of free time learning about blockchain technology, checking out early games in the space and other types of content. Things like Axie and Sorare and NBA Top Shot that were pioneering. It was all coming from a place of personal passion and genuine interest. As I thought about what I wanted to do next with my career, having now spent since 2010 working in mobile free-to-play, it started to become obvious that my passions were moving toward this new world of blockchain gaming. Any time you can align your passion with what you do for work, it’s a good thing to do. From an industry perspective, as I got deeper into the space, the energy reminded me of when I was fortunate to be a part of the early social gaming wave on Facebook back in 2007, and the early free-to-play mobile wave in 2010 when I was at Tapjoy. The excitement and the energy and the innovation and all these new teams forming left and right to pursue these opportunities that are going to be transformative. I also noticed that there’s so much friction involved in the current group of blockchain games out there. It’s a great sign that there’s a massive opportunity here. Despite all the friction involved in moving Ethereum from one wallet to another just to play a game–I’m sure you’ve seen Axie sharing that their day 30 retention is the same as day 90, which is just unheard-of. It shows how sticky games with these compelling economies can be. I felt like this was going to create some cool creative opportunities for game teams, and business opportunities as well. It’ll require the industry to evolve our skill sets and bring people in that have never worked in gaming as this whole thing takes off. GamesBeat: What’s interesting to me is that this moment in time feels a lot like mobile gaming’s start and social gaming’s start in one particular way. 
There’s this group of believers who believe it’s completely going to change and disrupt the industry for the better, improving things like monetization. Now you have a much better base of people to monetize compared to just monetizing two percent through free-to-play, the “whales only” model. But that’s spread all the way to the other side of–everybody thought Mark Pincus should be thrown in jail for running a scam or whatever. Gabe Leydon was running a machine to strip people’s money away. And now supposedly the SEC is going to declare NFTs illegal any day now. Valve decided not to allow blockchain games at all. (I don’t think these things will really happen, but it’s a sign of the skepticism). Akhavan: And then Epic welcomes blockchain games. Above: Forte was started by Kevin Chou and Josh Williams. GamesBeat: But there’s that spread. It’s even weird for me to see GameIndustry.biz has said they’re going to limit their coverage of blockchain games, because they don’t think it’s good for the world. It’s bad for things like climate change because of the environmental cost of mining. Akhavan: That’s rapidly changing. So many solutions are quickly coming on board that are eliminating that as an issue. GamesBeat: There’s a fair amount of gamers out there who have also said that this is a new evil. Partly because of the climate change thing, but also because they feel like there are scams involved. These questions, they arise when there’s something that challenges the status quo like this. And then everybody has to figure out what the answer really is. I don’t know if you went through and analyzed a bunch of these challenges. Akhavan: You’re hitting some very pertinent themes. There’s no doubt that plenty of people are in this space right now just to jump in and try to exploit what they might see as a short-term opportunity. It reminds me of the early days of social gaming. People used to crank out apps on Facebook and spam the news feed. 
Just horrible games of very low value that would spam every little activity to the news feed. In some ways it’s reminiscent of that behavior. I’m sure you’ll see people enter the space that are purely here for short-term speculation. But I think that will be weeded out very quickly. What got me excited is I’m seeing legitimate people get involved in the space, both from the blockchain world and also from the gaming world. It’s hard to come across a proven triple-A team that’s not going on to start a studio. A lot of those teams are creating blockchain games. Those are the kinds of teams that will not be in it for a quick buck. They’ll be very thoughtful. They want to deliver high-quality games. They’re in it for the value that the blockchain technology can bring to making their game more compelling. I look at this as a big opportunity to closely align the interests of game developers with gamers, enabling rich and vibrant communities and game economies. I think blockchain technology and blockchain gaming can fix some big problems. You touched on some of the problems with the current free-to-play landscape. A lot of the dynamics in free-to-play gear companies toward grinding this small pool of payers. Non-payers are okay, but they’re not who we really care about. That mentality is not good for the ecosystem. With this model, and you already see this with things like Yield Guild Games –amassing these NFT assets and loaning them out to people that maybe can’t purchase them themselves. They can borrow the assets and use those assets to earn in-game value. All of a sudden those people have real value they’ve created themselves in these games. The opportunities this opens up to serve the entire spectrum of players and gamers out there is going to be transformative in a positive and healthy way. Above: Forte is building wallets for blockchain gaming companies. I even think about things like user acquisition right now, which has gotten absurd in free-to-play mobile. 
So much of the value goes straight to ad networks and other platforms that serve that intermediary function of introducing players to games. I also feel like blockchain can shift some of that value directly to players themselves. Players get so invested in these economies that all of a sudden they have a real vested interest in growing that game’s economy. In many ways they can become the best form of marketing possible, bringing new players to the game. Through doing that, maybe they then capture the value that, in the current free-to-play landscape, is all going to ad tech players. I also think about how you can reward all this content creation that happens in games, whether it’s people creating content in a game, or contributing to a game’s community on places like Discord or forums or wikis. Being able to connect all that activity to the game’s economy is going to create some positive and healthy loops that strengthen gaming and game communities. That’s the perspective I had coming into the space. Part of why I joined Forte–I know you’ve talked to Josh Williams, our CEO. It’s very near and dear to Josh’s entire vision for Forte, to build this platform with that long-term perspective in mind. We want to work with high-quality games. We’re not interested in any short-term speculation stuff. Another big part for Forte was just day-one thinking about regulatory compliance and doing things the right way so this will be a sustainable business for us and our partners. That’s what got me excited about Forte in particular. It’s pretty amazing that these guys started to work on this back in 2018, and now we’re ready to go as the industry is shifting in this direction. GamesBeat: If you had to pull back a bit and look at the legal question that may have motivated Valve–there’s that Washington state law. It mentions that you cannot win something of value in a game. If you do, if it has real-world value, there’s other implications coming out of it.
Things like gambling regulations and taxation. I don’t know whether that will cause people to go back and become legal experts to figure out how to parse what that might mean. It also seems like that could give the traditional big companies–I think Forte would love to have them as customers, right? But it might give them pause to sit on the sidelines even longer while this new question gets resolved. Akhavan: It’s a good point. I can’t speculate as to whether that was Valve’s specific concern, or whether they had other concerns about simply not controlling other parts of the value chain in their ecosystem. But from my perspective, again, this is why I got excited about Forte. Forte, from day one, has planned on regulatory bodies looking very closely into this space. They didn’t cut any corners. Things like all the know-your-customer aspects of having a custodial wallet, anti-money-laundering, all the money transmittal licenses you need to have, tax compliance so that you’re dealing with all that in the appropriate way when people cash out value in the game, that’s all fundamentally built into the Forte platform. With Forte this was all fully expected. It would not be long until all the regulatory bodies said, “Wow, this industry is blowing up. Let’s make sure this follows the rules.” That’s a big piece of the value that we can add for game developers. We’re their partner in understanding the regulatory environment and making sure we’re covering all these pieces you need to cover. Above: Kevin Chou, then-CEO of Forte, and Mike Vorhaus of Vorhaus Advisors at our 2019 GamesBeat Summit event. GamesBeat: There was another group of people I’ve heard concerns from about whether the industry has found the right kind of NFT game yet. They point to Axie. There’s evidence that it’s really working well, but then there’s some concern that it’s almost Ponzi-like. 
The latest players have to hope that there’s going to be more players coming into it down the road so they can sell their stuff to them. Things like price fluctuations could happen. That could trigger a collapse. Akhavan: You’re hitting on exactly the kinds of things I was talking about before. For game developers that are currently working in free-to-play or premium, moving into the blockchain world is going to require a whole different set of skills and ways of thinking about economy and game design. No one’s going to be successful if they build a game that’s purely reliant on new users coming in. Games that are going to be enduring and sustainable need to fundamentally be games that people want to play day after day because they’re getting pure value out of it. They want to maintain it, growing value and putting value into those ecosystems. In addition to all the tech that Forte’s built, we’ve also invested in expertise and services. Our model is, when we work with a game studio, we have people working on tokenomics and helping with economy design precisely to hit on the things that you’re mentioning. We want to avoid creating mechanics or loops that end up being reliant purely on new players coming in. Instead, we want to create games that are genuinely fun and engaging and compelling that people want to play because of what they’re getting out of the value of the game. We definitely don’t want to be involved in speculation-driven gaming. GamesBeat: When you were at Glu and EA, did you get a chance to discuss these topics with people there? Did they give their blessings, or did they say, “Why do you want to go out into the wild west? Why not stay here?” Akhavan: I won’t speak to those companies in particular, but I can comment on–at this point Forte is talking to so many big publishers. We’re talking to the biggest companies in the world that are interested. 
It’s fair to say that within even the biggest publishers, there’s at least a number of people at each of those companies who are interested in this space and actively exploring it. Along with that you still have plenty of doubters. Maybe the way to think about it is the size of the company–smaller studios right now tend to be the ones that are moving the fastest into the space. Smaller startup teams that are coming from phenomenal talent, proven triple-A game development talent. These people are going out and in many cases starting new studios just to pursue blockchain gaming. I think they will be the first ones to capture this opportunity. At the same time, big companies are quickly seeing, especially over the course of this year–their eyes have been opened to the potential of this space. I don’t think they’ll be late to the game either. A lot of people that might be doubters today, I wouldn’t be surprised if just six months from now, given how fast the ecosystem seems to move, they might quickly find themselves in the believer camp. GamesBeat: Did you feel some FOMO (fear of missing out) as well? I have to move to this new thing! Akhavan: I will fully admit I felt the FOMO. As I personally got super engaged in blockchain gaming and blockchain technology, to me it was like, “Wow, this feels like this is going to be a big part of not just the future of gaming, but the future of many different things in our digital lives and beyond that.” For me there was an element of–I just felt like I needed to be involved. That genuine pull, when you feel that, it’s a sign to take the leap. At this point I’m glad I did. Above: Forte enables blockchain game economies. GamesBeat: Have you heard any interesting reactions from people you’ve talked to yet? Akhavan: In general, most people are not shocked. I’ve talked to a lot of people I know in the industry, and they get it. 
Everyone’s looking at this space like, “Wow, this might be the next big moment for gaming.” I’ve been pleasantly surprised that no one’s told me, “You’re a moron.” I might have had a few conversations with people who are clearly not sold on it yet, but I’d say by and large everyone’s said, “Yeah, I get why you’d go after that.” GamesBeat: Was there a different reaction when you were switching into mobile games or Facebook games? Akhavan: I feel like Facebook and mobile games–I almost had a similar arc. I’m looking back to the early days of mobile, when I was at Tapjoy. The App Store had finally rolled out in-app purchases. Back then the companies we worked with were not the big publishers. It was the early pioneers. They changed their names later, but I remember people like TinyCo and Pocket Gems. Back then they had a different name that escapes me. That initial wave was driven more by those small studios jumping. I still remember the view of a lot of the big companies back then. “Mobile’s too small. It’s not worth our time yet.” The difference this time around, I think, is that the cycle will be much shorter. You’re already seeing so many big companies actively looking at blockchain. I feel like the early stages of mobile gaming–the way I recall it is that the big companies firmly sat on the sidelines for quite some time before they realized that mobile was going to be a big deal. Perhaps, having gone through that experience of being a bit late to mobile, maybe that will drive a faster and more serious look into the world of blockchain gaming from the big players. This is certainly looking like it could be the next major shift in the ecosystem. GamesBeat: Turning the tables a bit to me and my industry, there were so many publications that didn’t want to cover crappy Facebook games. They didn’t want to cover crappy mobile games. The core of the journalistic industry has been eviscerated. Lots of journalists are out of work. 
We’re still standing as a small thing 13 years later because we’ve always moved to cover these new things. Akhavan: Do you typically get good engagement on blockchain stories? Above: Josh Williams is CEO of Forte. GamesBeat: The last one in particular, the Axie story, was fairly popular, where we were describing their fundraising and all the different reasons. I did an interview with Jeff Zirlin for that. We got into as much of the weeds as we could there. That story definitely had good readership, broader readership than normal. Akhavan: It’s good to hear that people are engaging. I listen to a lot of the mobile free-to-play podcasts. I’ve noticed over the past couple of months that blockchain is the topic. It’s permeating across a lot of the media landscape, a lot of the conversation that’s happening around gaming right now. GamesBeat: My logic is that when people start betting billions of dollars, somebody’s going to find the right solution, the right model. That business is going to take off. When you bet these billions of dollars, they’re not all going to be wrong. It’s the classic lesson. Follow the money. If the money is all going one way– Akhavan: With gaming it’s just so obvious to me. We’ve all spent our lives playing games that have gray marketplaces. I remember games we had at Glu, Racing Rivals was one, where there were all these Facebook groups full of people figuring out their own ways to buy and sell cars with each other. I play Counter-Strike, and I’ve put all this money over the years into gun skins and knife skins. There’s an actual marketplace where I can buy and sell those in-game items. I feel like with gaming it’s almost assured that this concept of in-game economies that are real and measurable and transparent–gamers have been demanding this for many years. We as an industry have not provided that to audiences in a reliable way. That’s what I think blockchain is going to be able to solve.
It’s a core gamer desire that’s been around as long as I can remember. GamesBeat: It always seems like sitting on the sidelines with the status quo feels safer. But the way that disruption works, that’s only true for so long. Akhavan: And then it happens fast. You can imagine a world where blockchain gaming really does take off. In some sense, if you’re just making a normal free-to-play game where people don’t own their stuff and have no ability to trade value–if you’re stuck in that world and the world shifts really quickly, then those games, in some sense, become obsolete. Why would you spend all your time and energy in a traditional free-to-play game if you have all these new high-quality games where you actually own a piece of the economy? Companies need to be mindful that this is the kind of thing that could be a very fast shift, if it indeed takes off the way it looks like it will. Above: Rina Hahm (left) of Facebook’s Audience Network talks with Chris Akhavan (formerly) of Glu Mobile (center) and Joseph Kim of Lila Games about monetization best practices and myths. GamesBeat: The strategy of sitting on the sidelines as a big company and waiting for it to shake out, and then buying whoever is left–that’s a familiar strategy. But it’s not going to work when Softbank puts $680 million into Sorare, a 30-person company. It’s not as if EA is going to turn around and buy Sorare now. Akhavan: You bring up a good point. Buying your way into this market could be very expensive, given the excitement around the space, the valuations. There’s a risk that if your strategy as a game studio is just to wait around and buy one of these successful companies, that could end up being a very expensive strategy to take on. It’s exciting. It’s this huge shot of energy into the ecosystem. For me personally, I think the industry needed it. I’m really excited. GamesBeat: Anything else you wanted to mention about your transition? Akhavan: I do have some KPIs that might be interesting. 
Forte now has more than 10 million wallets across the network of partners we work with. It’s now processed more than $1 billion in sales across Forte-powered games and apps. We have 30-plus developers integrating right now that represent more than 100 million MAU. We’re seeing big numbers in the pipeline. It’s going to be exciting when they all hit the market. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved.
The DeanBeat: Facebook's ambitions to be the metaverse | VentureBeat
https://venturebeat.com/2021/10/29/the-deanbeat-facebooks-ambitions-to-be-the-metaverse
The DeanBeat: Facebook’s ambitions to be the metaverse Above: Mark Zuckerberg speaks at Facebook Connect. Facebook CEO Mark Zuckerberg’s decision to change his corporation’s name to Meta yesterday at the Facebook Connect online event was a historic moment. The decision faced heavy criticism and turned into a meme. Seamus Blackley, co-creator of the Xbox, tweeted that Zuckerberg’s new Meta logo should come with the caption, “Here’s a schematic representation of my testicles.” Many more unkind and funny jokes mocked Zuckerberg’s grand plans. People said the move was tone deaf, coming at a time when Facebook has caught so much flak for putting profits above caring for people or political peace. While many people made fun of Zuckerberg, I saw some shrewd moves. Lots of people have mocked the metaverse, the universe of virtual worlds that are all interconnected. But I’ve been thinking about it ever since reading novels such as Snow Crash and Ready Player One.
It has been decades in the making, and while it’s here in some small forms like Second Life, Grand Theft Auto Online, and Roblox, it isn’t really here yet. As futurist Matthew Ball said, the metaverse is something that you should feel you’re inside of. And as Zuckerberg said, you should feel a sense of presence, or that feeling you have been transported somewhere else. I have been mocked for believing in the metaverse. Against the advice of some, I created a metaverse event in January 2021, and it had 30 panels on the metaverse and brought together many of gaming’s leaders. (We’ve got another conference, GamesBeat Summit Next on November 9-10 where we’re talking about the metaverse again, and our second annual metaverse event will come in January.) While people laughed at me, I stayed the course. And I never expected Facebook, a company with enough money to do the job, to embrace the metaverse like it did yesterday. Zuckerberg had some awareness he would be criticized. In a press event ahead of the announcement, he said (given the controversies), “I just want to acknowledge that I get that probably there are a bunch of people who will say that this isn’t a time to focus on the future. And I want to acknowledge that there are clearly important issues to work on in the present. And we’re committed to continuing to do that and continuing the massive industry-leading effort that we have on that. At the same time, I think there will always be issues in the present. So I think it’s still important to push forward and continue trying to create what the future is.” All of us science fiction geeks have been laughed at. The smart pundits at places like CNN are mocking Zuckerberg for being such a geek. But Jensen Huang, the CEO of Nvidia, is famous for saying “we’re living in science fiction.” He meant that AI struggled for so long and finally began working around six years ago.
Now AI is huge, with 8,500 startups working on so many new technologies, and it is leading to advances in other areas. And yes, AI is one of the necessary breakthroughs that we need to build the metaverse. Because we succeeded with AI, we have a chance to succeed with the metaverse. I think a lot of smart people realize that, and they’re dreaming about it too alongside Zuckerberg. I know we are a long way from the real metaverse that we all want to be amazingly immersive and instantaneous and ubiquitous regardless of our locations in the world. What we are hoping for is just a long way from reality. But Zuckerberg’s cool images and videos got the feeling right. If he can deliver on his animated visions with the real thing, then that would be really something. If I were to bring up my favorite adage again — follow the money — I would conclude that so much money is going into the metaverse that it is going to happen. You don’t orchestrate something so huge, something on the scale of the Manhattan Project, and then come out of it on the other side without an atomic bomb. The metaverse will happen because capital is betting that it will happen, and I’ll grant that Zuckerberg has some wisdom in seeing this. The war between Facebook and Apple Above: Facebook’s demo of the metaverse. But he’s not the only one who sees it. I see the war brewing between Facebook and Apple. They both want to control the future of computing. Apple won in one respect with mobile devices, snaring more than 600 million people who use more than a billion Apple devices. Facebook failed to get anywhere with its own smartphone efforts. But Facebook has more than 2.9 billion monthly active users who use its ad-based services such as Facebook, Instagram, and WhatsApp for free. Apple made $20.6 billion in net income on $83 billion in revenue, while Facebook made $9.2 billion in net income on $29 billion in revenue in the most recent September 30 quarter.
Apple CEO Tim Cook doesn’t really like this state of affairs as he views Facebook as a kind of parasite. The business models of these companies are opposites. Apple sells devices, while Facebook sells ads. Apple tries to charge high premiums for its products, far more than rivals such as Microsoft’s Windows or Google’s Android products. It has the highest quality, but it is often inaccessible to the masses. Facebook, by contrast, gives away its products for free, or it sells its Oculus Quest 2 virtual reality headsets for low prices ($300 or more). Its advertising model generates enough revenue to subsidize the free products. In return, Facebook asks for a lot of personal data so that it can better target ads and generate more value with each ad. Above: Facebook said the metaverse will make you feel presence. Cook considers this to be a violation of privacy, and, while Apple didn’t say so, it tried to hobble Facebook by changing the rules for its Identifier for Advertisers this year. The IDFA data can no longer be used unless users explicitly give their permission to be tracked for ad purposes. Not many people are doing that, based on the very direct wording of the permission prompts. This move has hurt businesses such as Facebook. Suffice to say, they’re at war. And while Zuckerberg said that Facebook is investing (or losing) $10 billion a year in its Facebook Reality Labs division, it’s fair to say that Apple has been investing a huge amount of money in its own VR/AR efforts. Some folks laughed when Magic Leap didn’t succeed after raising more than $2 billion for its mixed reality glasses. But that is chump change compared to what the likes of Apple, Facebook, Microsoft, and Google are spending. No one wants to lose this war for the next generation of computing. Zuckerberg showed he was fully aware of what is at stake here.
“We basically think of the metaverse as the successor of the mobile internet, in the sense that the mobile internet didn’t completely replace everything that came before it,” Zuckerberg said. “It’s not that the metaverse is going to completely replace something that comes before it. But it’s the next platform. In that sense, it’s not a thing that a company builds. It is a broader platform that I think we’re all going to contribute towards building in a way that is open and interoperable.” That’s why Zuckerberg made his moves so early. He bought Oculus for around $2 billion in 2014. He’s been regularly investing in it every year, refining headsets and stoking the VR game and app ecosystem. And yesterday, he threw down the gauntlet by changing his company’s name to Meta and even changed the stock symbol to the metaverse-like MVRS. Zuckerberg said a high-end standalone VR system, dubbed Project Cambria, is being developed, as are the Nazare augmented reality glasses. Those projects are aimed at heading off anything Apple has coming to the market. The right message Above: Facebook is headed toward the metaverse. While many made fun of this as a waste of money, Zuckerberg aligned himself with allies. He said that the metaverse should be open and not built by just one company. By doing this, he could deflect critics. Rather than have people point fingers at Facebook’s walled garden, Zuckerberg could rally some friends together and point fingers at some party that was even less open: Apple. The message was, “We’re open. We’re all in this together.” The subtext was, “They’re not.” He said that gaming will be the way that people step into the metaverse for the first time, as gaming has the infrastructure for economies through virtual goods and engagement with fans. That may be an attempt to drive a wedge: through the IDFA moves, Apple has pretty much said it doesn’t care about game developers, and those game companies have been hurt by Apple’s changes.
Zuckerberg said that privacy and safety have to be built into the metaverse from the start. That is a move to get kids and parents on board. In his announcements, he showed that — through products like Horizon Workrooms, Horizon Home, and Horizon Worlds — he plans the metaverse to be a place where we will live, work, exercise, and play. He also said that VR wasn’t the only way to log in to the metaverse. You could also do so with AR glasses, or use a PC or console or smartphone or any other device you want. It isn’t going to be something for only the elite (like Apple customers). It is going to be something for everybody. This is a populist message and one that matches the views of many open metaverse advocates. The poisoned name Above: You can log in on a work account in Facebook’s idea of VR work. However, since Facebook has faced a lot of controversies related to alleged privacy invasions, algorithmic bias, favoring profits over the mental health of its users, and antitrust issues, Zuckerberg doesn’t have the high road. His credibility has been under attack, thanks to leaks by the whistleblower Frances Haugen, who exposed toxic business practices. Facebook’s own cred is also in a tough state. Kent Bye, a VR podcaster, noted in a Clubhouse room that Oculus hasn’t behaved in the most open ways compared to its rival SteamVR. If you want to get cross-platform capability, it has been a lot easier on SteamVR than Oculus. And so a lot of people looked right through Facebook’s intentions in renaming itself. It seemed like renaming was convenient. It shifted attention to the metaverse ambition, but it also deflected criticism of the Facebook brand. Facebook has a PR problem now, and it faces deeply skeptical observers. It will be hard to win allies, win over lost users, and convince regulators it can be trusted. The long road Above: Working in a Facebook Horizon Workroom. These were actually real people talking to me.
And so the course for Zuckerberg should be to not try to take the high road. He should take the long road. That is, he should really live up to his ambitions and knock the metaverse out of the park. If he delivers the metaverse and Apple doesn’t, it will be free. It will be accessible, and a lot of people around the world who don’t have access to the finest technology will be able to use it. That is, Zuckerberg should push his business model into a full-scale war with Apple. If he gets the users into the metaverse, the brands will come, and they will give him advertising revenues like he has never seen before. And that will enable him to subsidize his devices and bring them out at prices that everybody on the planet could afford. And I would go one step further. He talked about the creator economy, about people like streamers who are making a living by being influencers that are courted by brands. He said the goal was not only to create a good business, but to create a full economy for creators and developers, so that they get to share the benefits of the metaverse. Play-to-earn Above: Facebook’s metaverse vision. Zuckerberg should not only give away his products and services for free, as he has done in the past, but he should also pay us. He should pay us for giving his social media and his devices our attention. You’ve heard of universal basic income. Zuckerberg’s company is one of those that could make it happen, paying us to use his devices so that we can make a living in his ecosystem. If Zuckerberg does this and raises the tide for all boats, then we would be happy to give him what he wants in exchange. So far, we’ve been giving away our personal information for far too cheap. We should control it, and sell it back to him. That’s something that Apple is never going to do. It’s against its business model or its religion. It wants to respect our privacy, but it also wants to command the highest brand status and the highest profits. 
But Apple is at risk of falling further behind Facebook when it comes to reaching the most people with the next generation of computing. If Zuckerberg takes the long road, spends his money wisely, and comes up with great devices, he could reclaim his cred and lift many in the world out of poverty. Facebook, or Meta, has a natural chance to be the good guy. It should not drop this ball. Other forces are certainly at work here, like Google, Microsoft, Amazon, and even Netflix. It’s a complicated war, and not just one involving Apple and Facebook. There are also the game companies like Epic Games and others who favor decentralization, or taking power away from big tech. There are small companies now that are espousing paying us to play games, under the “play-to-earn” banner. And some nation-states may put their feet down in the name of holding onto power. You can argue with Zuckerberg’s ethics and business tactics. But don’t underestimate him or Meta. The first shot has been fired in the big war.
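The “play-to-earn” model mentioned above often runs through guilds that buy game NFTs and lend them to players (“scholars”) who keep a share of what they earn. As a rough illustration only (the class, the names, and the 70/30 ratio below are all hypothetical, not any real guild’s or platform’s mechanics), the split can be sketched like this:

```python
# Hypothetical sketch of a play-to-earn "scholarship": a guild lends an
# in-game NFT to a player (the scholar) and earnings are split by an
# agreed ratio. Integer token units are used, as on-chain amounts
# typically are, so the arithmetic is exact.
from dataclasses import dataclass, field

@dataclass
class ScholarshipLoan:
    asset_id: str
    guild: str
    scholar: str
    scholar_share_bp: int = 7_000  # scholar's share in basis points (70.00%)
    balances: dict = field(default_factory=dict)

    def record_earnings(self, amount: int) -> None:
        """Credit in-game earnings (integer token units) to both parties."""
        scholar_cut = amount * self.scholar_share_bp // 10_000
        self.balances[self.scholar] = self.balances.get(self.scholar, 0) + scholar_cut
        self.balances[self.guild] = self.balances.get(self.guild, 0) + (amount - scholar_cut)

loan = ScholarshipLoan(asset_id="axie-123", guild="guild-treasury", scholar="scholar-1")
loan.record_earnings(1_000)  # a day's earnings in smallest token units
print(loan.balances)  # {'scholar-1': 700, 'guild-treasury': 300}
```

Real guild arrangements layer escrow, on-chain settlement, and variable splits on top, but the core accounting is this simple.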
The DeanBeat: Why NFT game startups will win while big publishers wait for regulation | VentureBeat
https://venturebeat.com/2021/11/12/the-deanbeat-why-nft-game-startups-will-win-while-big-publishers-wait-for-regulation
The DeanBeat: Why NFT game startups will win while big publishers wait for regulation Above: NFTs in games are starting to come on strong. Just like the metaverse has captivated the imagination of the game industry, so too have nonfungible tokens (NFTs). But while the metaverse has been dismissed as science fiction hype, NFTs have stirred even more haters. After listening to a lot of talks at our GamesBeat Summit Next online event this week, I’m convinced that the startups are going to win the NFT gaming market as it moves toward mass adoption. I’ll back up my argument, but first I have some explaining to do because some very big gaming companies are turning like battleships into the NFT market. If you look at Google Trends, you’ll see that NFTs started picking up in February and skyrocketed after related NFT sales like digital art and NBA Top Shot took off.
Dapper Labs has now seen sales and resales of those NFTs top $780 million. In March, an NFT digital collage by the artist Beeple sold at Christie’s for $69.3 million. NFT sales hit $1.2 billion in the first quarter, $1.3 billion in the second quarter, and a whopping $10.7 billion in the third quarter as games such as Axie Infinity took off. It’s a logical assumption that NFTs will sweep through gaming, much like they have through art. NFTs use the transparent and secure digital ledger of blockchain to uniquely authenticate one-of-a-kind items. This establishes digital ownership for players in games like Sky Mavis’ Axie Infinity, which has seen $2.4 billion in trading this year. This ownership model allows players to sell their leveled-up game characters for a profit, enabling a new business model for games dubbed “play-to-earn,” where players earn rewards. NFT resales can easily be tracked and credited to either the original creators or the owners themselves. And so players or even the original creators can benefit from item resales. It’s part of a Leisure Economy that is lifting people out of poverty around the world.
Velocity
Above: Axie Infinity lets players battle with NFT Axie characters.
Amy Wu, a partner at Lightspeed Venture Partners and a crypto/NFT/game investor, said at our event that the NFT games space went from zero to 100 in about two months, when Axie Infinity went from tens of thousands of dollars in revenue per month to $15 million per day. It has scaled up to two million daily active users and introduced a different crowd of players to “play-to-earn” gaming. They’re earning money much like the gold farmers did in other games, but on a scale that is “unprecedented,” she said. “It’s a pretty incredible pace,” she said. 
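The resale-tracking and creator-royalty mechanic described above can be illustrated with a toy, in-memory ledger. This is a simplified sketch only, not how any real blockchain or NFT standard works, and every name in it (the class, the token IDs, the 5% royalty rate) is a hypothetical chosen for illustration:

```python
# Toy sketch of NFT ownership with a creator royalty on resale.
# A real chain replicates and cryptographically secures this ledger
# across many nodes; here it is just a couple of dictionaries.

class ToyNFTLedger:
    def __init__(self, royalty_rate=0.05):
        self.owners = {}        # token_id -> current owner
        self.creators = {}      # token_id -> original creator
        self.royalty_rate = royalty_rate

    def mint(self, token_id, creator):
        self.owners[token_id] = creator
        self.creators[token_id] = creator

    def resell(self, token_id, buyer, price):
        seller = self.owners[token_id]
        royalty = price * self.royalty_rate
        self.owners[token_id] = buyer
        # The original creator is credited on every resale -- the
        # mechanic that lets creators keep benefiting from item trades.
        return {"seller_gets": price - royalty,
                "creator_gets": royalty,
                "creator": self.creators[token_id],
                "seller": seller}

ledger = ToyNFTLedger()
ledger.mint("axie-001", "first-player")
receipt = ledger.resell("axie-001", "new-player", 100.0)
```

At a 5% royalty, a $100 resale credits $5 back to the minter while the seller keeps $95, and the ledger records the new owner, which is the whole "players own and profit from their items" argument in miniature.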
“That shows what the paradigm of Web 3 and decentralization can do.” Blockchain enthusiasts believe that Web 3, or decentralization, is the next wave in computing that will return transactions to a peer-to-peer level and get rid of all of the big tech platforms and middlemen. The success of the games has caught the imagination of indie studios and game publishers. Wu said we’re seeing a huge influx of capital into the intersection of blockchain and gaming. Publishers have moved from skepticism to action. Hundreds of companies are now trying to do blockchain gaming, and many of them are “crypto native” game companies run by cryptocurrency enthusiasts without much experience making games, she said. These companies have direct relationships with their communities on the web, Discord, or Twitter, without the middlemen of the app stores. “Gaming native” teams are also coming into the space, some via the big game publishers, Wu said. Those teams haven’t quite grasped that Web 3 and the community mindset are critical to success. Player guilds like Yield Guild Games are fueling the growth of blockchain games. What matters about these games is that the community is driving the game. “It gave us a stone to use against the Goliaths,” said Jeff Zirlin, cofounder of Sky Mavis, maker of Axie Infinity, on a panel. “When you’re a startup, you’re always trying to figure out what is your advantage? What is your secret sauce that you can use to create a niche for yourself in this market? For us, that was really this idea of community ownership, empowering gamers using this technology to create a new dynamic between gamers and developers.” Hundreds of guilds now exist in places like the Philippines to support the games. 
“The fact that people can create things of value inside the game, and then sell them to other people, is creating the future of work, where people are willing to do more things inside these games and virtual worlds,” said Gabby Dizon, CEO of Yield Guild Games, on a panel. “It’s leading to all sorts of creativity in the economy, from the players who are doing the actual work. People are, for example, becoming land brokers or doing esports and streaming.”
Breaking the myths
Above: Strauss Zelnick (right), CEO of Take-Two Interactive, shows off his NFT to Mike Vorhaus.
If you don’t know what NFTs are, that is one of the obstacles to mainstream adoption of blockchain games. NFTs, cryptocurrency, and blockchain games have been tough to understand for developers, and consumers often have no clue how to use things like cryptocurrency wallets. About 97% of our audience believes that the public does not have a good understanding of blockchain. But startups like Forte are working on the infrastructure to make the technology more accessible, speed up transactions, and lower the costs. They and others are signing up traditional game makers to onboard them to the proper technology. (This is why Forte raised $725 million today, and why it also invested money into the $10.5 million funding for Mark Long’s Neon.) “We really want to explore all of these levers to pull in the play-to-earn (economy),” said John Linden, CEO of Mythical Games, on a panel. “We’ll be introducing a whole series of new games very soon from other studios and other great gaming companies on both PC, console, and mobile….We could say with nearly 100% certainty, we will see at least one of the top three or four publishers in games enter the space within the year. It’s definitely happening.” One of the big points of contention is that Bitcoin and Ethereum transactions use a lot of computing power because they are based on “proof of work” systems. 
Instead, more of these protocols and sidechains are being built now on “proof of stake” systems that don’t use a lot of computing power. Sidechains are offloading the transactions so that they can do more transactions per second, and again reduce the computing load from transactions on the blockchain. The fees — which represent the use of computing power — are dropping to nothing. Many of the blockchain haters among gamers bring up this point about blockchain games causing climate change. At the height of the Bitcoin mining phase, that was a real concern. But mining has dropped off, and it’s irrelevant to the subject of NFTs anyway. And when you think about it, an all-digital cryptocurrency financial system is likely to be more climate-friendly than all the physical banks in the world and other wasteful aspects of our financial system based on paper money. Others object to the market being filled with scams and money launderers. But compliance is in the interest of the startups, and they’ve got the lawyers and the sophistication to understand the anti-money laundering laws and “know your customer” requirements. While big companies could point to all of these myths and perpetuate the resistance to blockchain games, they aren’t doing that. Instead, the CEOs of big companies recognize the opportunity. Andrew Wilson, CEO of Electronic Arts, told investors that NFTs are the future of gaming. Zynga’s Frank Gibeau just hired a chief blockchain gaming leader. Strauss Zelnick, CEO of Take-Two Interactive, said he is a fan of NFTs. “We know that people love to collect things,” he said. “We know that people value rarity. What makes a collectible valuable? An intersection of a perception of underlying value and quality, and rarity. NFTs can offer that. They can offer a digital version of that. There’s no question people value digital goods. 
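The proof-of-work versus proof-of-stake contrast above is the crux of the energy argument, and it can be sketched in a few lines: proof of work is a brute-force race to find a hash below a target, whose cost grows exponentially with difficulty, while proof of stake merely picks a validator weighted by stake. This is a toy sketch of the two ideas, not any real protocol; the function names and the tiny difficulty value are illustrative assumptions:

```python
import hashlib
import random

def proof_of_work(data: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce so SHA-256(data + nonce) has `difficulty_bits`
    leading zero bits. The hashing race is the energy sink."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def proof_of_stake_pick(stakes: dict, seed: int) -> str:
    """Pick a validator with probability proportional to stake.
    No hashing race, so almost no computation is spent."""
    rng = random.Random(seed)
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

nonce = proof_of_work(b"block-42", 12)   # ~4,096 hashes at this toy difficulty
winner = proof_of_stake_pick({"alice": 70, "bob": 30}, seed=7)
```

Doubling the difficulty bits roughly squares the work in the first function, while the second stays constant-time regardless of "difficulty," which is why moving to proof of stake (and pushing transactions onto sidechains) drops the energy cost so dramatically.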
It’s digital, rare, and collectible.”
The big question
Above: Clockwise: Kevin Chou of SuperLayer, Jay Chi of Makers Fund, Amy Wu of Lightspeed Venture Partners, and Nick Tuosto of Griffin Gaming Partners.
The big question is who will take NFTs to mass adoption. The upstarts are capitalizing on that trend, and they all expect to reach mass adoption and disrupt gaming’s status quo. The big companies are paying attention, mainly because their investors are asking them questions about NFTs. The CEOs of Zynga, Ubisoft, Electronic Arts, Activision Blizzard, and Take-Two Interactive have all said they’re investigating NFT games. History favors the startups, particularly if you examine the explosion of free-to-play mobile games, which gave us the roadmap for predicting this new era. At the outset, as the app stores were created on iPhone and Android, many hardcore gamers hated the new games. They condemned them for schemes such as pay-to-win, loot boxes, addict targeting, and scams. But over time, it turned out that designers could create fun games and win over a much larger audience for free-to-play titles. It turned out that you could design games that weren’t scammy, that rewarded skill, that were fair for all, and that let players express themselves through buying skins. And the game designers at many small companies who did this were richly rewarded, and they unlocked much larger audiences to the point where mobile games are more than half the industry and free-to-play is about 78% of the market. While hardcore gamers number in the hundreds of millions, more than 2.9 billion people in the world play games, according to market researcher Newzoo. And while companies like Electronic Arts, Ubisoft, and Rockstar Games tried to succeed in mobile, they mostly ran aground. Instead, the mobile-first game companies took the market early and they never let go. 
The biggest mobile game companies include Supercell, Playrix, Niantic, MiHoYo, Tencent’s TiMi, Scopely, Wildlife, Nexters, Jam City, and PlayStudios. You’ll notice the list doesn’t include most of the big game companies — with the exception of Activision Blizzard, owner of King.
The strategy tax
Above: Dean Takahashi moderates a session with Chris LoVerme (center) of SpacePirate Games and Witek Radomski of Enjin.
Twenty years ago, Microsoft was creating the original Xbox. And while some renegades wanted to move fast, others were trying to kill the project. This internal debate was deemed by insiders to be the “strategy tax.” It held the company back and slowed it down, and it wound up entering the console market about 20 months after Sony launched the PlayStation 2. Sony won that generation. The same thing could happen here because the big companies are caught in some big dilemmas. Their legal teams are exploring the dangers of NFTs, and they have discovered plenty. Certain activities, such as letting players earn rewards and then sell them, could mean that NFT tokens could be classified as securities, and big companies would have to get a license to sell securities. Valve decided to ban all NFT games out of its concern that they could be ruled to be illegal gambling, particularly in the state of Washington, which has a strict gambling law. To close off the opportunity to all game design types just because one type might be classified as gambling is pretty shortsighted, said Chris LoVerme, CEO of SpacePirate Games, whose title Age of Rust was banned by Steam. He saw hidden motives in Valve’s move. “I truly think it has to do with competition with their Steam marketplace,” LoVerme said. 
“It’s about insulating themselves with their walled garden approach with the Steam marketplace and controlling their APIs and how games work with a centralized model for the games that they host.” Witek Radomski, chief technology officer of Enjin, said on a panel that Valve’s blanket ban was overly aggressive on a major game platform since developers have so many ways to build blockchain games. “I don’t think the world understands exactly what blockchain games are yet,” Radomski said. “It would have been a better move to investigate games that are more like specific casino games or those that focus on the crypto aspect of it.” Skill-based games, for instance, are not considered gambling in 41 states in the U.S. “We’ve seen many games with modding platforms and open APIs that grow so much bigger,” Radomski said. “We think Valve should work with game developers and see what the concerns are. We have seen the problem with mobile platforms as well with cryptocurrency. But I don’t think it is something they can fight for very long.” We ran some polls during our event that were telling. About 84% of our audience thought that Valve should allow NFT games. Another 60% of the audience said they were considering making blockchain games. And the audience predicted it would take two to three years for blockchain games to go mainstream. Big game publishers don’t want to tread in this space until they get clarity from the Securities and Exchange Commission or Congress. They are waiting for the regulatory shoe to drop. This may be a very prudent strategy, or it could be yet another example of the strategy tax. About 71% of our audience thought regulators will crack down on blockchain gaming and 62% believe that NFT game companies are overvalued. I know that big game companies are studying this space. But if they wait too long, the free-to-play disruption will be repeated with NFTs.
The investment boom
Above: The play-to-earn panel at GamesBeat Summit Next. 
Meanwhile, new unicorns are being born among the NFT startups. Sorare, a Paris-based NFT fantasy soccer game maker with 30 people, received a $680 million investment from SoftBank at a valuation of $4.3 billion. Animoca Brands raised $203 million this year at a $2.2 billion valuation. Axie Infinity maker Sky Mavis raised $153 million at a $3 billion valuation. And Mythical raised $225 million this year at a $1.25 billion valuation. Forte, which is arming many of these companies with blockchain infrastructure technology, raised $185 million. The usual strategy for big companies is to wait for startups to fight it out and then buy one of the winners. That strategy is working a little, as we’ve seen companies like EA buy Glu Mobile and Playdemic recently, and Zynga has executed this strategy exceedingly well. But with NFT games, I don’t see the corporate boards buying startups like Sorare that have become so valuable so fast. And NFT game startups are receiving plenty more capital while the big companies are having trouble hanging on to their employees who are leaving for the startups. These startups aren’t held back by a strategy tax. They’re burning the boats and going all-in on NFT games. Some of these companies may get caught up in regulatory violations, like the companies that raised money through initial coin offerings (ICOs), only to be struck down with fines from the SEC. But if the companies design their NFT games correctly, they don’t have to worry about getting banned or fined, Radomski said. So if 1,000 startups are born, we may lose 50 of them to regulatory actions. But that means plenty of companies will survive and take the market.
Which startups will win?
Above: Mythical Games makes vinyl characters in Blankos Block Party.
Wu believes that the “crypto native” game makers have been winning so far, and they’re being rapidly joined by the “gaming native” companies. 
The combination of these native communities will likely be the ones that can design games that are fun and enjoyable to the mass market, while still creating the economic effects that can turn a good game into a financial flywheel. Tuosto said a big franchise or brand hasn’t come into blockchain gaming yet. But based on talks with Forte, one of Griffin’s investments, a lot of conversations are happening. It’s just that the big brands have a lot to lose, and so they have to be careful as they move into a space where regulators may intercede. Companies have to set up compliance for money transmittal. “It requires a new kind of specialization where you harness resources from both crypto and gaming,” Tuosto said. “When that match is made, it will be absolutely transformational.”
The long game
Above: Yield Guild Games is featured in Play-to-Earn, a documentary set in the Philippines.
Sky Mavis’ Jeff Zirlin said that many of Axie Infinity’s customers have made a living playing the game in the midst of the pandemic when places like the Philippines had 40% unemployment. And since Sky Mavis distributes its game on the web, it doesn’t pay a 30% tax to the app stores. It also doesn’t pay publisher fees, and it doesn’t have to spend a lot on user acquisition since its Discord users spread the word for it. And that’s why the company has money to share with its own users, Zirlin said. This world has no room for middlemen like platform companies or big tech. And that’s why the blockchain game companies see themselves as revolutionaries and crusaders, betting on the power of decentralization and open source to dismantle the old ways. I believe they have a fighting chance to accomplish this mission because so much capital is coming in to bet on them and not the status quo. While it is disruptive, Sky Mavis is trying to play by all the rules in the world. 
In the Philippines, the company said to players that if the local government requires them to pay taxes on earnings in a game, then they should do that. Zirlin is excited because of the income generation among underserved people around the world. About 25% of the players are “unbanked,” meaning they have no bank accounts. And 50% have not previously used cryptocurrencies, while 75% are new to NFTs. “It’s beautiful that we have been able to introduce crypto to people that have traditionally not been using it,” Zirlin said. “We’re getting this technology in the hands of the people that really need it. And that is super rare.” Big publishers still have a chance to buy in. Much like Bitcoin, NFTs have already seen a rapid rise and a crash, and a rise and a crash. (About 40% of our audience thought the NFT bubble would burst and 60% think that NFT startups are overvalued). The acquirers could buy on the crash. They could also spread FUD and get regulators to crack down more quickly through lobbying efforts. But if the scenario we saw in free-to-play mobile games repeats itself, then the NFT game startups will race ahead at valuations that will be too big for them to be acquired, and eventually, they will be the ones doing the acquiring. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. 
© 2023 VentureBeat. All rights reserved. "
"The DeanBeat: Building the metaverse for free | VentureBeat"
"https://venturebeat.com/2021/11/19/the-deanbeat-building-the-metaverse-for-free"
"Game Development View All Programming OS and Hosting Platforms Metaverse View All Virtual Environments and Technologies VR Headsets and Gadgets Virtual Reality Games Gaming Hardware View All Chipsets & Processing Units Headsets & Controllers Gaming PCs and Displays Consoles Gaming Business View All Game Publishing Game Monetization Mergers and Acquisitions Games Releases and Special Events Gaming Workplace Latest Games & Reviews View All PC/Console Games Mobile Games Gaming Events Game Culture The DeanBeat: Building the metaverse for free Share on Facebook Share on X Share on LinkedIn Jensen Huang gets the chip industry's highest honor. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Jensen Huang received the chip industry’s highest honor, the Robert N. Noyce award, last night in an event with his peers. And while the evening was all about semiconductors and AI, one thing the cofounder and CEO of Nvidia said caught my attention. Speaking on stage with former New York Times journalist John Markoff, Huang said that Nvidia has found a purpose in using its artificial intelligence and simulation technology made possible by its graphics chips and deep learning neural network algorithms. His company is gathering all of the experts it needs to simulate climate science so the world’s biggest supercomputers will be able to model climate change and predict how the Earth will change over decades. And Huang will use this to warn us all about the fate of the planet. On top of that, he will build us the metaverse for free. In order to do that, Nvidia will have to build something on the foundation of its Omniverse, the “metaverse for engineers” that has become a platform for simulating “digital twins.” For instance, BMW is building a digital twin of a car factory, and once it gets the simulation right, it will build the exact same thing in the physical world. 
Above: Following on the heels of announcements from both Facebook and Microsoft, Nvidia became the third major tech company over the past few weeks to announce its push for the metaverse.
On Thursday evening, Huang echoed something he announced at the Nvidia GTC conference last week. He said that Nvidia plans to build the world’s most powerful AI supercomputer dedicated to predicting climate change. Named Earth-2, or E-2, the system would create a digital twin of the Earth in Omniverse. “All the technologies we’ve invented up to this moment are needed to make Earth-2 possible. I can’t imagine a greater or more important use,” Huang said last week. He said that the simulation would be so precise it would need meter-level accuracy, and if necessary Nvidia would spend the money to offset the computing power used to run the simulation. The challenge would be to take decades of climate data from satellite recordings and then ingest it into the supercomputer for simulation purposes, Huang said.
Above: Jensen Huang, CEO of Nvidia.
And last night, he added, “We’re going to go build that digital twin of the Earth. It’s going to be gigantic. This is going to be the largest AI supercomputer on the planet. It’s going to bring some of the brightest computer scientists, the brightest physical scientists and climate scientists on the planet to go and use that computer to predict” how the Earth will change over decades. It hit me that if Nvidia builds this physically accurate world in the Omniverse, that would be the equivalent of a real world metaverse. And so I went up to Huang afterward and I took a selfie with him. 
Then I asked him, “If you build this digital twin of the Earth, do you get the metaverse for free?” And he said, “If we build the digital twin of the Earth, we will get the metaverse for free.” It’s going to be a huge mission, and Nvidia can justify finding the funding sources and expertise necessary to take on climate change as a problem by building out this Omniverse, which could replicate a lot of the details of the Earth that game developers and other metaverse developers would need to create their own virtual worlds, like a replica of New York City that we could use in a game. For sure, some of the details are going to be different. As Richard Kerris, head of development at Omniverse, told me in an interview, game developers will want a lot of the world to operate fast so the world can run at a smooth 60 frames per second, and he noted that some of the details of the world would be too fine-grained for that purpose. But this is the thing. Nvidia is going to build this version of the world anyway. The game developers can use that for their own purposes and use the Omniverse as the free foundation for whatever they want to build for entertainment purposes.
Dual-use technologies
Above: Jensen Huang of Nvidia and Dean Takahashi of GamesBeat.
I have always wondered how ambitious companies are going to build something like the metaverse, and now I see that some things are more important to simulate, and if we accomplish those things, we’ll get a very special bonus in the form of a replica of the Earth, which we can use as a jumping point for our imaginations to build our own versions of the metaverse. This reminds me of “dual use” technologies that were once developed for the military, like the original 3D simulators that trained soldiers how to drive tanks or fly jets. Those simulators became the foundations for modern 3D video games, from virtual reality simulations to fighter jet games. 
Here, we’ll have one of the most important scientific projects of all time, with contributions from so many experts. Maybe the governments of the world can provide Nvidia with some of the funding it needs, or Nvidia, which has a market value of $789 billion, could afford to build this on its own. And once it’s built, it can become the foundation for the modern metaverse. My point is this. We don’t often get a lot of things for free. But if we invest in solving a problem that could decipher our climate and help everybody understand something that could kill our planet, then we should be grateful to get something like the metaverse — which would help us enjoy our virtual worlds without killing the planet — as a byproduct.
Omniverse Avatars in enterprises and games
Above: Jensen Huang, CEO of Nvidia, introduces Omniverse Avatar.
On another level, Nvidia is also creating AI-like characters to model the behavior of pedestrians and self-driving cars. It is making the investment to re-create human-like behavior, and last week it also unveiled its Omniverse Avatars, which are 3D characters that can use AI to serve as things like non-player characters in games. Huang said these Omniverse Avatars will be full of cool technologies, including speech recognition, speech synthesis, natural speech synthesis from text, multi-language translation, face animation, eye tracking, and other things that will go into making quality game characters. “When you’re playing these games, you’re just going to talk to the characters inside,” he said. “They’ll understand. They’ll literally understand. You’ll say, ‘Go up there and take a right.’ You’ll be able to talk to your team, talk to your partners. It’s going to be so interesting. They’ll talk back. They’ll have computer vision. They’ll go forward and see there’s a tank coming around the corner, and because they see it, not just because it was coded into the game that way. They’ll see it. 
It’ll be a lot easier for characters to use perception than for the programmer to write everything into the software.” Huang thinks that one of the more important reasons to create the Omniverse Avatars is to provide better customer service.
Above: The Earth-2 simulation is coming.
“There’s a severe shortage in drive-throughs, fast food. We’ve now made it possible for you to have a very good conversation with an avatar,” he said. “You can say it in just about any way you like. You could ask it for recommendations. There are a lot of ways to describe a burger now, but it will still recognize what you mean. Everything from customer service, intelligent retail checkout, you name it. There are 25 million restaurants and stores, and everybody’s short on labor. This is going to be one of the top areas.” The Omniverse Avatars are another example of dual-use technologies. They will be built for customer service, which is a compelling business model. (And I hope it won’t simply eliminate human jobs.) But it will also provide the human-like avatars that we’re going to need by the millions as the realistic characters inside a game world or a metaverse world. These Omniverse Avatars could be detached from the Omniverse if we’d like them to be, Huang said. But if they learn new behavior that is more realistic, that should be fed back into the Omniverse to improve the models for the characters. Again, I brought up the notion of how the Omniverse could become the metaverse — at least the kind that game developers would create.
Above: Jensen Huang tells John Markoff that he has prepared a 90-minute speech for his award ceremony.
“There are so many different applications for the metaverse,” Huang said. “For consumers, for video games and consumers we’ve already spoken about, it will likely be that we’re the engine. We’re the underlying technology. We’re the engine of it. 
In the case of enterprise use, particularly industrial use, we will be the entire engine, the entire simulation engine for the digital twin.” He added, “In the case of edge — this retail application we were just talking about is really edge computing from enterprise terms. It’s just so cute that you don’t think about it as an edge computing application anymore. But this is almost the ultimate edge computing application. Instead of a fleet of tractors or a fleet of AMRs, autonomous moving robots, it’s a bunch of little animated characters. This is the thing that moves those pixels. These are really robotics applications. They’re just really cute.” And he said, “Enterprise edge remains one of the great opportunities for the metaverse, for Omniverse. On the other extreme, of course, is all of the consumer stuff. With consumers, the way we’ll work with the industry is very similar to the way we work with the video game industry today. We’ll provide the underlying engine, the infrastructure. It might be in the cloud. It might be GFN. It might be in their cloud. We’ll provide the engine-level infrastructure.” "
"Deloitte: Why the chip shortage is lasting so long | VentureBeat"
"https://venturebeat.com/2021/12/06/deloitte-why-the-chip-shortage-is-lasting-so-long"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Deloitte: Why the chip shortage is lasting so long Share on Facebook Share on X Share on LinkedIn Deloitte sees the chip shortage lasting through 2023. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. If a single $1 chip isn’t available, it could hold up the shipment and sale of a device, appliance, or vehicle worth much more, according to a new report by accounting and consulting firm Deloitte. With the COVID-19 pandemic and the spike in demand in the recovery, the semiconductor industry has seen one of its longest shortages, from the spring of 2020 through the fall of 2021. Deloitte expects it will last at least through 2022, pushing the lead times out for the shipment of some components into 2023. The impact is still being felt across PCs, smartphones, datacenters, game consoles, other consumer goods, and especially the auto sector. The cumulative revenue impact of the shortage will likely be over $500 billion in lost sales globally from 2020 to 2022. 
The next semiconductor shortage could be as big or bigger than this one. Given the ever-increasing importance of chips to multiple industries, the economic harm could be even greater, the firm said. So it did a study on what semiconductor manufacturers, distributors, customers (the semiconductor supply chain), and governments can do to avert another potential catastrophe. The problem is so big that no single company, or even industry, can make a difference on its own. Some might think that today’s shortage is a one-time event. As long as we don’t have a once-in-a-century global pandemic, a massive fire at a key Japanese chip plant, a Texas freeze, and a ship stuck in the Suez Canal — all coinciding — the next shortage couldn’t possibly be as severe, right? Above: The chip industry has seen a half-dozen dips and shortages over the decades. But Deloitte said that in the coming decade, it’s a near certainty that some combination of events such as a global recession, major weather event, and disruption near a critical maritime port or strait could all occur roughly at once. The chip manufacturing industry and supply chains, as they currently exist, inherently are vulnerable to disruptions, which makes shortage inevitable. Over the past three decades, we’ve seen six shortages of duration or magnitude similar to today’s. Sometimes shortages occur or are exacerbated by external shocks such as the tech bubble or 2009 recession, but sometimes they “just happen.” Adding capacity in the chip industry has always been expensive and chunky. It occurs in waves driven by both technology and market forces and has long lead times between deciding to build a fab (or semiconductor fabrication plant) and that fab producing its first output (finished wafers).
So, the real question is not if there will be another shortage, but when, and how severe? Breaking the bullwhip Above: Solutions to the chip shortage. The bullwhip is the seesaw of sales when supply and demand are out of sync due to poor communication in the supply chain. All of the various players need to do all of their respective parts, work together, and at the same time not create a glut. Deloitte said the companies should choose a specific action or a combination of actions depending on what role they play in the broader semiconductor ecosystem and value chain. The whole chip industry is committing to increasing overall output capacity at an unprecedented level. Capital expenditures from the three largest players will likely exceed $200 billion from 2021 to 2023, and could reach $400 billion by 2025. Governments have committed hundreds of billions more. Deloitte expects global wafer starts per month of 200-millimeter wafers (which are processed in chip factories and sliced into individual chips) to increase from about 20 million in 2020 to 30 million by the end of 2023. Capacity will grow at both the 200mm and 300mm wafer size, at about the same rate for each. To be clear, growth in 200mm comes mainly from increasing capacity in existing fabs, rather than the construction of entirely new plants, which will account for nearly $12 billion of capital equipment spending between 2020 and 2022. Above: Deloitte’s capacity predictions for chips. From a technology perspective, capacity at mainstream nodes and the more advanced 300mm process nodes (under 10nm, mainly at 3nm, 5nm, and 7nm — where nm refers to nanometers, or the width between circuits) will grow more rapidly than more mature process nodes. It is worth noting that demand is growing for both wafer sizes and at all process nodes, not just the most advanced. Increasing capacity broadly by 50% in only three years will more than cover any future shortage, right? The answer is not so obvious. 
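The bullwhip dynamic described above — delayed demand signals that each tier of the supply chain pads with its own judgment — can be illustrated with a toy simulation. This is a hedged sketch only: the function name, tier count, safety factor, and demand numbers are invented for illustration, not Deloitte's data or model.

```python
# Toy model of the bullwhip effect: each tier (e.g. retailer -> OEM ->
# distributor -> fab) sees only the orders from the tier below, with a
# one-period reporting delay, and pads its own order by a safety factor.
# All numbers are illustrative.

def simulate_bullwhip(consumer_demand, tiers=3, safety_factor=1.2):
    """Return the order stream at each tier; swings grow up the chain."""
    orders = [list(consumer_demand)]
    for _ in range(tiers):
        downstream = orders[-1]
        upstream = []
        for t in range(len(downstream)):
            # One-period delay: react to last period's downstream order,
            # padded by a safety factor to hedge against shortage.
            seen = downstream[t - 1] if t > 0 else downstream[0]
            upstream.append(seen * safety_factor)
        orders.append(upstream)
    return orders

# A single 20-unit demand spike at the consumer level...
demand = [100, 100, 120, 100, 100, 100, 100, 100]
chain = simulate_bullwhip(demand)
# ...produces a bigger, later swing at every tier above it.
print([round(max(o) - min(o)) for o in chain])  # → [20, 24, 29, 35]
```

The point of the sketch is the qualitative behavior: the fab at the top of the chain sees a swing roughly 73% larger than the real change in consumer demand, and sees it several periods late — which is why better, faster communication across tiers matters.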
On top of raising overall capacity, the chip industry should build local capacity. Chip manufacturing is geographically clustered and needs to be distributed across more regions. The 2020 level of concentration in East Asia (including Japan and China) is nearing 60%, which has attracted government attention from the United States, Europe, and China, and plans are already underway to build new plants in those countries or regions, as well as Israel, Singapore, and others. Above: The chip industry is concentrated in Taiwan and the rest of Asia. Moving the needle on the geographical concentration of chip supply is hard. There are over 400 semiconductor manufacturing facilities globally, and there are announced plans to add 24 new 300mm fabs by 2022, but only 10 new 200mm fabs in the same period. Some of those are in South Korea and the Taiwanese region. Adding a couple of dozen in new locations outside these clusters may help. Deloitte said that new locations will only cause concentration in East Asia to drop a few points, meaning it would still produce more than half of all chips by 2023. The industry should also become strategically lean — chip buyers, distributors, and retailers need to determine which level of lean to choose. There is such a thing as too lean. Breaking the bullwhip on the demand side is needed. Original equipment manufacturers (the system companies), distributors/suppliers, and customers are affected by the bullwhip effect, where delayed communication between stakeholders at each tier in the supply chain is amplified by judgments placed on demand signals. This needs to change. Above: The U.S. share of chip manufacturing has fallen over decades. Smart operations capabilities are vital to semiconductor manufacturing, which is complicated and sensitive in nature, largely automated, and enabled by capital-intensive factories.
Capabilities that facilitate digital process modeling (such as digital twins), operations monitoring, factory operations synchronized with materials availability, and responsive factory scheduling adjustments allow the factory operations teams to operate efficiently and with high asset utilization. And Deloitte said that many manufacturers began digital transformations by the spring of 2021. Continued innovation is necessary to be more adaptive to future supply chain-driven business disruptions. All of this will require close communication. Demand is growing roughly as quickly as (or more quickly than) planned capacity growth. Demand drivers include 5G, artificial intelligence and machine learning (AI/ML), intelligent edge, and internet of things (IoT). Some of these are about delivering increasingly powerful chips to products that already use a lot of chips, but some are about adding chips to products that had no chips before. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,279
2,021
"Epic Games shows off Unreal Engine 5 with stunning simulated city in The Matrix Awakens demo | VentureBeat"
"https://venturebeat.com/2021/12/09/epic-games-shows-off-unreal-engine-5-realism-with-stunning-the-matrix-awakens-demo"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Epic Games shows off Unreal Engine 5 with stunning simulated city in The Matrix Awakens demo Share on Facebook Share on X Share on LinkedIn IO is a simulated person built with Epic Games' MetaHuman Creator. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Epic Games showed off the realism of Unreal Engine 5 with The Matrix Awakens demo amid a huge computer-generated city. It looked phenomenal, with a completely CG actress Carrie-Ann Moss and a version of actor Keanu Reeves that’s indistinguishable from a computer creation or the real thing. The Matrix Awakens: An Unreal Engine 5 Experience was shown at The Game Awards, and it’s a preview of The Matrix Resurrections film debuting on December 22. 
It’s also a pretty good sign that Epic Games is serious about building its own metaverse, or enabling the customers of its game engine to build their version of the metaverse, the universe of virtual worlds that are all interconnected, like in novels such as Snow Crash and Ready Player One. One of the coolest parts of the demo is that Epic Games’ team spent the better part of the past year creating a virtual city that looks like a real city. And Epic Games is going to give that city away for free (with The Matrix parts removed) to the developers who use Unreal Engine 5. The city is 4.138 kilometers wide and 4.968 kilometers long, slightly larger than the size of downtown Los Angeles. Above: This is not a picture. It’s a simulation in Unreal Engine 5. Kim Libreri, chief technology officer at Epic Games, said in an interview with GamesBeat that the idea was to show people a glimpse of the future of interactive entertainment and storytelling that is possible with Unreal Engine 5. “We built a whole city for this, as we’ve been working on new techniques, new streaming, new world management systems that allow you to build a whole city,” Libreri said. “All this exists in the engine.” The simulation of the city — the 35,000 walking pedestrians who look like real people — just keeps functioning in a procedural way as you move the camera through the 3D space. There is a freeway chase scene where you can shoot the tires out of Agent Smith cars chasing you. “We’re giving away that city without Neo and Trinity,” Libreri said. “It’s a full, living, playable city. If anybody wants to do a racing game in that city, they will be able to use all the blueprints, all the gameplay logic, all the AI, all the destructible cars. When you see the cars exploding, that’s not us adding explosions, where we animate it by hand.
It’s all procedurally exploding because when a car gets impacted, it gets deformed, it will shatter its glass. This is all procedural. So what we wanted to really lean into this concept of The Matrix is about the simulated universe. It’s all simulation.” The demo Above: The older Keanu Reeves looks real here, but all of this is simulated in Unreal Engine 5. The eight-minute demo shows the first scene from the original movie starting with a young Thomas “Neo” Anderson (Reeves) with his face lying on a desk. It looks like video from the movie, but it is entirely computer-generated, as Epic’s team painstakingly re-created that scene. It shifts to the “bullet time” video from the original movie, and the older Reeves of today walks into the scene. And then we shift to a scene of Morpheus, young Neo, and the modern Reeves. Again, this scene is entirely animated, but I thought it was video of the real actors. It goes on like this, showing action scenes in a city, but it’s all animated. The actors joke that it was thrown in to have something sexy for the marketing people. There were plenty of times I did a double-take and thought I was seeing reality, only to have Libreri say it was animated. As Moss says in the video, faces and bodies change as easily as we change clothes. “Everybody loves games. Everybody loves movies. Some of it is live-action. Some of it is computer generated,” Libreri said. “Everybody is looking at the teaser and debating what’s real or not.” Libreri proceeded to show the polygons behind the scenes to show what was real or not, like the opening scene. “When Neo opens his eye, that’s Unreal,” he said. And it runs on consoles, not a supercomputer. The Matrix Awakens: An Unreal Engine 5 Experience is now available to download for free on PlayStation 5 and Xbox Series X/S. The Matrix crew Above: A gameplay scene in the demo. This boundary-pushing technical demo is an original concept set within the world of Warner Bros’ The Matrix.
Written and cinematically directed by Lana Wachowski, it features Reeves and Moss reprising their roles as Neo and Trinity and — in a reality-flipping twist — also playing themselves. The project reunited many of the crew that worked on the seminal The Matrix trilogy, including James McTeigue, Kym Barrett, John Gaeta, Libreri, Jerome Platteaux, George Borshukov, and Michael F Gay, in collaboration with teams across both Epic Games and partners, such as SideFX, Evil Eye Pictures, The Coalition, WetaFX (formerly Weta Digital), and many others. Wachowski, Libreri, and Gaeta have been friends since the days of the original trilogy of The Matrix. “When I told them I was making another Matrix film, they suggested I come and play in the Epic sandbox,” said Wachowski, in a statement. “And holy shit, what a sandbox it is! I imagine the first company to build an actual Matrix — a fully immersive, persistent world — will be a game company and Epic is certainly paving the way there. It’s mind-boggling how far games have come in twenty years.” She added, “Keanu, Carrie, and I had a blast making this demo. The Epic sandbox is pretty special because they love experimenting and dreaming big. Whatever the future of cinematic storytelling, Epic will play no small part in its evolution.” Building the demo Above: A real Keanu Reeves walks into a simulated scene. While the movie doesn’t use Unreal Engine 5, there is a small scene (with a dojo) in the film that was built with the engine. With Wachowski and many of the original crew on board, the team set out to create a demo with Unreal Engine 5 that’s nothing short of spectacular: an experience that merges art forms in exciting new ways. The demo starts out with a cinematic that features exceptionally realistic digital humans, before morphing into a fast-paced interactive experience of car chases and third-person shooter action.
All of this takes place in a huge, bustling, and explorable open-world city that — like the simulated world of The Matrix — is incredibly rich and complex, Epic said. Sixteen kilometers square, photoreal, and quickly traversable, it’s populated with realistic inhabitants and traffic. Many of the characters are built with Epic’s MetaHuman tool, which creates characters with real human faces. The experience is a tangible demonstration that UE5 offers all the components you need to build immersive, ultra-high-fidelity environments. And that’s Epic’s sales pitch. A realistic simulated city Above: There are 7,000 simulated buildings in the Unreal Engine 5 demo. The city is inspired by parts of San Francisco, New York, and Chicago, but it isn’t an exact copy of anything. It has global illumination (like the sun shining down and casting shadows) and ray tracing for reflections and shadows. Asked how hard it was to build the city, Libreri said, “It took us a while to get our heads into it. But it’s pretty sustainable. And we’re actually going to put out tutorials for all customers. How do you use SideFX today to do procedural layout? How do you bring stuff in the engine?” Despite the city’s complexity, a relatively small core team was able to create the experience thanks to a set of procedural tools including SideFX’s Houdini. Procedural rules define how the world is generated: from the size of the roads and the height of the buildings, all the way down to the amount of debris on the sidewalks, Epic said. Using this workflow, you can modify the input rules and the whole city will change, redefined by those new instructions. For small teams looking to build open worlds, that is incredibly powerful, Epic said. It means you can regenerate the entire city, right up until the last day of delivery, and continue to adjust and improve it.
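The rule-driven regeneration workflow described above can be sketched in a few lines. This is purely illustrative — the function, rule names, and values are invented and mimic the idea, not Epic's or SideFX's actual tools: input rules plus a seed fully determine the city, so changing one rule regenerates every block consistently.

```python
# Toy procedural city generator: rules + seed -> city. Changing a rule
# and rerunning regenerates the whole city under the new instruction,
# while the fixed seed keeps everything else identical.
import random

def generate_city(rules, seed=0):
    rng = random.Random(seed)  # deterministic for a given seed
    return [
        {
            "road_width_m": rules["road_width_m"],
            "building_floors": rng.randint(*rules["floor_range"]),
            "has_debris": rng.random() < rules["debris_density"],
        }
        for _ in range(rules["blocks"])
    ]

rules = {"blocks": 1000, "road_width_m": 12,
         "floor_range": (4, 60), "debris_density": 0.3}
city = generate_city(rules)

# Modify one input rule and regenerate: every block reflects it,
# but building heights and debris stay the same thanks to the seed.
rules["road_width_m"] = 18
wider = generate_city(rules)
assert all(b["road_width_m"] == 18 for b in wider)
assert ([b["building_floors"] for b in wider]
        == [b["building_floors"] for b in city])
```

The design point is the one the article makes: because the city is a function of its rules rather than hand-placed assets, a small team can keep adjusting it "right up until the last day of delivery."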
This opens up so many creative possibilities — and proves that any team can make a triple-A-game-quality open world in UE5, irrespective of size, Epic said. As for the city in The Matrix Awakens: An Unreal Engine 5 Experience, it’s a living, breathing environment that never stops. Because the systems that drive its actors are part of a global simulation that is evaluated continuously, the activity that takes place in the city is far more consistent and believable. Block after block, it ticks with photorealistic AI-driven characters and vehicles — whether you’re looking at them or not, Epic said. This opens the door to a completely new way of storytelling, Epic said. The high-fidelity simulation capabilities of UE5 are enabling an entirely new process: cinematic creation through simulation. Many of the action scenes in the demo originated with crew members driving cars around the city to capture exciting shots. The team was able to use the simulated universe to author cinematic content, like live-action moviemakers scouting a city to find the best streets to tell their story — but without the physical constraints of the real world. Above: Crossing the uncanny valley? Where the sample project Valley of the Ancient gave a glimpse at some of the new technology available in UE5, The Matrix Awakens: An Unreal Engine 5 Experience goes a step further: It’s an interactive experience running in real time that you can download on your Xbox Series X/S or PlayStation 5 today. The demo features the performance and likeness of Reeves and Moss as realistic digital humans. To achieve this, Epic’s 3Lateral team captured high-fidelity 3D scans of the actors’ faces and 4D captures of their performances in their Novi Sad studio. The open-world city environment includes hero character IO, who was the launch character for MetaHuman Creator, as well as thousands of MetaHuman agents, demonstrating exciting new possibilities for high-fidelity in-game characters at scale, Epic said.
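The "always ticking" world simulation described above — agents updated whether or not the camera sees them — can be sketched as a simple loop. This is an illustrative sketch only, not Unreal Engine's actual architecture or API; class and function names are invented.

```python
# Minimal sketch of a persistent world simulation: every agent advances
# each tick regardless of camera visibility; the camera only decides
# what gets rendered. Purely illustrative.
import random

class Pedestrian:
    def __init__(self, x):
        self.x = float(x)

    def tick(self, dt):
        # Agents keep walking even when no one is watching.
        self.x += random.uniform(-1.0, 1.0) * dt

def render(visible):
    pass  # placeholder for the renderer

def simulate(agents, camera_range, ticks, dt=0.1):
    for _ in range(ticks):
        for a in agents:          # update ALL agents...
            a.tick(dt)
        visible = [a for a in agents if abs(a.x) < camera_range]
        render(visible)           # ...but render only what the camera sees

crowd = [Pedestrian(x) for x in range(-100, 100, 5)]
simulate(crowd, camera_range=20, ticks=10)
```

The contrast is with on-demand spawning, where off-screen actors are frozen or deleted; here the off-screen crowd keeps moving, which is what makes the city's activity "consistent and believable" when the camera returns.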
AI systems drive the characters and vehicles, while procedural systems built using Houdini generate the city. Unreal Engine 5’s World Partition system makes the development of the vast environment more manageable. The movement of vehicles, character clothing, and the destruction of buildings are all simulated in-engine using Unreal Engine’s Chaos physics system. During the chase experience, because the car crashes are simulated in real time with Chaos, the same crash will never occur twice. It’s unique at every run, Epic said. The technical demo also puts previously showcased UE5 features Nanite and Lumen through their paces. In a dense, open-world city environment, UE5’s virtualized micropolygon geometry system comes into its own. The city comprises seven million instanced assets, made up of millions of polygons each. There are seven thousand buildings made of thousands of modular pieces, 45,073 parked cars (of which 38,146 are drivable), over 260 km of roads, 512 km of sidewalk, 1,248 intersections, 27,848 lamp posts, and 12,422 manholes. Nanite intelligently streams and processes those billions of polygons, rendering everything at film quality, super fast, Epic said. Above: Agent Smiths in a car chase in the Unreal Engine 5 demo. Unreal Engine 5’s fully dynamic global illumination system Lumen leverages real-time ray tracing to deliver incredibly realistic lighting and reflections throughout the interactive parts of the demo. Real-time ray tracing is also used for the cinematic element to generate the beautiful, realistic soft shadows of the characters, Epic said. Temporal Super Resolution, UE5’s next-gen upsampling algorithm, keeps up with vast amounts of geometric detail to create sharper, more stable images than before, outputting high-resolution images at a low processing cost. That brings more geometric detail, better lighting, and richer effects at higher resolutions. 
Epic said the ability to take these technologies and build vast open worlds presents thrilling possibilities as we enter the era of the metaverse. The Matrix Awakens: An Unreal Engine 5 Experience offers a glimpse at what those worlds could look like. They could be highly stylized like the environments in Fortnite — or they could look almost as real as the physical world. The Matrix Awakens: An Unreal Engine 5 Experience is not a game, but this tech demo offers a vision for what the future of interactive content could be; from incredibly rich and complex cities and environments, to photoreal, visually arresting cinematic spectacles. City details Above: Is The Matrix real? Here’s a recap on some details on the city:

- The city surface is 15.79 km²
- The city perimeter is 14.519 km long
- There are 260 km of roads in the city
- There are 512 km of sidewalk in the city
- There are 1,248 intersections in the city
- There are 45,073 parked cars, of which 38,146 are drivable and destructible
- There are 17,000 simulated traffic vehicles on the road that are destructible
- 7,000 buildings
- 27,848 lamp posts on the street side only
- 12,422 sewer holes
- Almost 10 million unique and duplicated assets were created to make the city
- The entire world is lit by only the sun, sky and emissive materials on meshes. No light sources were placed for the tens of thousands of street lights and headlights. In night mode, nearly all lighting comes from the millions of emissive building windows.
- 35,000 simulated MetaHuman pedestrians
- Average polygon count? 7,000 buildings made of thousands of assets and each asset could be up to millions of polygons, so there are several billions of polygons to make up just the buildings of the city.

Asked if it’s part of a game, Libreri said, “It’s purely a technology demo just to show what our customers can expect to be able to do when they’re using Unreal Engine 5.
And in fact, when we ship the engine, for real next year, this entire city with all the AI for the traffic, all the building blocks, every air conditioning unit, every little piece of that will be made available.” "
15,280
2,021
"Jam City unveils blockchain division and first NFT game Champions: Ascension | VentureBeat"
"https://venturebeat.com/2021/12/22/jam-city-unveils-blockchain-division-and-first-nft-game-champions-ascension"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Exclusive Jam City unveils blockchain division and first NFT game Champions: Ascension Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Mobile game maker Jam City is taking the plunge into nonfungible token (NFT) games with the launch of a new blockchain division and its first NFT game, the fantasy adventure title Champions: Ascension. The move is a big one that echoes the moves of other companies such as Ubisoft, Com2Us , and Zynga have made in recent days as they embrace Web 3 — the future of decentralized computing — and blockchain. Famous game developers such as Will Wright, Peter Molyneux, Graeme Devine , Austin Grossman, and others have been diving into NFT games as startups pioneered a multibillion-dollar market this year. 
But some projects, such as GSC Game World’s Stalker 2, have received pushback from gamers who are worried about a variety of concerns, such as environmental impact, scams, money laundering, and a focus on earning money over gameplay. In that way, this moment feels like the beginning of free-to-play games at the outset of social and mobile games. NFTs promise a new business model for things like digital collectibles by using the security and transparency of the digital ledger of blockchain to authenticate unique digital items. NFTs enable players to own their own gear in games. Jam City has been watching this space and believes it has a big opportunity to empower players with direct ownership of their characters and other advantages, said Chris DeWolfe, CEO of Jam City, in an interview with GamesBeat. “We believe NFTs have been interesting and, to a certain degree, maybe overhyped, but where they’re really going to shine is where they can be plugged into an environment where there’s utility for the NFTs,” he said. “And so fundamentally, we think that there is going to be a user behavior change. It’s happening right now, where folks want more control over their web experiences, their computer experiences.” By spending in the game, you’re acquiring an asset, and DeWolfe said you may be able to use that asset in other entertainment experiences. Or you can sell it. “And there will be a more collaborative experience between the game developers and the players in terms of the direction that the game is going to take,” DeWolfe said. “With direct input from that perspective, it’s going to be much more of a community. It’s going to be way more socially driven.
We’re going to be utilizing Discord in a lot bigger way.” The first title is a “play-to-earn” game dubbed Champions: Ascension, an adventure game set in a high fantasy universe developed for Web 3, said Johnny Casamassina, head of design at Jam City, in an interview with GamesBeat. He has been making games for 20 years, with much of it in the PC/console space. He moved into mobile games a decade ago and heads design at Jam City, and he said he has wanted to make this game for a long time. “We feel very heavily invested into Web 3 computing, and games are going to be one of the first big applications there,” he said. “We have been investing over the last nine months in creating blockchain experiences for users. One of the projects is one that we’ve been working on for quite a while.” Limited run NFTs Above: Champions: Ascension is coming from Jam City. Ahead of the game launch, Jam City will drop a limited run of 10,000 NFTs called Prime Eternals — the ultimate ascension tier in the Champions universe — starting with a whitelist-only private sale. The team has created characters called Champions that players can own in the world of Massina, which has been at peace for 1,000 years. Ruled by generations of Emperors, the people are united and entertained by its gladiatorial games in a grand arena fought by outlandish beasts known as Eternals. These alchemical creations are imbued with a primordial essence such as life, death or arcane, and possess special abilities. These essences are worshiped as deities by the people as the balance between the different Houses influences their daily lives. Players will be able to impact changes to the world of Massina with the help of blockchain technology. The 10,000 one-of-a-kind Prime Eternals NFTs will be minted just once, and holders of Prime Eternals will have the highest in-game earning potential and access to unique perks not available to other players.
These include — among other things — special gear, the ability to play an early version of the game, and access to the game’s developers via Discord, where they will have direct input over Champions’ direction and evolution via feedback and regular votes on new products and characters being introduced in Massina. Gameplay focus Above: Champions: Ascension will be an accessible title. Players will engage in strategic, high-stakes decision making, allowing them to outsmart, outmaneuver and overpower opponents via gladiatorial combat. Players will be able to train, equip and prepare Champions for deadly exchanges; forge weapons, armor and gear from the very bones of fallen opponents; battle for land ownership to build forges to let you craft imbued gear; splice the essence of a Champion with others to create more powerful Champions; form alliances and climb the leadership ranks. Later, Jam City will introduce additional Champions tiers with varying levels of in-game earning potential and access to gear. Champion NFTs can be used in the game to find, socialize and battle other players. But it won’t just be for hardcore gamers and crypto enthusiasts. “We’re shooting for all of our games to be very approachable, where you know, when you’re actually playing the game, you don’t have to download all these crazy wallets,” DeWolfe said. “There is all this friction getting into a game. But this will be as easy to start a game as any other free-to-play game.” Each character will have a different level of rarity. “These are assets that essentially just exist forever,” said Casamassina. “The Champions will be doing battle, one against another. And out of those battles, there will emerge one victor, and one loser. The Champions who rise will essentially be able to absorb the essence of the fallen champions, making them stronger.” The winners ascend to the next level of competition. The game is heavily influenced by films like 300 and Thor Ragnarok.
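The essence-absorption loop Casamassina describes can be sketched as simple game logic. This is an illustrative sketch only — the class, character names, and stat values are invented, not Jam City's actual design or implementation.

```python
# Toy sketch of the battle mechanic described above: the victor absorbs
# the fallen Champion's essence and grows stronger. All names and
# numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class Champion:
    name: str
    essence: str   # e.g. "life", "death", "arcane"
    power: int

def battle(a: Champion, b: Champion) -> Champion:
    """Higher power wins; the victor absorbs the loser's power."""
    victor, fallen = (a, b) if a.power >= b.power else (b, a)
    victor.power += fallen.power  # absorb the fallen Champion's essence
    return victor

gladiator = Champion("Aurelia", "life", 70)
rival = Champion("Morvax", "death", 55)
winner = battle(gladiator, rival)
assert winner.name == "Aurelia" and winner.power == 125
```

Each victory compounds the winner's strength, which is what drives the ascent "to the next level of competition" the article describes.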
In later versions, Jam City will make it possible for players to build structures and forge fight gear for their characters to use or sell on the player-to-player marketplace. All of these activities will enable players to expand their collections and their presences in the Champions Universe, helping to drive community engagement and creating opportunities for players to leverage gameplay, peer-to-peer sales and staking to increase the value of their Eternals and the digital goods held in their wallets over time. DeWolfe said the company will focus on web games first and then move into mobile later. Jam City will do what it can to build a metaverse to support the game, with new technologies such as NFTs, minting, staking, and trading. Moving early Above: When your hero dies, its essence is absorbed by the victor in Champions: Ascension. Jam City considers this to be an opportunity to be an early mover for a big change in games, just as it was early when the app stores and free-to-play evolved with mobile games. DeWolfe and cofounder Josh Yguado believe this will democratize the player experience and change the market again. Jam City didn’t say what tech vendors it will be using, or which mobile platforms it is targeting in addition to the web. Jam City is betting that Web 3 games will have deep gameplay, whereas the current blockchain games out there have light gameplay that was made by people from the blockchain world, not necessarily game developers. “We have a deep understanding of managing game economies, live operations, incredible graphics, great gameplay, engaging the customer, understanding customer motivations, creating utility in collection mechanics,” DeWolfe said. “We have really mastered the game collection mechanic. Blockchain games are going to be very loosely utilizing a collection mechanic.” Lots of resources Above: Your characters will evolve in Champions: Ascension. 
Jam City recently raised $350 million in funding and acquired Ludia for $165 million. Jam City’s Cookie Jam has generated more than $800 million in lifetime bookings, and Panda Pop has generated $400 million in lifetime bookings to date. It also created the role-playing game Harry Potter: Hogwarts Mystery. Back in 2003, DeWolfe started MySpace, which ushered in Web 2.0 and became one of the largest social networks in the world until Facebook overtook it. “We were at the forefront of mobile, which we saw as the next big change in computing,” DeWolfe said. “We think that blockchain is bigger than that.” DeWolfe doesn’t think it’s all vaporware. He noted that countries such as Colombia and El Salvador are embracing blockchain. Jam City has 1,300 employees now, and it will devote significant resources to the blockchain division. “To be successful, you need to have all the qualities that a great free-to-play game maker has developed over the years,” DeWolfe said. “It’s still about creating really deep, rich entertainment experiences. And that’s not going to change.” GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings.
The DeanBeat: Predictions for gaming in 2021

Above: Dave Baszucki, CEO of Roblox, talks about the metaverse.

This past year was one of the most unpredictable in all of human history. The pandemic threw off our ability to predict what would happen in the game industry. It surely messed up my predictions about where the games industry would go. Game companies had a record year in 2020, but I never would have predicted that in March, as the world seemed headed into an unprecedented global recession caused by the coronavirus. But people turned to gaming as a solace, and the whole industry not only survived. It prospered. Market researcher Newzoo is predicting that the entire game industry — PC, mobile, and console — will grow 19.6% to more than $174.9 billion in 2020.
With two new consoles launched in November, the industry will likely grow again in 2021, helping gaming stand apart from the crowd. This year, we saw a huge surge in venture capital investments in game studios, and a big wave of acquisitions as well, with Microsoft buying ZeniMax Media (Bethesda) for $7.5 billion and Embracer Group announcing it had bought 12 game studios in one day. A continuation of this growth is the easiest prediction to make. But the pandemic has changed the predictability of the industry in many ways. Esports will continue to struggle as it moves forward in a digital-only format, and it’s not clear when we will ever be able to go to live events again. Game conferences are one of the places where we can catch up on future trends, but so many of those events have been canceled or gone digital (such as our GamesBeat/Facebook Summit on Driving Game Growth and Into the Metaverse on January 26 to January 28). But we can expect games to continue to dwarf other forms of entertainment. Movie and TV production has been hobbled, and cinemas are all but shut down. With that limited horizon, I’m going to boldly make my bad predictions once again. For the usual comparison and embarrassment, here are my predictions for 2019, 2018, 2017, 2016, 2015, 2014, 2013, and 2012. At the bottom of the story, I’ve also given myself grades for the predictions I made a year ago for 2020. Lastly, I’ve been very publicly calling for ideas on social media about my predictions, and I appreciate all of the followers and readers who have pitched ideas to me. Thank you for your help, and Happy New Year! May it be a better one than the one we just endured. 2021 predictions Above: Tim Cook, CEO of Apple, is a big advocate for privacy.
1) Apple’s IDFA change will hobble targeted advertising for iOS games Apple is on a quest to put user privacy above all else. But that means it will no longer allow advertisers to extract user data to do targeted advertising. That’s what Apple’s retirement of the obscure Identifier for Advertisers (IDFA) is all about, and the game industry is caught in the middle of this fight between Apple and advertising companies. Apple warned that the change in its opt-in rules for IDFA usage was coming, and it planned to launch the change in mid-September. But Apple postponed it after the ad, app, and game industries warned about the disruption it would cause. The reprieve was only temporary, though, and Apple is moving ahead in early 2021 with plans to require users to specifically opt in if they want to be tracked for advertising purposes. Without proper explanations of what the change means for app pricing, most people are opting out. And that could cause a big disruption in iOS games, which generated perhaps a quarter of the industry’s $174.9 billion in 2020. Since the effect is so unpredictable, some mobile marketing companies are raising alarm bells, but game companies are saying it may not be a big deal. I predict it will have different effects on different players in the industry. Eric Seufert, monetization expert and the owner of Mobile Dev Memo, believes both Google and Facebook will be impacted. He thinks that those companies might better oppose Apple by noting how consumers could lose access to free apps and games that advertising allows them to enjoy. He thinks highly monetized strategy games, role-playing games, social casino games, and other titles that need to reach very specific customers will suffer, while casual games and games that naturally go viral on their own, without the need for targeted ads, should do well. He thinks there will be little impact on subscription apps and those that are only moderately dependent on ads or in-app purchases.
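Seufert’s differential-impact argument comes down to simple arithmetic: targeted impressions command a far higher eCPM than untargeted ones, so ad revenue falls roughly in proportion to how many users decline the prompt. Here is a minimal sketch of that effect, in which every number is a hypothetical round figure chosen for illustration, not a measured industry rate.

```python
# Back-of-the-envelope sketch of why the IDFA change hits ad-dependent games.
# Every rate and eCPM below is a hypothetical round number for illustration,
# not a measured industry figure.
def ad_revenue(impressions: int, opt_in_rate: float,
               targeted_ecpm: float, untargeted_ecpm: float) -> float:
    """Revenue when only opted-in users can be shown targeted ads."""
    targeted = impressions * opt_in_rate * targeted_ecpm / 1000
    untargeted = impressions * (1 - opt_in_rate) * untargeted_ecpm / 1000
    return targeted + untargeted

# Before: IDFA available for everyone (treat as a 100% opt-in rate).
before = ad_revenue(1_000_000, 1.0, targeted_ecpm=10.0, untargeted_ecpm=3.0)
# After: suppose only a quarter of users tap "Allow" on the tracking prompt.
after = ad_revenue(1_000_000, 0.25, targeted_ecpm=10.0, untargeted_ecpm=3.0)
print(before, after)  # 10000.0 4750.0
```

Games that depend on reaching very specific customers sit at the high end of that targeted-versus-untargeted eCPM gap, which is why Seufert expects them to suffer most while casual and viral games feel less of the squeeze.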
I worry it could trigger a recession in games and cause the fastest-growing part of the industry to stall. That said, I believe this is a very unpredictable but important issue that is far too opaque. For the opacity, I blame Apple. It might just come out and say it wants to change the way that games become successful on the app store, but that might mean more legal trouble for Apple. But one thing is clear. Ignore the IDFA change at your peril. 2) Epic Games may lose its legal case but win a wider war Above: Epic Games launched the Free Fortnite Cup with Apple as the villain. The sad thing about the IDFA is that Apple is judge and jury, and the industry can’t do much about it. And that reminds me of Epic Games’ quixotic antitrust case against Apple. Epic Games has assembled good evidence, and it is a bold strike to fight back against Apple’s control of mobile gaming. At the cost of getting its own Fortnite game booted off the App Store by Apple, Epic Games is doing a big favor for game developers in standing up to Apple and trying to get rid of its 30% royalty cut on all App Store sales. But antitrust law is antiquated, and it doesn’t necessarily protect a company like Epic Games when a platform owner like Apple decides to cut it off. If a judge decides that Epic has plenty of other choices where it can take Fortnite without much direct harm to consumers, then Epic Games could lose the legal case even though it has the moral high ground. But if Apple does everything it can to crush Epic Games as it has so far, Apple could lose the wider war. Regulators could change their policies or Congress could amend antitrust law and curtail Apple’s power. But the game industry could also aggressively seek to escape the platforms and the app stores that the tech giants run. They could support HTML5 games such as Facebook’s Instant Games or Snap’s messaging games or Nvidia’s GeForce Now that use the open web to circumvent the app stores. 
By creating downloadless game experiences with HTML5 or royalty-free cloud games, game companies could bypass the gatekeepers and escape the rules of the tech giants. The open web could be a viable path to an industry that doesn’t have to pay the platform tax. If regulators or the rest of the industry force Apple to become more open, then Epic will have accomplished its goals, even if it doesn’t reap benefits for itself. In the long run, the game industry and its platforms could become more open, and we could thank Epic’s Tim Sweeney for that. 3) Game IPOs will continue and change the game industry Above: Unity Technologies has 1.5 million monthly active users. Because gaming has done so well in the pandemic, more investors have noticed the industry and are moving money into it. One way is through initial public offerings (IPOs), and another is through special purpose acquisition corporations (SPACs). Game engine maker Unity went public and is now valued at $40 billion, far more than the $17 billion value of the larger rival Epic Games at its last funding in 2020. Now Unity is too big to be acquired by most other game companies. Skillz went public via a SPAC, and Roblox and Playtika are expected to follow up with IPOs soon. These companies are exploiting a historic window of opportunity that will enable them to stay independent. And that means that they won’t be acquired anytime soon by tech giants or the biggest game companies. And from our first two predictions, we can understand some of the danger of companies becoming too big, either through their own great business ideas or by acquisitions. I don’t want to sound like a free-market-at-all-costs advocate. But if big game companies acquired a bunch of the big game developers, that could stifle innovation and creativity for a time.
With the IPO window open, there’s still a way for the public to get in on the action and reward the best game makers with a market value that is inflated in the public markets and makes it impractical for another big game company to try to take them over. That’s good, as I don’t want to see all the good game developers get acquired. IPOs are the market’s way of saying that if you create something great, you don’t have to sell it to a big corporation to make it pay off. You can sell it to all of us, and keep control of it. Don’t get me wrong. Money pouring into games instead of into other industries is a good thing. That’s happening on the level of game startups, and it’s good for the owners of mid-sized companies, and it’s good for the owners of the newly public companies. Hopefully, the markets will stay strong and it will be good for public stock investors as well. 4) Game streaming and movie streaming will get hitched Above: GamePass with xCloud. The big Hollywood companies — and their owners — are all pouring money into the streaming of movies and TV shows in a bid to ward off Netflix. But Netflix itself is moving into games, where engagement with an intellectual property can be far higher and more lucrative. We have seen Apple, Disney, NBCUniversal, HBO, and more move into movie streaming. At the same time, we’ve seen Google, Microsoft, Sony, Amazon, Nvidia, Shadow, and Facebook all move into the streaming of cloud-based games. Microsoft has launched its Xbox Game Pass subscription in the hope of becoming the Netflix of gaming. It may not make tactical sense, but big companies will see the strategy that they can pursue to become even bigger and lock up more users. In the words of former MIT Media Lab director Nicholas Negroponte, “bits are bits.” You can stream games or stream movies and make money from both.
Surely, someone in this vast marketplace will see that the convergence of technologies and the economies of scale could favor a company that brings game streaming and movie streaming under one roof. Disney could gain a lot of subscribers if it bought Electronic Arts and made its games available as part of the streamed Disney+ service. Strategically, such a service could be a way to aggregate consumers and concentrate media power into the hands of a single company with a single subscription. But this requires a skill that the biggest tech and streaming companies have not mastered: understanding gaming and allowing game companies to be their best. Let’s just hope that broadband technologies such as 5G networks will enable us to stream so much entertainment into homes. 5) The metaverse will begin to emerge as social gaming grows Above: Roblox is holding events related to Ready Player Two by Ernest Cline. Such a company as we’ve envisioned in the previous prediction could become so strong that it could launch the Oasis, a metaverse controlled by a single company, offering gaming, movie, TV, and other entertainment services so that you’ll never have to leave it. We desperately need a metaverse to escape the Zoomverse that we have all been stuck in during the pandemic. We need something that is more immersive and enthralling than video. Realistic or fantastic game worlds can deliver that. While Ready Player Two has been criticized by many observers, I would love to hang out in the worlds of J.R.R. Tolkien, as envisioned in Ernest Cline’s latest book. The metaverse should offer a rabbit hole of fun for everybody, whatever your particular preferences are. And there are many ways for it to emerge. Netflix could launch a vast game world full of its entertainment properties. Epic Games or Roblox or Microsoft’s Minecraft could create a metaverse for their fans. 
Every company that has amassed an audience has to make that audience more engaged and more social, and connecting fans in a world — preferably a game world — they never have to leave is my expectation for a real metaverse, not one that tries to trick us by being a metaverse in name only. A lot of companies will try and fail to create what author Neal Stephenson envisioned with Snow Crash back in 1992. I’d like to see it succeed soon (and that’s why we’re holding our own GamesBeat Summit: Into the Metaverse event on January 27-28). It will take years to build and perfect the metaverse, but let’s start it in 2021. I realize it will take time, but we need this. For our own mental wellness and every other reason as well. 6) God of War: Ragnarok will remind us of Sony’s greatness Above: God of War: Ragnarok is coming sometime. Hopefully in 2021. At The Game Awards, Sony showed a small teaser for the next big exclusive game for the PlayStation 5, and it will be God of War: Ragnarok. The sequel to 2018’s winner of many Game of the Year awards will hopefully debut in 2021. Cory Barlog, the game director at Sony Santa Monica, is busy at work trying to top his previous creation. But this game is much more than just a sequel. It’s a reminder that Sony believes in giant single-player games with a shitload of storytelling. Exclusives like God of War made the PlayStation 4 stand out and pull ahead of other consoles in the last generation, and Sony still has many studios working on such games for the PS5, which is off to a good start. Barlog took what might have been a weak God of War 4 and turned it into a father-son tale that was more widely appealing. This next God of War title will have a heavy burden. It has to show that big, exclusive single-player narrative games still make sense when triple-A titles are under attack from free-to-play games that last forever.
Sony has shown more than any other game company that it still believes in these narrative masterpieces in the face of competition from year-round franchises such as Call of Duty and FIFA. 7) Halo: Infinite will put Microsoft back in the game Above: Halo Infinite is still getting ready for Master Chief. We haven’t seen shipment numbers yet, but it certainly feels like Sony had a more balanced launch for the PlayStation 5, with good exclusives such as Spider-Man: Miles Morales and Astro’s Playroom to stir demand. Microsoft showed up with Xbox Game Pass and lots of compatible games, but the launch lineup was underwhelming. The missing part of the console launch was Halo: Infinite, which got a poor reception in its preview. 343 Industries and Microsoft shook up the team’s leadership and brought in former Bungie leader Joseph Staten. Now the game will ship in the fall of 2021, so long as there isn’t another delay. Microsoft has always tried to align a good launch lineup with its console launches. It has also tried to launch new systems with new Halo games, but it has succeeded in doing that only once, with the launch of the original Xbox. With Xbox Game Pass available and a good strategy on backward compatibility, the company can focus on getting lots of units into the market even without a tent-pole title. By the fall of 2021, however, it will need a system seller to keep pace with the PS5. Titles from Microsoft’s acquired studios will only begin to show up around that time, and the development job should become simpler, as making titles that run on both generations — Xbox One and Xbox Series X/S — gets easier with experience. I’m hoping Microsoft will use the time to double down on content for Halo: Infinite multiplayer, esports tournaments, and bigger marketing plans for what could be its biggest Halo yet. 8) Nintendo will unveil the Switch successor in 2021 Above: Nintendo Switch.
The Wall Street Journal reported that Nintendo was readying a successor to the Nintendo Switch in 2020. But Nintendo didn’t announce the system, and it has focused on cranking up production of the Switch and the Switch Lite. At some point, however, sales of the PS5 and the Xbox Series X/S will start to eat away at potential Switch buyers. If we have something like an Electronic Entertainment Expo (E3) in 2021, that would be a good time for Nintendo to announce a next-generation system. Developers could get a head start on developing games for the system, and Nintendo could launch it in the spring of 2022, about five years after the launch of the original Switch. I’m not going on insider information, so this is speculation. But it would make sense for Nintendo to stay away from the launch cycles of its console rivals and pursue a strategy of being an alternative to Microsoft and Sony. Nintendo definitely found a broad niche with the Switch, as a hybrid machine that is both playable on the TV and as a portable device. If Nintendo focuses on that niche and expands it further, it could withstand the forces around it such as cloud gaming, multiplayer universes, and mobile gaming. 9) Regulators will come after both games and game platforms Above: Star Wars: Battlefront II. Gaming has become front and center in the entertainment universe during the pandemic. But that means it will draw the attention of governments and regulators. China has cracked down on games with censorship, and slowed the approval of new mobile games to a trickle. It is removing games that don’t have proper registration. It has put limits on how much minors can play out of concerns about addiction. The rest of the world’s regulators won’t be as harsh, but they will pay more attention to games and their effects on society. I wouldn’t be surprised if more countries ban loot boxes as illegal gambling or regulate them as entertainment for adults. The game industry is walking a delicate tightrope.
Campaigns such as #PlayApartTogether, aimed at getting people to social distance during the pandemic, are broadly appealing. But free-to-play games that have pay-to-win mechanics, aggressive monetization that can prey upon the young or people with addiction problems, privacy-invasive advertising, or gambling-like hooks could prompt regulators to crack down. That’s all in the name of protecting people from game companies. But as we’ve seen with Apple and Epic’s clash, regulators may also pay attention to the platforms that host games and whether they’re enabling fair competition. And I think we would like to see the platforms create an open metaverse to host the games of the future. If they don’t, the crackdown will come. It’s time for the game industry to get in front of this problem, aggressively. 10) Riot Games will establish Valorant as an esport, and other games will follow Above: Valorant is a 5v5 shooter game. The Counter-Strike series has been a staple of esports for two decades, most recently with Counter-Strike: Global Offensive. But Valve hasn’t invested much in the esports ecosystem, in contrast to Riot’s efforts to establish a permanent esports ecosystem around League of Legends. Riot will now leverage that ecosystem to establish its second major esports game: Valorant. It still has a long way to go to catch on with the masses of gamers. But esports pros have been switching over to Valorant from CS:GO. Valve will have its hands full trying to reinvest in its game as a counterattack, but Riot is a far bigger company with 3,000 people. It can afford to invest in Valorant, but the key will be to bring new esports fans into the fold, rather than just stealing the audience from CS:GO. For the past few years, esports has grown dramatically in terms of its audience, but it still needs fans to spend money in order to generate profits the way that traditional sports teams can do. That’s hard to do while we’re in a pandemic and physical events aren’t possible.
But it is possible to grow a huge digital audience and ramp up the fan base for the day when physical events can happen again. I hope somebody knocks it out of the park because we could sure use another billion-dollar esports game. 11) Game startups will continue to thrive and generate huge game ecosystems Above: Griffin Gaming Partners’ focus. During 2020, more than 30 game-focused venture capital funds set up shop to invest in game companies. Game investment site InvestGame estimated that more than 100 game studios received funding in 2020. Combined with acquisitions, the deals led to more than $20.5 billion in transactions in the first nine months of 2020. When I started at VentureBeat 12 years ago and started GamesBeat, there were no such venture capital funds. Traditional VCs slowly picked up game-savvy investors, and the specialty funds evolved out of that as game investors and entrepreneurs became successful and plowed the money back into new funds. March Capital is on its second game-oriented fund with a $60 million raise for its March Gaming Fund, and Griffin Gaming Partners has raised $235 million. That new capital has barely begun to work, even though it feels like a couple of fundings per week is a bit much. What I enjoy seeing is the economic benefit of the job creation that happens alongside these investments. If you look at Turkey, for instance, a mobile game industry took root there with the success of Peak Games and Gram Games. Zynga bought those companies for enormous sums, and some of the people who got their first jobs with those companies have now splintered off into their own startups. Game VCs are investing in those studios, and Turkey is now a hot spot for games, with a lot of economic goodness resulting from that. Countries such as the U.S., China, the United Kingdom, and Canada still have the strongest ecosystems, but there’s no reason for them to monopolize all the jobs.
A strong game ecosystem can arise anywhere now, and the game VCs are the fertilizer for that growth. These small studios will grow, launch hits, and then get acquired by the big publishers over time, starting the cycle over again. Lastly, here is my scorecard for my 2020 predictions from a year ago. 2020 Scorecard 1) The Last of Us: Part II will be my favorite game of 2020 Letter grade: A+ This game didn’t turn out anything like I had expected a sequel to be. Naughty Dog’s The Last of Us was my favorite game of all time. But this title took what I liked about this 2013 hit — the characters and the special relationship between the father figure and the daughter figure of the previous game — and destroyed it. Then The Last of Us Part II proceeded with a script that was the logical consequence of Joel’s decision in the first game to lie to Ellie about why she didn’t need to be sacrificed to develop a plague vaccine. The game introduced us to new characters in the universe of the post-zombie apocalypse, and it told a story about revenge and redemption that I totally didn’t expect. Even so, it was a moving story, with compelling characters, flawless execution on graphics and gameplay, and everything else I expect from a Naughty Dog production. It made a statement about violence through a story in an extremely violent video game. It was emotionally exhausting to play it, and it wasn’t what a lot of people consider to be fun. But I’m glad Naughty Dog created it and that I played it through with my daughter. 2) Sony’s PlayStation 5 will be a smashing success Letter grade: A We don’t yet know how many units Sony has sold for the PlayStation 5, which debuted on November 12. But we know that it likely outsold Microsoft’s Xbox Series X/S. (It really is just a matter of which company did the best job lining up its supply chain.) If I were to gamble, I’d say that Sony won, with a better lineup thanks to the likes of Spider-Man: Miles Morales and Astro’s Playroom.
While Microsoft made some big strides in matching Sony when it comes to first-party studios, Sony had its studios in place for a longer time. It managed to bring some big projects home at the same time as the launch, and that made a big difference in the eyes of gamers. Sony outsold Microsoft by more than 2-to-1 in the last generation, and it’s going to be hard for Microsoft to steal away those gamers. This console war is Sony’s to lose. 3) The Xbox Series X will also be a big success Letter grade: B Microsoft had everything going for it when it readied the launch of the Xbox Series X/S. It had two different consoles to target two different types of buyers: the hardcore spenders and the budget-conscious fans. It lined up a lot of new studios with acquisitions. But nothing crossed the finish line in terms of big games that could shine on the new console. The biggest game of all, 343 Industries’ Halo Infinite, was delayed a year until the fall of 2021. Microsoft showed up without a major exclusive to make its console stand out. But it did show that its Xbox Game Pass subscription had grown quite valuable in the eyes of consumers, and it also made it easy for fans to upgrade to the new machine by making its Xbox One games compatible with the Xbox Series X/S. With those moves, Microsoft will hang on to its own hardcore base. Microsoft’s fans will have to be patient as they await big titles and new games coming from Microsoft’s acquisitions, however. 4) Fry’s Electronics will shut down, and so will many video game stores Letter grade: C Fry’s Electronics is definitely a dinosaur from another era. It should have become dominant in the age of big box retail, but the chain never expanded that aggressively. And yet somehow, the chain is holding on. The company closed another big store in Campbell, California, in addition to one in Palo Alto, California. But like other big box retailers, Fry’s has been Amazon’d. It’s like the walking dead.
But for some reason, it’s still alive, prompting my C grade on this one. With the coronavirus still running rampant, big retail’s days are numbered. Most shoppers report that Fry’s stores are bereft of merchandise. They’re big empty shells. It’s sad, as Fry’s Electronics, which grew up with groceries in one aisle and memory chips in another, is a Silicon Valley institution. I’m not expecting it to be around much longer. GameStop isn’t faring much better, with 462 stores closed in 2020. 5) Nintendo may reveal new hardware, but won’t ship it in 2020 Letter grade: C I scored an F when it came to whether Nintendo would unveil new hardware to replace its successful Switch. But I scored an A in noting that Nintendo was not likely to launch said unannounced console in 2020. Nintendo should be in no rush. It launched its successful console-handheld hybrid, the Switch, in March 2017. And now it can benefit from being off the cycle of Microsoft and Sony, which both launched new machines this year. While the PlayStation 5 and Xbox Series X/S were in short supply this holiday season, Nintendo probably cleaned up with plentiful supplies of the Nintendo Switch. 6) Amazon, Facebook, and Microsoft will join Google in launching cloud gaming services Letter grade: A Cloud gaming has come a long way since OnLive gave up the ghost. Google launched its cloud gaming service Stadia in November 2019. But it had a very slow launch, and that left the door open for rivals. In this prediction, all three of the rivals came through with their own cloud gaming service launches. Microsoft debuted Project xCloud; Facebook did a small launch of its cloud gaming service, which evolved from its acquisition of startup PlayGiga in Spain; and Amazon launched Luna. On top of that, Nvidia finally formally launched its GeForce Now cloud gaming service.
7) Big companies and VCs will continue to invest in game companies Letter grade: A+ While the pandemic made 2020 miserable for many of us, gaming prospered. And game venture capital firms multiplied. March Capital launched a $60 million gaming fund, Griffin Gaming Partners launched a $235 million fund, and by the end of 2020 we had more than 30 game VC funds investing in games around the world. InvestGame, which tracks game investments, said more than 100 game studios were funded in the first nine months of 2020. And if you add the money raised together with the acquisitions, the total value of deals in 2020 exceeded $20.5 billion, according to InvestGame. I aced that prediction, but something else happened that I didn’t expect. Gaming prospered in the pandemic as people turned to games as a distraction and for remote socializing. That opened a window for initial public offerings and SPACs (special purpose acquisition corporations) for game companies. Unity went public and saw its market value rise to $52 billion. Skillz went public via a SPAC, while Roblox and Playtika filed to go public. 8) Esports companies will continue to soar in viewers, valuations, and acquisitions — but not profits Letter grade: A This prediction proved accurate, but not in the way I expected. Esports companies were hit with a broadside when the pandemic arrived and led every company to cancel their physical events. But the esports industry recovered as the audiences shifted to watching matches online, in a digital-only format. Riot Games went ahead and launched Valorant in the pandemic, and it kicked off esports tournaments for the anti-Counter-Strike game by the end of 2020. Quantum Tech Partners still expects esports to generate a lot of deals and acquisitions going forward, and new esports holding companies have emerged to acquire esports properties. And yes, nobody is really bragging about the buckets of profits they’re making from esports yet.
9) VR will have its biggest games yet, but will continue to struggle

Letter grade: A

We had some great games debut in 2020 on virtual reality platforms. Facebook launched its Oculus Quest 2 headset, and Respawn Entertainment's Medal of Honor: Above and Beyond and Valve's Half-Life: Alyx, the showcase title for the Valve Index, were among the triple-A games that debuted for VR during the year. But the consumer market for VR continues to struggle. VR arcades were wiped out due to the pandemic. Facebook is doing its part by acquiring struggling VR studios and launching new hardware at lower prices. But the enterprise is keeping VR going, as full immersion is extremely valuable to companies that are trying to train and educate their personnel. Let's hope that VR hangs in there until the masses truly arrive.

10) Augmented reality glasses will become more practical

Letter grade: D

There was some small progress on augmented reality this year, but not enough to warrant a good grade on this prediction. Apple didn't announce its plans to go into this market. And while Facebook said it will launch AR glasses in 2021, it didn't actually introduce any new hardware in 2020. We still expect great things from companies that are investing in the tech, such as Qualcomm and Niantic. But 2020 wasn't the year for AR.

11) Regulatory forces will gather momentum

Letter grade: A

The Federal Trade Commission has been investigating game loot boxes and microtransactions for deceiving consumers and spawning addictive gambling-like behaviors. Regulators didn't act against game companies this year, but the concern about regulation is growing. Sheldon Evans, an assistant professor of law at St. John's University in New York, wrote a paper on how states should crack down on loot boxes as a form of gambling. The state of Washington has begun treating social casino games as leading to possible gambling addictions.
Congress went after the big tech companies for antitrust violations, but gaming could be the next target. Add to that China's own crackdown on making sure games are registered — and forcing Apple to delete tens of thousands of games that weren't in compliance — and you can see the effect that governments around the world are having on gaming.

12) Subscription gaming will gather steam

Letter grade: A

Microsoft made good headway toward its goal of creating the "Netflix of games." It has moved fast to add premium games to its Xbox Game Pass Ultimate subscription service. And Game Pass is turning into a juggernaut. In September, the company reported that subscriptions grew 50% to 15 million in six months. Apple Arcade now has a full year under its belt, and Stadia does as well. Consumers are going to have a lot of choice when it comes to subscription services, and that's a good thing.

13) Intellivision will be the wild card of 2020

Letter grade: F

Intellivision delayed the launch of its Amico family game console from October 2020 to April 2021. And with that, it skewered my prediction about a 2020 wild card launch. I'm still expecting good things from Tommy Tallarico's retro project, but it had better live up to its family focus more than ever, now that it is coming out after the PS5 and the Xbox Series X/S, which should be in plentiful supply by April.

GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings.
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2014
"The Last of Us being turned into a movie and the video game movie issue | VentureBeat"
"https://venturebeat.com/community/2014/03/07/the-last-of-us-being-turned-into-a-movie-and-the-video-game-movie-issue"
"The Last of Us being turned into a movie and the video game movie issue

This post has not been edited by the GamesBeat staff. Opinions by GamesBeat community writers do not necessarily reflect those of the staff.

Well this is odd. I had planned to make a post on video game movies, and now The Last of Us has been announced as being turned into a movie. Perfect timing, I guess. And my reaction to that is…well, I'm kind of terrified. Don't get me wrong; I totally want people to experience The Last of Us. Everyone should. It's by far one of my favorite pieces of art ever, not just among video games. I do wish that people would spend the time to learn and play video games, because The Last of Us is meant to be a video game and nothing else. And right there is the issue with video game movies. They were never meant to be movies, especially considering how unlike any other medium video games are; that interaction with the controller is just as important as the writing, gameplay, acting, whatever.
That interaction is especially crucial for something like The Last of Us, where it makes you feel like Joel; it makes you feel the tension he feels, and it makes you feel even more for Ellie. Beyond that, it's hard for me to imagine anyone but Ashley Johnson and Troy Baker playing their respective roles; they are both too perfect, especially Ashley. I have my doubts that any other actress could get Ellie's quirks down as well as her, especially considering the character is a part of her and not just Neil Druckmann's writing and direction. Speaking of Neil, I would love for him to direct the movie, but I realize that probably won't happen, considering he is likely to be busy with another project at Naughty Dog relatively soon. Overall, I just can't imagine other people taking the reins of this story and crunching it down into a shorter time frame; it was meant to be a 15-20 hour experience, and that makes the whole thing feel weird to me. I hope (as that is all I can do) that they make something really amazing for everyone to experience (not just the fans of the game) in a 2-3 hour span.

The whole video game movie issue is still very strange to me. My overall preference is that we just not make them. There isn't a reason to make them besides money and more exposure for people who otherwise wouldn't play games. It feels cheap and somewhat like selling out. I'm still waiting for the day when people pick up and play games just like they might read a book or watch a movie, but I fear that day will never come because of the barrier to entry that video games have. Gaming has a learning curve even for the simplest of games, whereas a movie or book has it all there for the audience; they don't have to do a single thing to make the experience move forward.
Overall, the only real reason for video game movies' existence (besides money) is for games to get more exposure to a wider audience, which I'm personally okay with IF the game is adapted well. And that is the real issue: how does one adapt such a unique experience into a much, much shorter one? The unfortunate answer: it's near impossible. I wish I had a better answer.

URL: http://www.deadline.com/2014/03/screen-gems-takes-on-sony-playstation-vidgame-adaptation-the-last-of-us/ "
2023
"Data science vs. data analytics: Key comparisons | VentureBeat"
"https://venturebeat.com/enterprise-analytics/data-science-vs-data-analytics-key-comparisons"
"Data science vs. data analytics: Key comparisons

Data science and data analytics are two overlapping and complementary functions within the data department of a modern enterprise. Data science, however, is more specifically involved in creating systems by which large — and often unstructured — datasets are used to drive machine learning (ML) capabilities and therefore to inform predictive and prescriptive analytics and processes. Data analytics, by contrast, is more involved in reporting or presenting more traditional, descriptive operational data or results for use by professionals in other departments within the business.
Data science and data analytics: Clearing up the confusion

For example, a data science department might incorporate data and write algorithms to integrate real-time social media sentiment, geographically based economic information, supply availability and costs, and customer demand to create detailed revenue and profitability forecasts and resource allocation models. Meanwhile, a data analytics department might focus on providing visualization tools to help business analysts in finance and other operational departments. Data scientists may write algorithms that data analysts use to provide dashboards and other visualization tools for nontechnical users to access and use organizational data.

There is some confusion about the difference between the terms because they have both been used in broader and more specific senses, and therefore are sometimes used variably within different organizations. When "data science" is used broadly, it represents the overall function of gathering, organizing, crunching and presenting data; in that context, data analysis is just one stage of the process pipeline. When "data analytics" is used broadly, it represents the analysis of data generally, with data science being one particularly rigorous and mathematically oriented subset within the discipline. The Cleverism job site provides an organizational chart depicting how the two functions may be positioned within a data department. Both require business and statistical understanding, while a data scientist also especially needs strong programming skills, and the analytics practitioner needs strong communications skills. The University of Wisconsin provides more detail on what is required for each role and function.

Data science vs.
data analytics: Similarities and differences

Similarities between data science and data analytics:
- Both disciplines involve the extraction of key business insights from data.
- Both disciplines require the conversion of insights into more usable or understandable forms.
- Both disciplines require a combination of programming and statistical skills applied with significant understanding of the business.

Differences between data science and data analytics:
- Data science is more involved with newer, larger, more complex and unstructured datasets (that is, incorporating more real-time and external data), while data analytics primarily makes use of more traditional, operational data.
- Data science tends to use the artificial intelligence (AI) capabilities of ML, whereas data analytics is based more on enhancing the traditional reporting of operational business results.
- Data science requires more extensive programming and statistical skills, whereas data analytics requires more communications and collaboration skills for understanding and responding to the requirements of nontechnical users in other departments.

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
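To make the distinction concrete, here is a minimal, illustrative sketch in plain Python (the revenue figures are invented for the example): the first half is the kind of descriptive summary a data analyst reports, while the second half fits a simple predictive model of the sort a data scientist would build.

```python
# Illustrative only: invented monthly revenue figures.
revenue = {"Jan": 100.0, "Feb": 110.0, "Mar": 121.0, "Apr": 133.0}

# Data analytics: descriptive reporting on what already happened.
total = sum(revenue.values())
average = total / len(revenue)
print(f"Total revenue: {total:.1f}, monthly average: {average:.1f}")

# Data science: a predictive model (here, ordinary least squares on a
# time index) that forecasts the next month instead of summarizing the past.
xs = list(range(len(revenue)))  # month indices 0, 1, 2, 3
ys = list(revenue.values())
x_mean = sum(xs) / len(xs)
y_mean = sum(ys) / len(ys)
slope = (
    sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
    / sum((x - x_mean) ** 2 for x in xs)
)
intercept = y_mean - slope * x_mean
forecast_may = intercept + slope * len(xs)  # predict month index 4
print(f"Forecast for May: {forecast_may:.1f}")
```

Real data science work would of course use richer models and libraries, but the contrast is the point: analytics describes what happened, while data science predicts what happens next.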
2021
"The data privacy Cold War is here. Which side are you on? | VentureBeat"
"https://venturebeat.com/2021/03/14/the-data-privacy-cold-war-is-here-which-side-are-you-on"
"Guest: The data privacy Cold War is here. Which side are you on?

Apple and Facebook have entered an all-out Cold War in the name of consumer data privacy. The battle started when Apple announced it will soon require users to opt in to personal data tracking. Facebook, which makes money from that tracking, took out full-page ads in major newspapers condemning the move. Apple CEO Tim Cook fired back in a recent speech, rebuking companies that gather as much data as possible and warning of dangerous consequences. Both companies have put a stake in the ground, and the impact will be felt across the tech and business worlds. Meanwhile, conversations about data privacy are going mainstream. WhatsApp users expressed outrage when they had to accept new privacy terms or lose the app, and data privacy bills are gaining momentum in state legislatures.
All of this means time is up for the companies that have sat on the sidelines of this debate until now. Every tech company has access to user data, and each one now must decide which side of the data privacy war they’re on: the one that collects and exploits consumer data, or the one that respects and protects data and the users it belongs to. Prioritizing consumer data privacy doesn’t always mean a company must overhaul its policies. Rather, it’s about communicating those policies to consumers in a way they can understand and holding internal teams accountable to them. Privacy policies should pass the user test, not the lawyer test Every company that collects and shares consumer data needs a version of its privacy policy that users, not corporate lawyers, can understand. It seems simple, but privacy policies are often so long and stacked with legal jargon that users scroll through without absorbing a word. A digestible privacy policy should articulate what data the company believes it owns and what belongs to the consumer. It should be clear, jargon-free and understandable without a dictionary. Women’s health app Clue does this well, outlining exactly what data it collects from users and why. Especially when users are sharing data as sensitive as health information, this transparent communication fosters consumer trust. Last year, 91% of companies with very mature privacy practices – which include transparency – saw increased user trust and loyalty. Another benefit of a user-friendly privacy policy is that it can help a company’s leaders decide whether to change their data privacy practices. If leaders aren’t comfortable telling consumers what the company is doing with their data, it’s time to rethink those practices. 
Data privacy “road signs” can help users navigate In addition to a user-friendly privacy policy, companies should give consumers privacy “road signs” to help them navigate the confusing landscape of data collection and make informed decisions about what data they’re willing to share. There’s a misconception that Facebook is under scrutiny for using consumers’ data to target ads, but in fact it’s because the company historically hasn’t given its users any of this signage. Its mass collection of user data without explanation of how or why has hurt consumers’ trust in its brand. Data privacy road signs go beyond a bare-bones privacy policy, giving users context that helps them decide what data they’re comfortable sharing. For example, a company can tell users what it doesn’t do with their data. When it comes to an abstract, complex topic like data privacy, people are often better at understanding what they’re not comfortable with. An organization like Signal does that work for users by outlining that it can’t access their messages and “does not sell, rent, or monetize your personal data or content in any way — ever.” Good privacy signage also tells users what kinds of partners and third parties a company shares data with and why. Twilio clearly communicates that it shares some user data with other companies to improve users’ call quality. These clear guidelines build user trust and are a compelling reason for consumers to choose one product over another that offers less clear data privacy signage. Make data privacy part of company culture Companies should communicate their data privacy practices early and often to users, but upholding those practices is an inside job. Leaders can take steps to ensure their company culture encourages employees to act as respectful data custodians. One of those steps is rewarding employees or teams who do their jobs well with the least consumer data. 
For example, leaders can invite a team that exceeded its goals while reducing data access to share how they did it and what they learned at an all-hands meeting. A company can also implement tokenization, which swaps out sensitive data with digital "tokens" — like poker chips or arcade tokens — that would be useless if intercepted or leaked. The data itself moves into a private vault that the company can't access. These changes foster a culture that depends less on data access and encourages creativity. Finally, leaders can designate an executive privacy sponsor who advocates for user data privacy and holds leadership accountable to follow company privacy guidelines. Apple and Facebook have thrown down the data privacy gauntlet, and it's time for all companies to pick a side. In the coming years, consumers will flock to companies that respect and protect their data. Those that are transparent and encourage good internal data privacy practices will gain more trusting and loyal users and in turn, stronger businesses.

Frederick "Flee" Lee is Chief Security Officer at payroll and benefits service provider Gusto. "
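The tokenization approach described above can be sketched in a few lines. This is a toy illustration under stated assumptions: the vault here is just an in-memory dict and the token format is invented, whereas a production system would use a hardened, access-controlled vault service.

```python
import secrets

# Toy tokenization vault. In a real deployment this mapping lives in a
# separate, tightly access-controlled service, not in application memory.
_vault: dict[str, str] = {}

def tokenize(sensitive_value: str) -> str:
    """Swap a sensitive value for an opaque token; the value goes in the vault."""
    token = "tok_" + secrets.token_hex(8)
    _vault[token] = sensitive_value
    return token

def detokenize(token: str) -> str:
    """Only code with vault access can recover the original value."""
    return _vault[token]

ssn_token = tokenize("123-45-6789")
# The application stores and passes around ssn_token; a leak of the
# token alone reveals nothing about the underlying value.
```

The design choice is the point: most of the company's systems only ever see tokens, so the blast radius of a breach shrinks to the vault itself.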
2022
"Databricks targets healthcare with industry-specific lakehouse | VentureBeat"
"https://venturebeat.com/2022/03/09/databricks-targets-healthcare-with-industry-specific-lakehouse"
"Databricks targets healthcare with industry-specific lakehouse

[Photo caption: Ali Ghodsi, the CEO and cofounder of Databricks.]

San Francisco-headquartered data and AI company Databricks today expanded its product portfolio with another vertical-specific platform – Lakehouse for Healthcare and Life Sciences. The offering, as the company explained, brings a set of tailored data and AI solutions aimed at solving the most common challenges enterprises face while innovating in the healthcare industry. It follows the launch of Databricks' lakehouses for the retail and financial services sectors. "As organizations fully transition to electronic medical records, new data types like genomics evolve, and IoT and wearables take off, the industry is awash in massive amounts of data. But…teams don't have the tools to properly use it," said Michael Hartman, senior vice president of regulated industries at Databricks.
"With Lakehouse for Healthcare and Life Sciences, we can drive transformation across the entire healthcare ecosystem and help empower our customers to solve specific industry challenges and drive better outcomes for the future of healthcare."

How would Lakehouse for Healthcare help?

Currently, enterprises in the healthcare sector largely rely on legacy data architectures that keep information from different systems and functions in fragmented silos. This makes advanced analytics difficult, restricting the pace of innovation across areas such as patient care and drug discovery. The vertical-specific lakehouse, on the other hand, eliminates this challenge by providing health institutions with a single cloud-backed platform for data management, analytics and advanced AI. It enables organizations to leverage data easily and accelerate the development of more advanced, data-driven solutions, such as those aimed at disease risk prediction, medical image classification, biomarker discovery and drug development. Cross-functional teams – from physician-scientists to computational biologists – can also use data from this product to build a holistic view of the patient and make real-time decisions. To make these applications simpler, Lakehouse for Healthcare comes with a set of solution accelerators and open-source libraries (such as Glow for genomics) backed by a certified ecosystem of partners. When combined, these elements help data users jumpstart their analytics projects and save weeks to months of development time. The partner ecosystem, Databricks explained, includes companies like Deloitte, Lovelytics, John Snow Labs and ZS Associates. John Snow Labs will help enterprises analyze unstructured medical text using NLP for use cases such as oncology research, drug safety monitoring and anonymizing personal health information.
Meanwhile, Lovelytics and ZS Associates help with automating the ingestion of streaming Fast Healthcare Interoperability Resources (FHIR) bundles and improving biomarker discovery for precision medicine.

Customer base

Databricks is making the new lakehouse offering generally available starting today. However, some enterprises have already had a chance to use it early, including GE Healthcare, Regeneron, ThermoFisher and Walgreens. "The Databricks Lakehouse for Healthcare and Life Sciences is helping GE Healthcare with a modern, open and collaborative platform to build patient views across care pathways. By unifying our data in a single platform with a full suite of analytics and ML capabilities, we've diminished costly legacy data silos and equipped our teams with timely and accurate insights," said Joji George, CTO of LCS Digital at GE Healthcare. The move further expands Databricks' lakehouse ecosystem, which competes directly with Snowflake's data cloud. Other players in the same space are Dremio, Google BigQuery, and Amazon Redshift. "
2022
"22 open source datasets to boost AI modeling | VentureBeat"
"https://venturebeat.com/data-infrastructure/22-open-source-datasets-to-fuel-your-next-project"
"22 open source datasets to boost AI modeling

Some say, "data is the new oil," with an air of seriousness. And while the phrase may capture a certain truth about the modern digital economy, it fails to model the way that bits can be copied again and again. Sometimes the ease of sharing creates a distinct absence of scarcity, and that changes the economics of the entire game. One of the best ways to visualize this is to tap into some open source datasets that are proliferating on the Internet. All are free to use, and one of them might be just what your project needs. Why do people share them? Some are using them for promotion, a kind of cheap advertising. Some cloud providers build out the datasets knowing that people who need them are much more likely to sign up for computational power from the same company. If the data is ready, why wait to ship it across the country?
Some governments share them because it's part of a tradition. The taxpayers should get something — in these cases, transparency about what their money is funding. Others understand that collaboration often wins. Datasets built from hundreds, thousands or even millions of small contributions can be more accurate and useful than datasets from a standalone company. Still others share the data because it's part of the scientific process. Maybe it was collected thanks to a grant that required it be shared. Perhaps the team responsible wants others to build upon it. Possibly, there is someone who believes that the scientific community might be able to use it. Undoubtedly, some of this information may not be as accurate as we need. Sometimes a good proprietary data collection is the only way to pay for trustworthy information. But if your project can sustain the risk, if your calculations can work with the data's error range, well, it's best not to look a gift horse in the mouth.

Here are 22 options for free data:

OpenStreetMap
They call it a "map of the world, created by you." Their browser-based editor makes it relatively easy for anyone to reach into the dataset and edit the locations of streets, buildings, signs and more. The results are bundled into a big tarball that anyone can use – including the big map making and route-finding companies.

U.S. Census
While the details of each census are kept secret by law for 72 years, the U.S. Census Bureau shares statistics with everyone. They run several portals that make it possible to download details of neighborhoods and cities. Fast food restaurants use the information to plan new locations. States use them to allocate funding to local governments. See here, here, here or here to start.

Kaggle
The organization is devoted to data science, learning data science and the data itself.
Their portal offers easy access to notebooks filled with Python and R code, as well as some lessons for learning how to use them and even some competitions. One corner is a big collection of datasets that range from the essential to the bizarre, from omicron daily cases, tabulated by country, to the winning numbers of the South Korean lottery. Data.gov Governments run on data and the US government sometimes shares it. Data.gov is a central clearinghouse listing many data sources like the Integrated Postsecondary Education Data System, filled with data about colleges, or the US Geological Survey’s collection of topographic data about every square mile of the country. And in an extra bit of meta surprise, they also offer a listing of data hubs in the individual agencies, bureaus and departments for further digging. Data.Europa.EU Europe also believes in opening up data to the world and Data Europa is a project run by the European Union to collect bytes from all of the member countries. At this writing, there are 1,397,730 datasets in the collection and they span a wide variety of subjects from agriculture to transportation. Traditional areas of government supervision like policing and the economy are well-represented, but there are plenty of odd and unexpected finds like a list of all medieval manuscripts in the Basel University library or a survey of Internet users in Switzerland. Data.Gov.UK Brexit notwithstanding, the United Kingdom also publishes a list of public data sources of its own. Some data comes from the central government and some comes from local authorities or even some public organizations. PLOS The Public Library of Science was founded in 2001 to be an alternative to the for-profit scientific journals that dominate the world of research. Along the way, it also created PLOS Open Data, a collection of open datasets that are usually connected to the research in the journal. 
If you have a question about the analysis or you just want to rerun the numbers differently, there’s a good chance the data will be available. This has been a crucial opportunity for scientists creating meta-analyses by combining the research from multiple studies to search for larger patterns and issues. Open Science The Open Science Data Cloud is another mechanism where scientists from many different disciplines can share their lab data with each other. Some of the biggest projects include Harvard’s Cultural Observatory’s Bookworm, a collection of books and other textual materials, and Bionimbus, a collection of biological and biomedical data for studying cells. University Collections Many disciplines and sub-disciplines maintain their own collections of data, often curated by dedicated researchers with a particular understanding of the field and what other researchers might want to use. The machine learning group at UC Irvine, for instance, has a collection of hundreds of datasets already set up for training machine learning algorithms. CERN, the home of the big particle accelerator, shares petabytes and petabytes of data for physicists. City Data Many of the cities in the country have embraced open data with varying degrees of devotion. The tax databases and the real estate information are usually the first to appear. Some sprinkle the data throughout their various websites, but some have directories filled with pointers. See New York City, Baltimore, Miami, or Orlando for starters. Many smaller places like Ithaca or Auburn are also online. Amazon AWS offers a wide collection of datasets and also preloads them into some of their best services like EMR, often to use as an example. Many of these include some of the biggest government datasets like the NEXRAD weather radar system or the Landsat images. 
The company has been pushing environmental awareness in this area, so many of the collections focus on natural data as part of the Amazon Sustainability Data Initiative and Earth on AWS. In January, they updated bioacoustic recordings of Orca sounds with streaming audio from around Puget Sound. Azure The Azure Open Datasets are curated and preprocessed to make them easier to use with Azure’s instances and AI routines. Many of the big government sets like the weather data are routinely polled and updated so the freshest information is available in the same location. Economists can track inflation with details from the Producer Price Index compiled by the US Bureau of Labor Statistics. Urban planners, for instance, might be interested in New York City’s yellow taxi cab records that contain pickup and drop-off times but no personal information. Google Google’s cloud stores a wide variety of different datasets from many of the governmental sources. They’ve also explored making it easier to use the data directly without building anything. The Public Data Explorer lets you drill down directly into the data to create interactive charts and graphs from sources like the World Economic Forum’s global competitiveness report. Google’s Colab offers a Jupyter Notebook interface to track any R or Python analysis of the open data or even your own private data. IBM For the data scientists who need information, IBM runs the Data Access Exchange (DAX), a collection of datasets gathered from the major government and open data sources. The focus is on supporting machine learning and artificial intelligence in the industries that form the foundation of the IBM customer base. The Oil Reservoir dataset, for instance, is filled with 30,000 different simulations. The Fashion dataset comes with 60,000 images of outfits that have been standardized for training machine learning algorithms. 
Companies that want to create their own data repositories can also turn to the Open Data for Industries, a hybrid collection of tools designed to break down data silos in organizations while simplifying analysis, reporting and AI training. FiveThirtyEight The popular data journalism site FiveThirtyEight often includes the data that constitutes the foundation for their analysis and writing. The NHL predictions, for instance, are based on thousands of simulations that are updated after each game. Political polling on questions like whether voters prefer a Republican or Democrat on a generic ballot is ready for your own statistical investigations. And if you’re curious which polls are more trustworthy, FiveThirtyEight distributes their meta-analysis on pollster ratings too. GitHub Security Programmers who use GitHub to store versions of their code need to worry about security issues and GitHub wants to help them. They collect security advisories about flaws found in the various frameworks, libraries and other open source blocks of code for developers to watch. They also decided to open up the collection, so anyone can contribute. Autonomous Cars One of the big challenges for the automobile industry is creating the autonomous cars of everyone’s dreams. Many of the car companies are sharing datasets collected by their cars or lab equipment, so anyone can experiment with building some of the many layers that are necessary to make it all run smoothly. Some of the different sets include data from Audi, ApolloScape, Google, Motional, Oxford and Waymo. Yelp As of this writing, Yelp distributes a subset of their vast collection of opinions about the restaurants, shops and other establishments. The current batch contains almost 7 million reviews of more than 150,000 businesses from eleven major cities. 
Yelp expects the text and photos will make great opportunities to train natural language processing algorithms and other AI applications, but maybe you’ll come up with a different idea. DBpedia Many datasets are fairly raw and unstructured. DBpedia is an effort to create an open knowledge graph full of ontological information that can be queried with SPARQL. The structure makes it possible to create queries that include strong inference and don’t just rely upon raw keywords to find the answer. Most of the information comes from the various Wikipedias. Facebook Many of the bits of cultural flotsam are found in Facebook’s social network and one way to search them is through Meta’s Graph API. We’re all just nodes in this huge data structure and your code can poke around it through the API seeing, more or less, the same things that you might see if you logged in. GitHub While many think of repositories like GitHub as places for code, many also store data inside, sometimes alongside some code but also just as a standalone source. The approach brings all the built-in features to track the evolution of the files over time, something that’s often missing from many databases. Some quick searches often reveal several repositories that might do what you need. MIT’s course on Deep Learning, for instance, stores sample material for class assignments like training autonomous cars. If you’re studying NFTs, some Python analytics may do what you need. Thousands of repositories are squirreled away. Industry Organizations Many industries rely on networks of membership organizations to handle tasks that benefit all the members, like publishing magazines, running conferences, sponsoring studies, lobbying governments and, sometimes now, gathering datasets that everyone can use. The British Film Institute, for instance, tracks box office receipts over the years and releases the data in raw form and statistical yearbooks. The American Iron and Steel Institute tracks raw steel production. 
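Structured sources such as DBpedia, mentioned above, answer SPARQL queries over HTTP and return results in the standard SPARQL JSON results format. A sketch of both halves in Python, with the network call replaced by a canned response so the result shape is visible offline; the query text and the returned values are illustrative, not live data:

```python
import json
from urllib.parse import urlencode

# An illustrative SPARQL query: English labels of people born in Basel.
query = """
SELECT ?label WHERE {
  ?person dbo:birthPlace dbr:Basel .
  ?person rdfs:label ?label .
  FILTER (lang(?label) = "en")
} LIMIT 5
"""

# The endpoint takes the query as a URL parameter; format=json requests
# the SPARQL 1.1 JSON results format.
url = "https://dbpedia.org/sparql?" + urlencode({"query": query, "format": "json"})

# A canned response standing in for the real HTTP call, in the shape the
# format defines: results -> bindings -> {variable: {type, value}}.
canned = json.dumps({
    "results": {"bindings": [
        {"label": {"type": "literal", "value": "Leonhard Euler"}},
        {"label": {"type": "literal", "value": "Jacob Bernoulli"}},
    ]}
})

# Parsing is the same whether the JSON came from the wire or from disk.
bindings = json.loads(canned)["results"]["bindings"]
labels = [b["label"]["value"] for b in bindings]
print(labels)  # ['Leonhard Euler', 'Jacob Bernoulli']
```

To run it for real, the canned string would be replaced by an actual GET request to the `url` built above; the parsing code does not change.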
Most major industries support someone collecting useful data. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,287
2,022
"Metaverse vs. data privacy: A clash of the titans? | VentureBeat"
"https://venturebeat.com/data-infrastructure/metaverse-vs-data-privacy-a-clash-of-the-titans"
"Metaverse vs. data privacy: A clash of the titans? It may well be another “clash of the titans” when the upcoming metaverse – such as we understand it now – meets data privacy. The metaverse wants to harvest new, uncharted personal information, even to the point of noting and analyzing where your eyes go on a screen and how long you gaze at certain products. Data privacy, on the other hand, wants to protect consumers from this incessant cherry-picking. That, friends, describes the upcoming battle in the future world of harvesting new personal preference information, and companies are already planning on how to monetize this potential bonanza for themselves. One can bet that in the new online economy of the future, plenty of new startups will be lining up on both sides. 
“When you talk about going into a virtual or an augmented reality (AR), it’s all about information as power,” said David Nuti, senior VP for North America Channel at Nord Security, an international online security provider. “They don’t create these platforms to feel good about bringing people together. They’re mining information that is sold off to serve you content that is relevant to what you like to do. “For example, if I’m in an augmented reality environment, a company may want to serve me an advertisement for a couch because they can see in my augmented environment that my couch is kind of ratty in the background. Through artificial intelligence, they’ll serve me up a color of a new couch that matches the paint on the wall of my house. If I serve up an advertisement, it’s no longer knowing that I’m serving up the advertiser to the person, but how long my eyeballs are focused on that content.” Charting your eye movements on screen The value of those few seconds on the screen to advertisers can’t be overstated. The question is: should companies have use of the analytics of that long stare at the new phone you’re thinking about buying, or a new Peloton you’re admiring? If you contend this is an over-the-top invasion of personal space, get with it: that plane left the runway a long time ago. This is all going to depend upon whether a user even wants to enter the metaverse in the first place. Today, Jan. 28, is international Data Privacy Day, which hopes to highlight the coming struggle within a specific, though arbitrary, 24-hour period. A recent study from NordVPN revealed a whopping 87% of Americans expressed major privacy concerns if Facebook succeeds in creating its proposed metaverse. 
In fact, half of Americans fear it will be too easy for hackers to impersonate others in this brave and bold new world, thus threatening personal information privacy on an unstoppable basis. This is mostly fear of the unknown at the present time since, in the same survey, 55% of Americans hadn’t even heard of the metaverse, let alone know what it entails. In fact, only 14% of those polled said they could explain the metaverse to someone else. Let’s back up a little and define these terms. Metaverse is the term Meta’s (Facebook) CEO Mark Zuckerberg foisted on the world last October. At a high level, it means the comingling of the real and digital worlds, such that it becomes difficult to ascertain reality from unreality. In this new setting, personal avatars are quickly expected to multiply. Zuckerberg introduced the metaverse and even produced a video explaining what it will look like – a stunt that famously received mixed reviews. He called the metaverse “an embodied internet where you’re in the experience, not just looking at it.” Imagine that you could meet your friends from all over the world in virtual reality, discuss business with partners without leaving your office, or access fantasy worlds you’ve always dreamed about. That’s what Zuckerberg has in mind. Advertisers and online merchants – not to mention Meta itself – have other ideas, however. Some other data points from the NordVPN study: 47% don’t trust that their identity will be legally protected 45% fear that even more data can be collected and used against them 43% are concerned about not being sure of the identity of others 41% think it will be hard to safeguard their real identity from their metaverse identity 37% fear that their transactions won’t be very secure Once the metaverse was described to respondents, 66% said they think the metaverse can replace social media as we currently know and use it. What is biometrically inferred data? 
Kavya Pearlman, founder of the XR Safety Initiative , a nonprofit that advocates for the ethical development of immersive technologies, told VentureBeat that “privacy is all about the data collection. Because there is this enormous amount of data [that will be harvested], you can’t have the convergence of these environments. That’s what I am most concerned about. “This is now all about biometrically inferred data,” Pearlman said. “Our data privacy laws need to be updated because they are inadequate. This enormous eye-tracking, gait-tracking the way you move, the way you walk – all this analysis – can infer a lot of information about you. And then there are the intersections of these other technologies, which is just like a brain-computer interface that will provide the alpha, beta, gamma – and even your thoughts – at some point. What happens to privacy when our thoughts are not even protected?” All this information – stacked in cloud storage and constantly being analyzed by multiple buyers – could give companies a greater ability to understand individual traits, Pearlman said. An insurance company, for example, might see a behavioral clue inferring a customer’s health problem before the person notices anything herself. “Now, the data is in inferences,” Pearlman said. One common denominator about all of this that our sources agree upon is that this is only the beginning of a new phase of commerce and socialization on the internet. As time and tech move on, the results of the success of data privacy policies, software, and hardware will become apparent. The other item everybody agrees on is that national, international and local laws and regulations will lag far behind the advancement of technology, as it has for decades. Other takes on metaverse vs. 
data privacy Some other varying perspectives on the coming battle between data privacy and the metaverse: Peter Evans, CEO of Patriot One Technologies : We don’t expect the issues of data privacy or security to go away with the metaverse. As an industry, we see repeated examples where technology gets way ahead of security, data privacy, and good governance … and the world’s zeal to play with new and interesting things and leverage them for business benefit, competitive advantage, and profit. All the issues that we’ve recently seen in the press about Facebook’s use of data to drive marketing and revenue are examples of a marketing opportunity getting ahead of good governance, security, and protection. This has been going on for 20+ years, going back to the first introduction of the internet, online banking and ecommerce, AI and facial recognition, etc. We see these issues repeating themselves over and over again, with governments and data privacy often lagging. By the time the world opens its eyes to the data privacy data management issues, it’s too late, because the horse has left the barn. With each new iteration of innovation, we see an order-of-magnitude jump in both business benefits as well as the complexity of data privacy issues. I expect that we will see the same with the metaverse. Ben Brook, CEO of Transcend , a data privacy software provider : In the beginning, the metaverse can actually be good for privacy because people can adopt anonymous avatars. But over time, as we spend more time in the metaverse and our avatar becomes a bigger portion of our life (in a sense, we become our avatars and we shop as it, we consume content as it, and we form relationships as it), then all the same privacy principles will apply. It’s still too early to say what specific protections it will require as usage evolves, but the reality is we’re not starting from the most solid foundation. 
In many jurisdictions, consumers don’t yet have the protections they need for today, let alone for the metaverse and the myriad new ways their data may be used (and abused) tomorrow. More data means advertisers have a substantially richer cupboard to mine for far deeper targeting, often using the same platforms that are speaking most loudly about the metaverse’s potential. David Blonder, senior director, legal counsel, regulatory and privacy and data protection officer at BlackBerry: With the metaverse and creating a hybrid-reality, it’s important to remember one simple truism: people will trade security for convenience. The metaverse will see considerably more user interaction than a cellphone. Therefore, it is not unreasonable to assume it would collect much more information and attract many more attackers as well. For security to succeed in the metaverse, it will have to be implemented in a way that is robust without negatively impacting user convenience. "
15,288
2,022
"Snowflake launches data cloud for healthcare and life sciences | VentureBeat"
"https://venturebeat.com/data-infrastructure/snowflake-launches-data-cloud-for-healthcare-and-life-sciences"
"Snowflake launches data cloud for healthcare and life sciences Bozeman, Montana-headquartered data company Snowflake has expanded its “data cloud” ecosystem with the launch of a dedicated offering for healthcare and life sciences industries. Officially dubbed Healthcare & Life Sciences (HCLS) Data Cloud, the product aims to provide enterprises in these sectors with a single cross-cloud data platform to centralize, integrate and exchange critical and sensitive data at scale. It is tailor-made to solve key challenges that keep healthcare enterprises from leveraging data for innovation and continues to be backed by various technology, data, application and consulting partners, including Equifax, Dataiku, H2O.ai, Cognizant, Deloitte and Strata. 
Need for HCLS Data Cloud For most healthcare or life sciences companies, the task of leveraging data for medical innovation is hemmed in by regulatory and technical hurdles. The firms often rely on legacy architectures (that keep data in fragmented silos) and have to follow stringent compliance rules, with no common models for data sharing across the industry. This makes downstream use difficult, affecting advanced analytics and AI projects, including those for patient care and optimizing clinical and operational decision-making. Snowflake’s Healthcare & Life Sciences Data Cloud solves these challenges by providing an agile and interoperable solution that facilitates the storage, sharing, and use of data. It eliminates technical and institutional silos while ensuring the security, governance and compliance required to meet industry regulations. In all, Snowflake’s data cloud ecosystem facilitates six key workloads — capabilities of a data warehouse, data lake, data engineering, data sharing, data science and data app development. “The Snowflake Healthcare & Life Sciences Data Cloud will unlock the next generation of innovation in the industry by enabling organizations to take advantage of borderless data access while ensuring strict data governance, security, and privacy compliance,” Todd Crosslin, Healthcare and Life Sciences Industry Principal at Snowflake, said. “The entire industry can benefit from this live, connected ecosystem to get access to the data they need when it’s needed.” The solution is already in use by various organizations, including Anthem, IQVIA, Komodo Health, Siemens Healthineers, Spectrum Health, Novartis and Roche. Snowflake’s competition The launch of Snowflake’s HCLS Data Cloud comes just a week after Databricks’ launch of lakehouse for healthcare and life sciences. 
The two companies have set out for the same goal – to be the one-stop shop for all things enterprise data. In fact, both Snowflake and Ali Ghodsi-led Databricks have been expanding their product ecosystems to cover more verticals and use cases. The former recently announced the acquisition of Streamlit to simplify data app development, while the latter has debuted lakehouse for retail and financial services, as well as a solution to integrate partner data tools with ease. "
15,289
2,022
"Snowflake launches data cloud for retail industry | VentureBeat"
"https://venturebeat.com/data-infrastructure/snowflake-launches-data-cloud-for-retail-industry"
"Snowflake launches data cloud for retail industry In another move to help enterprises better mobilize their data, Snowflake has announced the launch of data cloud for retail. The offering comes days after the launch of healthcare and life sciences (HCLS) data cloud and aims to serve as a dedicated platform to address the challenges retail industry stakeholders – retailers, manufacturers, distributors and consumer packaged goods (CPG) vendors – face while trying to extract value from their datasets. “The way that consumers, retailers and brands interact is quickly changing, and the data that organizations need to understand and adapt is not as available or easy to work with as it needs to be. 
Whether a business is working to deliver more personalized purchase experiences or adjusting amid supply chain, shortage, or inflation challenges – among others – the needed data is spread across internal and external stakeholders,” Rosemary Hua, Snowflake’s GTM lead for retail and CPG industries, told VentureBeat. How does Retail Data Cloud help? The new Retail Data Cloud solves this challenge by bringing together industry-specific datasets and various partner solutions on a single platform. This way, it drives industry-wide collaboration, enabling the stakeholders to not only access their own data (which can be siloed due to legacy systems) but also new information from other, external sources. “Our new Retail Data Cloud breaks down data silos within and across organizations and offers retail organizations the specific AI and analytics tools they need to make quick, insight-driven decisions to overcome obstacles and adapt to trends across functions – from manufacturing to logistics, marketing, merchandising and beyond. In short, regardless of where precisely in the retail ecosystem an organization lives, we’re making more data more available and easier to work with so they can optimize every part of their business,” Hua said. Snowflake notes that enterprises using the Retail Data Cloud can share data across the ecosystem in real time across all major public cloud platforms. To ensure strict data governance and regulatory compliance, the company employs a suite of easily managed security capabilities, including data clean rooms, double-blind joins, restricted queries, centralized RBAC (role-based access control) and row/columnar level obfuscation that enables data to be shared without movement and risk of revealing personally identifiable information. 
Solutions for industry-specific problems On the partner solutions’ side, the Retail Data Cloud provides enterprises with applications/products, curated by Snowflake’s technology, consulting and data partners, to solve industry-specific problems and reduce the time to value. “We are announcing new joint solutions that are being brought to the market starting Monday, March 28th,” Hua said. “One such example is AWS bringing Vendor Central and improved PO forecasting data onto Snowflake for consumer product brands that sell through Amazon.com’s marketplace. Another example is Numerator (Snowflake data marketplace partner) bringing a new product called Secure Enrich, which is built on Snowflake data clean room. Retailers and brands will be able to consume and join these new datasets directly in their native Snowflake environment, making it seamless to manage inventory and enrich customer data,” she added. Currently, the Retail Data Cloud is being used by players such as 84.51°, Albertsons, Kraft Heinz and Rakuten. Overall, Snowflake claims more than 1,000 retail and CPG companies have signed up for its platform, which supports six key workloads – data engineering, data sharing, data warehousing, data application development, data lake and data science. Notably, a similar offering has also been curated by Databricks and Dremio as part of what they call the data lakehouse approach. Databricks, in particular, has been strengthening its offering with partner integrations and vertical-specific offerings. The company was last valued at $38 billion , while Snowflake’s current (as of March 25) market capitalization stands at about $66 billion. 
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,290
2,022
"5 ways to use predictive insights to get the most from your data | VentureBeat"
"https://venturebeat.com/enterprise-analytics/5-ways-to-use-predictive-insights-to-get-the-most-from-your-data"
"Guest 5 ways to use predictive insights to get the most from your data For several years now, pundits have declared that data is more valuable than oil. But are companies really succeeding in extracting the most value from their data? What are some of the hidden costs of gathering and storing data, and how can companies get more from it? Storms of data Today, companies are faced with an enormous amount of data. Collecting, storing and securing that data in a warehouse or data lake comes at a big cost. The pandemic exacerbated the problem by spurring digital transformation and moving the entire buyer's journey online. That movement prompted many companies to put increased effort behind data collection to make sense of a shifting world. But data in and of itself is not valuable.
It's only valuable when you can use it to understand a shifting world, and capitalize on those shifts to improve your company's performance, such as by increasing revenue growth, gaining a competitive edge or raising the bar on operational excellence. An organization may have a pile of gold bricks, but if it doesn't have a way to turn the gold into cash flow, that gold is essentially worthless. This is the challenge many organizations are facing right now when it comes to data. Many companies are sitting on a gold mine of data. But they have no way of turning it into valuable, prediction-driven insights that could inform the many "million-dollar" decisions and actions that revenue teams make every day. By prediction-driven insights, I mean the type of algorithmically derived, probabilistic information that can help guide daily actions and predict what is highly likely to happen in the future — and most importantly, make an outsized impact on the bottom line. Today, most companies analyze their marketing data by focusing on the past: What did this segment of people do in the last quarter or the same period last year? But to shift from historical analysis to prediction-driven intelligence, the underlying question needs to be reframed as: Which specific individuals are most likely to do something in the future? Predictive insights: Using data to look ahead This shift to a predictive mindset gives a marketing person a lot to work with and numerous possible insights. They could create a personalized offer to influence the customer's behavior to shift course or take an action sooner. They can also create much more accurate lookalike audiences, making their targeting more precise, or expand audiences in a highly strategic way by focusing on lookalikes of future high-value customers.
Another option is to predict which customers are likely to churn and take action to try to retain them before they leave. Even a small increase in customer retention can provide a dramatic boost to profits. Say you’re a large D2C lifestyle subscription brand spending millions of dollars a month on acquisition campaigns. You’re also likely offering your potential new customers significant discounts on their first order, and perhaps even on their second and third orders, to really hook them in for the long run. Those acquisition costs can be sizable and eat into margins. These kinds of promotions are often directed by an established heuristic or business intelligence (BI) rule. For example, the rule might mandate offering a promotion to every VIP customer. But, in doing so, it extends promotions to those who’d buy again without the promotion — and also misses out on offering promotions to those who are likely to become VIPs. This rule-based approach is expensive and ineffective. It gives discounts to customers who don’t need them, and it fails to build loyalty with other customers who are likely to engage for the long term. Continuing with the subscription box example: There’s a good chance that less than 20% of your subscribers are profitable, and not until they’ve ordered at least six subscription boxes. Wouldn’t you want to know who those 20% are in the first week or two and quickly identify your “future best customers”? How about those who could turn into future VIPs with a bit of a nudge? Finding these premium customers early on will help identify similar audiences earlier in the engagement funnel. This type of predictive intelligence and insights can be generated from the customer event and transaction data that companies already gather as part of their day-to-day operations. AI-based predictive analytics can surface that information. 
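The kind of predictive scoring described here can be pictured with a small sketch. Everything below is hypothetical: in practice the weights would be learned by a model trained on historical churn labels, not hand-set.

```python
import math
from datetime import date

def churn_score(last_order: date, orders_90d: int, today: date) -> float:
    """Return a 0-1 churn-risk score (higher means more likely to churn).

    Hand-set, hypothetical weights stand in for a model trained on
    historical churn labels; the features are recency and frequency.
    """
    days_since = (today - last_order).days
    z = 0.05 * days_since - 0.8 * orders_90d - 1.0
    return 1 / (1 + math.exp(-z))  # logistic squash into (0, 1)

today = date(2022, 6, 1)
customers = {
    "loyal": churn_score(date(2022, 5, 28), orders_90d=6, today=today),
    "lapsed": churn_score(date(2022, 2, 1), orders_90d=0, today=today),
}
# The lapsed customer scores near 1.0, flagging them for a retention offer
# before they leave; the loyal one scores near 0 and needs no discount.
```

The same scoring idea, run in the first week or two of a customer's life on early engagement signals, is how "future best customers" would be surfaced.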
5 steps to using predictive insights When companies want to use predictive insights to drive more significant business results, they should focus on the following steps. Evaluate whether business intelligence rules are actually driven by the data Is your company using predefined rules or, worse, outdated rules to make decisions? Are you tracking actual results linked to those rules, and then adjusting them as needed to show real results? Ask yourself how your company defines a good customer, and how often that customer actively interacts with your brand. Churn might also take on different definitions in specific businesses. Churn may mean a customer vanished entirely, or it may mean their interactions have become much less frequent. The most common definitions may not really be indicative of your business performance, yet we base so much of our planning, forecasting and budgeting on those definitions. Make sure your definitions of active user, good customer, and churn are regularly refined. These definitions need to work for your business — even as your business, the market conditions and the competitive environment evolve. Eliminate data silos With the proliferation of SaaS tools, we seem to be collecting so much more data, yet most companies still struggle to integrate it properly to extract insights that would be indicative of future performance. There are a variety of reasons for that: internal data privacy, legacy mindsets around who owns what data, lags in data warehousing strategy or operational know-how about the mechanics of integrating it. Even within well-defined disciplines like marketing, siloing is still a challenge that hinders performance. The CMO Survey found that after a decade of integrating customer data across channels, marketers are still struggling, with most giving their organization a 3.5 out of 7 score on the effectiveness of their customer information integration across purchasing, communication and social media channels.
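The first step above, keeping definitions like churn explicit and regularly refined, is easier to act on when the definition is a tunable rule rather than a number buried in a report. A purely illustrative sketch (the thresholds are hypothetical):

```python
from datetime import date, timedelta

def is_churned(last_seen: date, today: date, inactivity_days: int = 90) -> bool:
    """Treat a customer as churned after `inactivity_days` without activity.

    Keeping the threshold a parameter makes the definition explicit and
    easy to revisit as the business evolves; 90 is a hypothetical default.
    """
    return (today - last_seen) > timedelta(days=inactivity_days)

today = date(2022, 6, 1)
last_seen = date(2022, 3, 15)
# The same customer counts as churned or not depending on the definition:
strict = is_churned(last_seen, today, inactivity_days=60)    # True
lenient = is_churned(last_seen, today, inactivity_days=120)  # False
```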
Ironically, this score has actually gone down since 2014, with marketers saying their programs are getting worse over time. Creating a complete, integrated view of the customer by abolishing data silos will drive the best decisions. Watch out for the separation of the BI and AI disciplines When the BI team is reporting to the chief revenue officer, and an AI team is reporting to the CIO, it’s easy to create information silos that make it difficult to see the broader picture. It becomes even more challenging to find useful insights. Some companies are solving this by merging the two groups under the office of a chief data officer, but progress is slow here, thus hindering results. Don’t be over-enamored with actionable insights Most analytics endeavors will yield some useful information that can be acted upon. But does every insight that’s actionable offer equal value? Absolutely not. You need to focus on building data strategies and spending resources on surfacing the precise insights you need to achieve your most important business objectives. This focused approach is far more efficient than sifting through a haystack of actionable insights in the hopes of stumbling on the one that will give you just the right boost to your revenue or a major efficiency gain at this very moment. Go beyond observing dashboards and reading reports Too often organizations are overly focused on dashboards and analyzing past trends to determine future actions. Dashboards and reports are often thought of as the final deliverables of data, but this thinking is limiting data’s value. Think about how your acquisition, monetization and retention journeys are orchestrated today, then feed predictive scoring data right into those business systems and tools. This integration directly impacts your top line and bottom line, instead of just looking at the past. 
Predictive Insights: Getting the most from your data Calling data the world's most valuable resource makes sense, especially given the importance and credibility that more and more organizations place on capturing and analyzing data. But if you don't use your data correctly, you're not going to get the best results from your marketing campaigns. Companies need to look at how they're using their data and identify the most valuable insights they can glean from it — and then they can see what data is truly useful for their goals. After all, if 87% of data science projects never make it into production, is data being used in the most valuable way possible? Zohar Bronfman is cofounder and CEO at Pecan AI. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! "
15,291
2,022
"Cloud leader AWS shifts its database focus to DataZone and Zero-ETL | VentureBeat"
"https://venturebeat.com/data-infrastructure/cloud-leader-aws-shifts-its-database-focus-to-datazone-and-zero-etl"
"Cloud leader AWS shifts its database focus to DataZone and Zero-ETL AWS's impact on IT trends takes many forms, but few have been more consequential than its stable of database services. At most yearly re:Invent conferences, AWS has rolled out a shiny new database that affirms the company's presence in cloud-based databases. These were sometimes open source and often purpose-built. But this year was different. At AWS re:Invent 2022, the company turned its sights toward making its existing array of cloud data tools more palatable to enterprise IT. That means data integration and data management are now due for attention. To that end, the company released Amazon DataZone data management services to catalog and govern data stored on the AWS cloud and on-premises.
As well, DataZone could support third-party sources via APIs, AWS said, mentioning partners Databricks, Snowflake and Tableau in this context. The timing is right. Enterprises find the number of different sources of data they need to combine is growing dramatically. Management and governance of scattered data holdings grow onerous. Combining data feeds Now as before, cost effectiveness drives IT to the cloud, AWS CEO Adam Selipsky told re:Invent attendees. For AWS today, cost-effective data engines begin with Aurora, AWS's version of open-source PostgreSQL, and Redshift, the columnar MPP data warehouse that upended the economics of data analytics with its 2012 introduction. The database procession that brought Aurora and Redshift also included RDS, Neptune, DynamoDB, DocumentDB, ElastiCache, Timestream and the Quantum Ledger DB, some of these stirring controversy as startups wrestled with cloud giant AWS's aggressive approach to open-source licensing. Selipsky did not come to re:Invent to tout a new database – though there were updates to several existing engines. Instead, he promoted the notion of tying the existing portfolio together more effectively. "Having all these tools to store and analyze data reveals the next challenge that people face … you need to be able to combine information across these different methods of data exploration to see the full picture and truly gain insights," he said. Give 'em ETL In his re:Invent address, Selipsky took aim at the integration challenges around Extract, Transform, Load (ETL), the long-simmering backwater of high tech that innovators have lately been revisiting. He announced new integrations said to eliminate the need for ETL between Amazon Aurora and Amazon Redshift services, and between Spark and Redshift. Selipsky's aim here is clear-eyed.
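To make the "thankless" glue work concrete, here is a deliberately minimal sketch of the hand-written extract-transform-load plumbing that such integrations are meant to eliminate. The source data and field names are invented for illustration:

```python
import csv
import io

# A stand-in for rows exported from an operational database
source = io.StringIO("order_id,amount_cents\n1,1050\n2,325\n")

def extract(handle):
    return list(csv.DictReader(handle))

def transform(rows):
    # Typical ETL chores: cast types, convert units, filter unwanted rows
    return [
        {"order_id": int(r["order_id"]), "amount": int(r["amount_cents"]) / 100}
        for r in rows
        if int(r["amount_cents"]) >= 500
    ]

warehouse = []  # stand-in for the analytics warehouse table
warehouse.extend(transform(extract(source)))  # the "load" step
```

A zero-ETL integration moves this reshaping inside the managed services themselves, so operational rows become queryable in the warehouse without custom scripts like this one.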
With low-code/no-code on the rise, it may be time to dial up "Zero-ETL." ETL is a stage in data processing, involving a lot of repetitive custom scripting, that is necessary but generally glossed over when digital transformation is the enterprise's ultimate goal. The dull work of ETL data preparation can stand in the way of progress. To show IT's frustration with the process, Selipsky read an excerpt from a customer's letter that described ETL as a "thankless, unsustainable black hole." The new Aurora and Redshift capabilities help customers move toward a Zero-ETL future on AWS, he said. Echoes of Tableau Although perhaps overshadowed by machine learning and other announcements, the focus on larger data management issues at re:Invent 2022 suggests new maturity in AWS's approach to IT's data needs. There is also the implication here that Adam Selipsky is setting a new course for the AWS cloud. Given his years at the helm of business intelligence provider Tableau, this isn't entirely unexpected. Under his watch, Tableau distinguished itself for innovation in visual data presentation and established itself as an expert in ease of use and drag-and-drop integration support for both structured and unstructured datasets. AWS's DataZone and Zero-ETL fit neatly into a similar picture of cloud data evolution. Future moves will be closely watched to see if AWS is moving further up the stack in the data edifice.
"
15,292
2,022
"Snowflake 101: 5 ways to build a secure data cloud  | VentureBeat"
"https://venturebeat.com/data-infrastructure/snowflake-101-5-ways-to-build-a-secure-data-cloud"
"Snowflake 101: 5 ways to build a secure data cloud Today, Snowflake is the favorite for all things data. The company started as a simple data warehouse platform a decade ago but has since evolved into an all-encompassing data cloud supporting a wide range of workloads, including that of a data lake. More than 6,000 enterprises currently trust Snowflake to handle their data workloads and produce insights and applications for business growth. They jointly have more than 250 petabytes of data on the data cloud, with more than 515 million data workloads running each day. At this scale, cybersecurity concerns are bound to arise. Snowflake recognizes this and offers scalable security and access control features that ensure the highest levels of security for not only accounts and users but also the data they store.
However, organizations can miss out on certain basics, leaving data clouds only partially secure. Here are some quick tips to fill these gaps and build a secure enterprise data cloud. 1. Make your connection secure First, all organizations using Snowflake, regardless of size, should focus on using secured networks and SSL/TLS protocols to prevent network-level threats. According to Matt Vogt, VP for global solution architecture at Immuta, a good way to start would be connecting to Snowflake over a private IP address using cloud service providers' private connectivity such as AWS PrivateLink or Azure Private Link. This will create private VPC endpoints that allow direct, secure connectivity between your AWS/Azure VPCs and the Snowflake VPC without traversing the public internet. In addition to this, network access controls, such as IP filtering, can also be used for third-party integrations, further strengthening security. 2. Protect source data While Snowflake offers multiple layers of protection – like Time Travel and Fail-safe – for data that has already been ingested, these tools cannot help if the source data itself is missing, corrupted or compromised (such as maliciously encrypted for ransom) in any way. This kind of issue, as Clumio's VP of product Chadd Kenney suggests, can only be addressed by adopting measures to protect the data while it is resident in an object storage repository such as Amazon S3 – before ingest. Further, to protect against logical deletes, it is advisable to maintain continuous, immutable and preferably air-gapped backups that are instantly recoverable into Snowflake via Snowpipe. 3. Consider SCIM with multi-factor authentication Enterprises should use SCIM (system for cross-domain identity management) to help facilitate automated provisioning and management of user identities and groups (i.e.
roles used for authorizing access to objects like tables, views and functions) in Snowflake. This makes user data more secure and simplifies the user experience by reducing the role of local system accounts. Plus, by using SCIM where possible, enterprises will also get the option to configure SCIM providers to synchronize users and roles with Active Directory users and groups. On top of this, enterprises should also use multi-factor authentication to set up an additional layer of security. Depending on the interface used, such as client applications using drivers, the Snowflake UI or Snowpipe, the platform can support multiple authentication methods, including username/password, OAuth, key pair, external browser, federated authentication using SAML, and Okta native authentication. Where multiple methods are supported, the company recommends giving top preference to OAuth (either Snowflake OAuth or external OAuth), followed by external browser authentication, Okta native authentication and key pair authentication. 4. Column-level access control Organizations should use Snowflake's dynamic data masking and external tokenization capabilities to restrict certain users' access to sensitive information in certain columns. For instance, dynamic data masking, which can dynamically obfuscate column data based on who's querying it, can be used to restrict the visibility of columns based on the user's country: a U.S. employee can view only U.S. order data, while French employees can view only order data from France. Both features are effective, but they rely on masking policies to work. To make the most of them, organizations should first determine whether they want to centralize masking policy management or decentralize it to individual database-owning teams, depending on their needs. They would also have to use invoker_role() in policy conditions to enable unauthorized users to view aggregate data on protected columns while keeping individual data hidden.
5. Implement a unified audit model Finally, organizations should not forget to implement a unified audit model to ensure transparency of the policies being implemented. This will help them actively monitor policy changes, like who created which policy that granted user X or group Y access to certain data, and is as critical as monitoring query and data access patterns. To view account usage patterns, use the system-defined, read-only shared database named SNOWFLAKE. It has a schema named ACCOUNT_USAGE containing views that provide access to one year of audit logs. "
15,293
2,022
"In today’s multi-cloud environment, you can’t secure what you can’t see  | VentureBeat"
"https://venturebeat.com/security/in-todays-multi-cloud-environment-you-cant-secure-what-you-cant-see"
"Guest In today's multi-cloud environment, you can't secure what you can't see For organizations to win the ever-growing fight against increasingly sophisticated cyberattacks, business leaders need innovative multi-cloud solutions that allow customers to connect and protect any workload, in any location, delivered via SaaS apps. On-premises security protocols of the past had to evolve to meet the IT needs of 10 years ago, and now cloud security needs to catch up with today's hybrid workforce reality. The adoption of tools like Salesforce, Slack, Google Workspace and Zoom only accelerated during the pandemic, with organizations of more than 1,000 employees using more than 150 SaaS applications on average. The need to secure the most critical cloud applications from cyberattacks is more prevalent than ever — and it won't be going away anytime soon.
With this in mind, business leaders are under pressure to ensure security protocols, budgets and preparations are in place. Security and IT teams need more visibility One recent report showed that 94% of enterprises depend on cloud services and SaaS apps to operate in today's hybrid workforce and store sensitive data. When a single application is breached, an organization's entire application set — and the sensitive data behind them — becomes available to cybercriminals. We saw this with the recent GitHub breach, and it won't be the last time that bad actors breach an organization's critical infrastructure via one app. There is a shared responsibility that needs to be recognized between the SaaS application vendors and the security teams within organizations deploying the apps to ensure visibility into all of the network activity. To stop these growing threats, security and IT teams need more visibility into the current work environment that others can't see. If they are unable to see what tools are being used, or who has access to them, they won't be able to secure the network. We've seen massive cloud adoption over the past five years, and now we have to bring visibility along with it. It's important not to forget the basics of security. As a decision-maker, you've made the right call to move to the cloud — now you need to ensure the environment is secure. Organizations need to prepare for an increase in lateral movement According to our recent survey, lateral movement was seen in 25% of all attacks, with cybercriminals leveraging everything from file storage apps (46%) to business communications platforms (41%) to rummage around inside networks. A full-fidelity threat intelligence solution is needed to protect businesses against threats targeting the apps and tools their businesses depend on to operate.
Not all apps are created equal from a security perspective. As a business decision-maker, you need to take a 360-degree view of the risks your company is facing, get better visibility, and shift budgets to cover the most critical IT, cloud and security needs. Advanced techniques are being used to make attacks more destructive and targeted. Cybercriminals are achieving this through emerging techniques, catalyzed by the shift to remote work: 32% of respondents experienced adversaries leveraging business communication platforms to move around a given environment and launch sophisticated attacks. This means that cyberattackers are accessing sensitive data in the cloud — from financial info like payroll and HR data to your customers' and vendors' info — which puts the entire company at risk. Businesses must prioritize cloud security tools amid budget cuts and economic uncertainty Security teams have spent years of their lives in the non-cloud world, and they're aware of its gaps and shortcomings. As a result, they're now allocating one budget line item to the cloud, but that mindset doesn't work. The more aware you are as a business decision-maker, the better you will see budget needs and risks. You can't cut incremental spend from one area of your budget and put it all toward the cloud, either. The most important thing to consider when allocating or adjusting budget is the ROI you're getting on tools. You need to take a bit of a ruthless approach: If certain tools are not showing a notable return, you need to move on. The cloud is here to stay, and you must focus on investing in and securing it. As we look to 2023, I expect it to be the year of large-scale and high-volume cloud-based cyberattacks. It's up to business and security leaders to ensure the right cloud security protections are in place to prevent and stop these threats. Organizations have gone through years of migration to the cloud and infrastructure updates, so the opportunity for risk is there.
Cybercriminals have been sharpening their own skills, and they're prepared to breach organizations and gain critical information. Without the necessary visibility and security protocols in place, a perfect storm is created. It's critical to get ahead of this now. Scott Lundgren is CTO of VMware's Security Business Unit and a member of the Carbon Black founding team. "
15294
2022
"DeepMind unveils first AI to discover faster matrix multiplication algorithms | VentureBeat"
"https://venturebeat.com/ai/deepmind-unveils-first-ai-to-discover-faster-matrix-multiplication-algorithms"
"DeepMind unveils first AI to discover faster matrix multiplication algorithms Can artificial intelligence (AI) create its own algorithms to speed up matrix multiplication, one of machine learning’s most fundamental tasks? Today, in a paper published in Nature , DeepMind unveiled AlphaTensor, the “first artificial intelligence system for discovering novel, efficient and provably correct algorithms.” The Google-owned lab said the research “sheds light” on a 50-year-old open question in mathematics about finding the fastest way to multiply two matrices. Ever since the Strassen algorithm was published in 1969, computer science has been on a quest to surpass its speed of multiplying two matrices. 
While matrix multiplication is one of algebra’s simplest operations, taught in high school math, it is also one of the most fundamental computational tasks and, as it turns out, one of the core mathematical operations in today’s neural networks. Matrix multiplication is used for processing smartphone images, understanding speech commands, generating computer graphics for computer games, data compression and more. Today, companies use expensive GPU hardware to boost matrix multiplication efficiency, so any extra speed would be game-changing in terms of lowering costs and saving energy. AlphaTensor, according to a DeepMind blog post , builds upon AlphaZero, an agent that has shown superhuman performance on board games like chess and Go. This new work takes the AlphaZero journey further, moving from playing games to tackling unsolved mathematical problems. DeepMind uses AI to improve computer science This research delves into how AI could be used to improve computer science itself, said Pushmeet Kohli, head of AI for science at DeepMind, at a press briefing. “If we’re able to use AI to find new algorithms for fundamental computational tasks, this has enormous potential because we might be able to go beyond the algorithms that are currently used, which could lead to improved efficiency,” he said. This is a particularly challenging task, he explained, because the process of discovering new algorithms is so difficult, and automating algorithmic discovery using AI requires a long and difficult reasoning process — from forming intuition about the algorithmic problem to actually writing a novel algorithm and proving that the algorithm is correct on specific instances. “This is a difficult set of steps and AI has not been very good at that so far,” he said. 
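The scale of the problem is easy to see in code: the schoolbook method for multiplying two n-by-n matrices performs n cubed scalar multiplications, and it is exactly this count that Strassen-style algorithms (and now AlphaTensor) try to reduce. A minimal illustrative sketch, not taken from the paper:

```python
# Naive n x n matrix multiplication uses n**3 scalar multiplications; the whole
# research program (Strassen onward, now AlphaTensor) is about doing better.
def matmul_count(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    muls = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
                muls += 1
    return C, muls

C, muls = matmul_count([[1, 2], [3, 4]], [[5, 6], [7, 8]])
print(C, muls)  # [[19, 22], [43, 50]] with 8 multiplications; Strassen needs only 7
```

For 2x2 matrices the naive count is 8; Strassen's 1969 result cut this to 7, and that one saved multiplication compounds when applied recursively to large matrices.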
An ‘intriguing, mind-boggling problem’ DeepMind took on the matrix multiplication challenge because it’s a known problem in computation, he said. “It’s also a very intriguing, mind-boggling problem because matrix multiplication is something that we learn in high school,” he said. “It’s an extremely basic operation, yet we don’t currently know the best way to actually multiply these two sets of numbers. So that’s extremely stimulating for us also as researchers to start to understand this better.” According to DeepMind, AlphaTensor discovered algorithms that are more efficient than the state of the art for many matrix sizes and outperform human-designed ones. AlphaTensor begins without any knowledge about the problem, Kohli explained, and then gradually learns what is happening and improves over time. “It first finds this classroom algorithm that we were taught, and then it finds historical algorithms such as Strassen’s and then at some point, it surpasses them and discovers completely new algorithms that are faster than previously.” Kohli said he hopes that this paper inspires others in using AI to guide algorithmic discovery for other fundamental computational tasks. “We think this is a major step in our path towards really using AI for algorithmic discovery,” he said. DeepMind’s AlphaTensor uses AlphaZero According to Thomas Hubert, staff research engineer at DeepMind, it is really AlphaZero running behind the scenes of AlphaTensor as a single-player game. “It is the same algorithm that learned how to play chess that was applied here for matrix multiplication, but that needed to be extended to handle this infinitely large space — but many of the components are the same,” he said. 
In fact, according to DeepMind, this game is so challenging that “the number of possible algorithms to consider is much greater than the number of atoms in the universe, even for small cases of matrix multiplication.” Compared to Go, which was an AI challenge for decades, the number of possible moves is 30 orders of magnitude larger. “The game is about basically zeroing out the tensor, with some allowed moves that are actually representing some algorithmic operations,” he explained. “This gives us two very important results: One is that if you can decompose, or zero out, the tensor perfectly, then you’re guaranteed to have a provably correct algorithm. Second, the number of steps it takes to decompose this tensor actually gives you the complexity of the algorithm. So it’s very, very clean.” DeepMind’s paper also pointed out that AlphaTensor discovers a richer space of matrix multiplication algorithms than previously thought — up to thousands for each size. According to the blog post, the authors said they adapted AlphaTensor to specifically find algorithms that are fast on a given hardware, such as Nvidia V100 GPU and Google TPU v2. “These algorithms multiply large matrices 10-20% faster than the commonly used algorithms on the same hardware, which showcases AlphaTensor’s flexibility in optimizing arbitrary objectives,” the blog post said. Increased AI impact on science and mathematics Back in July , researchers showed that DeepMind’s AlphaFold tool could predict the structures of more than 200 million proteins from around a million species, which covered nearly every known protein on earth. Kohli said that AlphaTensor shows the potential that AI has not just in science but in mathematics. “To see AI fulfill that promise to go beyond what human scientists have been able to do for the last 50 years, it is personally incredibly exciting,” said Kohli. 
“It just shows the amount of impact that AI and machine learning can have.” "
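The "zeroing out the tensor" game Hubert describes has a concrete classical instance. For 2x2 matrices, Strassen's algorithm is precisely a rank-7 decomposition of the matrix multiplication tensor: summing the seven rank-1 terms reconstructs the tensor exactly, which is the correctness proof, and the number of terms is the number of multiplications. A numpy sketch (the U, V, W coefficients below are Strassen's published ones, not AlphaTensor's):

```python
import numpy as np

# The 2x2 matrix multiplication tensor: T[p, q, s] = 1 exactly when entry p of A
# times entry q of B contributes to entry s of C (entries flattened row-major).
T = np.zeros((4, 4, 4), dtype=int)
for m in range(2):
    for k in range(2):
        for n in range(2):
            T[2 * m + k, 2 * k + n, 2 * m + n] = 1

# Strassen's algorithm as a rank-7 decomposition: row r of U, V, W holds the
# A-coefficients, B-coefficients and C-coefficients of intermediate product M_r.
U = np.array([[1,0,0,1],[0,0,1,1],[1,0,0,0],[0,0,0,1],[1,1,0,0],[-1,0,1,0],[0,1,0,-1]])
V = np.array([[1,0,0,1],[1,0,0,0],[0,1,0,-1],[-1,0,1,0],[0,0,0,1],[1,1,0,0],[0,0,1,1]])
W = np.array([[1,0,0,1],[0,0,1,-1],[0,1,0,1],[1,0,1,0],[-1,1,0,0],[0,0,0,1],[1,0,0,0]])

# Summing the 7 rank-1 terms reconstructs ("zeroes out") the tensor exactly,
# proving that 7 multiplications suffice instead of the naive 8.
assert np.array_equal(np.einsum('rp,rq,rs->pqs', U, V, W), T)
print("Strassen's 7-multiplication decomposition verified")
```

AlphaTensor plays this same game for larger tensors, where the search space is astronomically bigger and no human-found decomposition is known to be optimal.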
15295
2022
"Google brings machine learning to online spreadsheets with Simple ML for Sheets | VentureBeat"
"https://venturebeat.com/ai/google-brings-machine-learning-to-online-spreadsheets-with-simple-ml-for-sheets"
"Google brings machine learning to online spreadsheets with Simple ML for Sheets Spreadsheets are widely used by organizations of all sizes for all kinds of basic and complex tasks. While simple calculations and graphs have long been part of the spreadsheet experience, machine learning (ML) has not. ML is often seen as being too complex to use, while spreadsheet usage is intended to be accessible to any type of user. Google is now trying to change that paradigm for its Google Sheets online spreadsheet program. Today Google announced a beta release of the Simple ML for Sheets add-on. Google Sheets has an extensible architecture that enables users to benefit from add-ons that extend the default functionality available in the application. In this case, Google Sheets benefits from ML technology that Google first developed in the open-source TensorFlow project. 
With Simple ML for Sheets, users will not need to use a specific TensorFlow service, as Google has developed the service to be as easily accessible as possible. “Everything runs completely on the user browser,” Luiz Gustavo Martins, Google AI developer advocate, told VentureBeat. “Your data doesn’t leave Google Sheets and models are saved to your Google Drive so you can use them again later.” Holy sheets, Google’s Simple ML can do what with my spreadsheets? So what can Simple ML for Sheets do? Two of the beginner tasks in the beta release highlighted by Google include the ability to predict missing values or spot abnormal ones. Martins said that those two beginner tasks make it easy for anyone to test the ML waters and explore how ML might benefit their business. Martins noted that beyond the beginner tasks, the add-on supports several other common ML tasks such as training and evaluating models, generating predictions, and interpreting the models and their predictions. In addition, since Simple ML can export models to TensorFlow, people with programming experience can use Simple ML models with their existing ML infrastructure. Overcoming the challenges of ML complexity with Simple ML for Sheets It’s possible for Google Sheets users to benefit from ML without Simple ML, but it may not be easy for the layperson. “We identified knowledge and lack of guidance as the prime factors for non-ML practitioners to easily use ML,” Mathieu Guillame-Bert, software engineer at Google, told VentureBeat. “Using a classical ML tool, like TensorFlow in Python, is like being in front of a blank page.” Guillame-Bert said that using a classic ML tool requires, among other things, for the user to understand programming, ML problem framing, model construction and model evaluation. 
He noted that such knowledge is generally acquired through classes or self-taught over a long period of time. In contrast, Guillame-Bert said that Simple ML is like an interactive questionnaire. It guides the user and only assumes basic knowledge about spreadsheets. Using decision forests to power Simple ML Martins explained that under the hood, the Simple ML add-on trains models using the Yggdrasil Decision Forests library. This is the same library that powers TensorFlow Decision Forests. “For this reason, once trained in the add-on, the advanced user can export the model to any TensorFlow Serving managed service, such as the TensorFlow Serving on Google Cloud,” Martins said. Guillame-Bert explained that TensorFlow Decision Forests (TF-DF) is a library of algorithms to train new models. In other words, the user provides examples to TF-DF, and they receive a model in return. He noted that TF-DF does not come with pretrained models; however, because TF-DF is integrated in the TensorFlow ecosystem, advanced users may combine Decision Forests and pretrained models. According to published research , the technology behind TF-DF, which is based on the concepts of Random Forests and Gradient-Boosted Trees, works exceptionally well to train models on a tabular dataset, like a spreadsheet. Looking forward, Guillame-Bert said Google will be working to further improve the usability of the add-on. Google also plans on adding new capabilities to Simple ML for Sheets that don’t require any ML knowledge from the user. “During internal tests, we identified several highly requested tasks we think will be popular with users,” Guillame-Bert said. “We hope to get feedback from this public launch to prioritize and design those tasks.”
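The "predict missing values" task maps naturally onto decision forests: train on the rows whose target cell is filled, then predict the blanks. The add-on's actual engine is Yggdrasil Decision Forests; as a purely illustrative stand-in, here is a tiny from-scratch ensemble of one-split decision stumps on a made-up churn spreadsheet (the column names and all data are invented for the example):

```python
from collections import Counter

# Invented toy "spreadsheet": (tenure_years, monthly_spend) -> churned label.
rows = [
    (1, 20, "yes"), (1, 35, "yes"), (2, 30, "yes"),
    (5, 80, "no"), (6, 90, "no"), (7, 60, "no"),
]

def majority(labels, fallback):
    return Counter(labels).most_common(1)[0][0] if labels else fallback

def train_stump(data, feat, thresh):
    """One-split tree: majority label on each side of `feat <= thresh`."""
    overall = majority([r[2] for r in data], None)
    left = majority([r[2] for r in data if r[feat] <= thresh], overall)
    right = majority([r[2] for r in data if r[feat] > thresh], overall)
    return feat, thresh, left, right

def predict(forest, x):
    """Each stump votes; the forest returns the majority vote."""
    votes = [left if x[f] <= t else right for f, t, left, right in forest]
    return Counter(votes).most_common(1)[0][0]

# Enumerate every (feature, threshold) stump the data allows.
forest = [train_stump(rows, f, t) for f in (0, 1)
          for t in sorted({r[f] for r in rows})]

print(predict(forest, (6, 85)))  # a long-tenure, high-spend row with a blank label
```

A real decision forest grows deeper randomized trees rather than enumerated stumps, but the workflow is the one the add-on wraps behind its questionnaire: fit on filled rows, vote, fill in the blank cell.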
"
15296
2022
"Intel unveils real-time deepfake detector, claims 96% accuracy rate | VentureBeat"
"https://venturebeat.com/ai/intel-unveils-real-time-deepfake-detector-claims-96-accuracy-rate"
"Intel unveils real-time deepfake detector, claims 96% accuracy rate On Monday, Intel introduced FakeCatcher , which it says is the first real-time detector of deepfakes — that is, synthetic media in which a person in an existing image or video is replaced with someone else’s likeness. Intel claims the product has a 96% accuracy rate and works by analyzing the subtle “blood flow” in video pixels to return results in milliseconds. Ilke Demir, senior staff research scientist in Intel Labs, designed FakeCatcher in collaboration with Umur Ciftci from the State University of New York at Binghamton. The product uses Intel hardware and software, runs on a server and interfaces through a web-based platform. 
Intel’s deepfake detector is based on PPG signals Unlike most deep learning-based deepfake detectors, which look at raw data to pinpoint inauthenticity, FakeCatcher is focused on clues within actual videos. It is based on photoplethysmography, or PPG, a method for measuring the amount of light that is absorbed or reflected by blood vessels in living tissue. When the heart pumps blood, it goes to the veins, which change color. “You cannot see it with your eyes, but it is computationally visible,” Demir told VentureBeat. “PPG signals have been known, but they have not been applied to the deepfake problem before.” With FakeCatcher, PPG signals are collected from 32 locations on the face, she explained, and then PPG maps are created from the temporal and spectral components. “We take those maps and train a convolutional neural network on top of the PPG maps to classify them as fake and real,” Demir said. “Then, thanks to Intel technologies like [the] Deep Learning Boost framework for inference and Advanced Vector Extensions 512, we can run it in real time and up to 72 concurrent detection streams.” Detection increasingly important in face of growing threats Deepfake detection has become increasingly important as deepfake threats loom, according to a recent research paper from Eric Horvitz, Microsoft’s chief science officer. These include interactive deepfakes, which offer the illusion of talking to a real person, and compositional deepfakes, where bad actors create many deepfakes to compile a “synthetic history.” And back in 2020 , Forrester Research predicted that costs associated with deepfake scams would exceed $250 million. Most recently, news about celebrity deepfakes has proliferated. 
There’s the Wall Street Journal coverage of Tom Cruise, Elon Musk and Leonardo DiCaprio deepfakes appearing unauthorized in ads, as well as rumors about Bruce Willis signing away the rights to his deepfake likeness (not true). On the flip side, there are many responsible and authorized use cases for deepfakes. Companies such as Hour One and Synthesia are offering deepfakes for enterprise business settings — for employee training, education and ecommerce, for example. Or, deepfakes may be created by users such as celebrities and company leaders who want to take advantage of synthetic media to “outsource” to a virtual twin. In those cases, there is hope that a way to provide full transparency and provenance of synthetic media will emerge. Demir said that Intel is conducting research but it is only in its beginning stages. “FakeCatcher is a part of a bigger research team at Intel called Trusted Media, which is working on manipulated content detection — deepfakes — responsible generation and media provenance,” she said. “In the shorter term, detection is actually the solution to deepfakes — and we are developing many different detectors based on different authenticity clues, like gaze detection.” The next step after that will be source detection, or finding the GAN model that is behind each deepfake, she said: “The golden point of what we envision is having an ensemble of all of these AI models, so we can provide an algorithmic consensus about what is fake and what is real.” History of challenges with deepfake detection Unfortunately, detecting deepfakes has been challenging on several fronts. According to 2021 research from the University of Southern California, some of the datasets used to train deepfake detection systems might underrepresent people of a certain gender or with specific skin colors. This bias can be amplified in deepfake detectors, the coauthors said, with some detectors showing up to a 10.7% difference in error rate depending on the racial group. 
And in 2020 , researchers from Google and the University of California at Berkeley showed that even the best AI systems trained to distinguish between real and synthetic content were susceptible to adversarial attacks that lead them to classify fake images as real. In addition, there is the continuing cat-and-mouse game between deepfake creators and detectors. But Demir said that at the moment, Intel’s FakeCatcher cannot be outwitted. “Because the PPG extraction that we are using is not differentiable, you cannot just plug it into the loss function of an adversarial network, because it doesn’t work and you cannot backpropagate if it’s not differentiable,” she said. “If you don’t want to learn the exact PPG extraction, but want to approximate it, you need huge PPG datasets, which don’t exist right now — there are [datasets of] 30-40 people that are not generalizable to the whole.” But Rowan Curran, AI/ML analyst at Forrester Research, told VentureBeat by email that “we are in for a long evolutionary arms race” around the ability to determine whether a piece of text, audio or video is human-generated or not. “While we’re still in the very early stages of this, Intel’s deepfake detector could be a significant step forward if it is as accurate as claimed, and specifically if that accuracy does not depend on the human in the video having any specific characteristics (e.g. skin tone, lighting conditions, amount of skin that can be seen in the video),” he said.
"
15297
2022
"Meta layoffs hit an entire ML research team focused on infrastructure | VentureBeat"
"https://venturebeat.com/ai/meta-layoffs-hit-entire-ml-research-team-focused-on-infrastructure"
"Meta layoffs hit an entire ML research team focused on infrastructure Mark Zuckerberg, CEO of Meta. After Wednesday’s Meta layoffs, which cut 11,000 employees, CEO Mark Zuckerberg publicly shared a message to Meta employees that signaled, to some, that those working in artificial intelligence (AI) and machine learning (ML) might be spared the brunt of the cuts. “We’ve shifted more of our resources onto a smaller number of high priority growth areas — like our AI discovery engine, our ads and business platforms, and our long-term vision for the metaverse,” Zuckerberg wrote. However, a Meta research scientist who was laid off tweeted that he and the entire research organization called “Probability,” which focused on applying machine learning across the infrastructure stack, were cut. 
The team had 50 members, not including managers, the research scientist, Thomas Ahle, said, tweeting : “19 people doing Bayesian Modeling, 9 people doing Ranking and Recommendations, 5 people doing ML Efficiency, 17 people doing AI for Chip Design and Compilers. Plus managers and such.” Another member of the team, a senior software engineer named Emily McMilin, responded by tweeting, “It took me almost 7 years at Meta to find a team as amazing as Probability.” In his letter to employees (which came just a few weeks after Meta shares plummeted after its Q3 earnings call), Zuckerberg did note that he is currently in the middle of a “thorough review” of infrastructure spending. “As we build our AI infrastructure, we’re focused on becoming even more efficient with our capacity,” he wrote. “Our infrastructure will continue to be an important advantage for Meta, and I believe we can achieve this while spending less.” According to Meta’s Probability web page , the team makes “it radically easier for engineers to adopt machine learning techniques by deeply integrating machine learning into Facebook’s programming languages, developer tooling, and infrastructure.” Commenting on life after the Meta layoffs, Ahle added that after a year and a half on the Meta team, “I hope to stay in the Bay Area a while longer, if anyone needs some algorithms.”
"
15298
2022
"Will OpenAI's DALL-E 2 kill creative careers? | VentureBeat"
"https://venturebeat.com/ai/openai-will-dall-e-2-kill-creative-careers"
"Will OpenAI’s DALL-E 2 kill creative careers? Last week, OpenAI announced it would expand beta access to DALL-E 2 , its powerful image-generating AI solution, to over one million users via a paid subscription model. It also offered those users full usage rights to commercialize the images they create with DALL-E , including the right to reprint, sell, and merchandise. The announcement sent the tech world buzzing, but it mostly amounted to gleeful Twitter feeds filled with the results of random DALL-E prompts, from “steampunk Jesus DMT trip under an electron microscope” to “dark wizard using a magical smartphone to cast spells.” But a variety of questions, one leading to the next, seem to linger beneath the surface. 
OpenAI on commercial DALL-E use For one thing, what does the commercial use of DALL-E’s AI-powered imagery mean for creative industries and workers – from graphic designers and video creators to PR firms, advertising agencies and marketing teams? Should we imagine the wholesale disappearance of, say, the illustrator? According to OpenAI , the answer is no. DALL-E is a tool that “enhances and extends the creative process,” an OpenAI spokesperson told VentureBeat. Much like an artist would look at different artworks for inspiration, DALL-E can help an artist come up with creative concepts. “What we’ve heard from artists and users to date is that it takes human direction to generate a good representation of the idea,” the spokesperson said. But how can someone who uses DALL-E 2 to create an image attest that it is their own work? After all, the person using DALL-E 2 is simply entering a prompt. How can the results of that prompt be their own? If they are allowed to sell those works commercially, are they really the artist? OpenAI insists that DALL-E creates original images, saying: Similar to how we learn as kids, DALL-E 2 has learned the relationship between images and the text used to describe them. As an example, DALL-E can learn what the city of Paris looks like from photos of Paris, including the Eiffel Tower and the Seine river. If you give DALL-E 2 the prompt “Paris,” it will generate a unique, original image of Paris based on what it has learned about the city. Creatives respond to OpenAI’s increased access Overall, the creatives who VentureBeat reached out to seem to be taking the appearance of DALL-E on the scene in stride – and exploring the tool’s potential to boost productivity and efficiency, and take advantage of its creative assistance. 
“For enterprise clients, this technology can provide a vehicle to get from idea to concept and then help to refine the concept much faster,” said Andy Martinus, global head of innovation at London-based public relations firm, Team Lewis. However, he emphasized that it can’t replace the ideas or creative direction. “For artists and marketers, while there might be skepticism initially, there is also an opportunity,” he said. “Creators can use the tool to build out their initial ideas and to create variations of an existing design or idea, [which] provides a greater level of creative control.” Meghan Goetz, director of marketing at digital agency Crowd Favorite, points out that enterprise brand clients often have strict brand guides and user personas that require an in-house marketing and design team. “For these teams, DALL-E can be utilized to create new, unique or custom stock media, which could be great for campaigns that require specific design styles,” she said. “It could be a great tool for prototyping or inspiration for design assets, while modifying and editing images can be a great way to utilize these tools to save time and money.” Expanded access to DALL-E is an opportunity for designers to update their workflows, said Juan Pablo Madrid, senior director of design innovation at New Orleans-based creative agency Online Optimism. He said he considers it similar to the widely adopted AI-powered algorithms that have simplified image processing in tools like Adobe Photoshop. “Some examples I have seen from other designers are using DALL-E 2 to create photorealistic mockups of brand materials or creating original blog post images,” he added. Increased competition could result from DALL-E use But while the commercial use of DALL-E 2 may expand creativity and provide artists with more options in creating and expanding their markets, it may create more competition in the creative space, cautioned Baruch Labunski, CEO at Rank Secure. 
“There is going to be a flood of creative work stemming from this, which can be good or bad, depending on your agency and your location,” he said. The advantage, he explained, is for small marketing agencies or small businesses that could, with the subscription, produce highly professional imaging that makes them more competitive with larger firms or businesses while keeping costs in check. It would also open up more opportunities for freelancers because they, too, could compete. The disadvantages, however, are increased competition in the creative space that could drive down the prices of creative work and marketing, especially in larger, urban areas where there is already heavy competition. “I don’t see it limiting jobs in the space,” he said. “I see it as creating more jobs and that will also mean more competition.”

Issues of DALL-E imagery ownership

According to OpenAI’s spokesperson, user feedback found that full usage rights are what creators want. OpenAI, however, retains ownership of the original image “primarily so that we can better enforce our content policy.” But creative workers find the issues around ownership and copyright to be unclear. “Commercially, there are questions to be answered around ownership of the imagery that tools like DALL-E 2 create – is the image owned by DALL-E 2 or by the creative that directed it?” said Martinus. “If [OpenAI] owns it, do you buy the usage rights, and can others use the image as well, as with stock images? Could this be a longer-term alternative to stock imagery?” Goetz agrees that ownership rights seem “a bit hazy,” pointing out that when it comes to working for specific brands, “they tend to avoid any uncertainty when it comes to image and design asset licenses.” Madrid said he would be hesitant to adopt tools like DALL-E 2 for high-value client work, “… considering that users do not have an exclusive copyright to any image generations and cannot, therefore, transfer it to a client.
So, I would not advise ad agencies to consider getting rid of their designers unless they’re prepared for potential legal battles over produced work.” However, he suggests they might want to reevaluate pricey stock photo subscriptions. “The price-point and ability to create virtually any image from a text prompt is pretty attractive,” he said.

Hope for an artist-friendly future, says OpenAI

OpenAI points out that artists and creative professionals are already using DALL-E in a wide range of projects. “Our hope is that DALL-E can be used by artists, designers and photographers as a tool to help with the creative process,” the OpenAI spokesperson said. “We have seen AI be a good tool for people in the creative space. For example, as photo editing software has become more powerful and accessible, it has allowed more people to enter the photography field. In recent years, we’ve also seen artists use AI to create new kinds of art.” Martinus emphasized that DALL-E 2 and other tools should not be seen as a threat to the creative field. “People tend to ‘hack’ tools and use them for tasks beyond their original intention,” he said. “I expect the same with DALL-E 2. People will use it, but differently than we expect.” Overall, Goetz added that she is not seeing full adoption of these tools – yet. “Many clients and projects demand the expertise and experience of the human aspect when it comes to final production,” she said.

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles!
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,299
2,022
"Why AI needs a steady diet of synthetic data | VentureBeat"
"https://venturebeat.com/ai/why-ai-needs-a-steady-diet-of-synthetic-data"
"Why AI needs a steady diet of synthetic data

[Image: A sample of Parallel Domain’s synthetic data showing a map view of its virtual world capabilities.]

Artificial intelligence (AI) may be eating the world as we know it, but experts say AI itself is also starving — and needs to change its diet. One company says synthetic data is the answer. “Data is food for AI, but AI today is underfed and malnourished,” said Kevin McNamara, CEO and founder of synthetic data platform provider, Parallel Domain, which just raised $30 million in a series B round led by March Capital. “That’s why things are growing slowly. But if we can feed that AI better, models will grow faster and in a healthier way. Synthetic data is like nourishment for training AI.” Research has shown that about 90% of AI and machine learning (ML) deployments fail.
A Datagen report from earlier this year pointed out that a lot of failure is due to the lack of training data. It found that 99% of computer vision professionals say they have had an ML project axed specifically because of the lack of data to see it through. Even the projects that aren’t fully canceled for lack of data experience significant delays, knocking them off track, 100% of respondents reported. In that vein, Gartner predicts synthetic data will increasingly be used as a supplement for AI and ML training purposes. The research giant projects that by 2024 synthetic data will be used to accelerate 60% of AI projects. Synthetic data is generated by machine learning algorithms that ingest real data to train on behavioral patterns and create simulated data that retains the statistical properties of the original dataset. The resulting data replicates real-world circumstances, but unlike standard anonymized datasets, it’s not vulnerable to the same flaws as real data.

Pulling AI out of the ‘Stone Age’

It may sound unusual to hear that a technology as advanced as AI is stuck in a “Stone Age” of sorts, but that’s what McNamara sees — and without adoption of synthetic data, it will stay that way, he says. “Right now AI development is kind of the way computer programming was in the ‘60s or ‘70s when people used punch card programming — a manual, labor-intensive process,” he said. “Well, the world eventually moved away from this and to digital programming. We want to do that for AI development.” The three biggest bottlenecks keeping AI in the Stone Age are the following, according to McNamara: Collecting real-world data — which is not always feasible.
Even for something like jaywalking, which happens fairly often in cities around the world, if you need millions of examples to train your algorithm, that quickly becomes unattainable for companies to go out and get from the real world. Labeling — which often requires thousands of hours of human time and can be inaccurate because, well, humans make errors. Iterating on the data once it is labeled — which requires you to adjust sensor configurations etc. and then apply it to actually begin to train your AI. “That whole process is so slow,” McNamara said. “If you can change those things really fast, you can actually discover better setups and better ways to develop your AI in the first place.”

Enter stage right: Synthetic data

Parallel Domain works by generating virtual worlds based on maps, which it dubs “digital cousins” of real-world scenarios and geographies. These worlds can be altered and manipulated to, for instance, have more jaywalking or rain, to aid with training autonomous vehicles. Because the worlds are digital cousins and not digital twins, customization can simulate the sometimes harder-to-obtain — but essential for training — data that companies normally would have to go out and get themselves. The platform allows users to tailor it to their needs via an API, so they can move or manipulate factors precisely the way they want. This accelerates the AI training process and removes roadblocks of time and labor. The company claims that in a matter of hours it can provide training datasets that are ready for its customers to use — customers that include the Toyota Research Institute, Google, Continental and Woven Planet. “Customers can go into the simulated world and make things happen or pull data from that world,” McNamara said.
“We have knobs for different kinds of categories of assets and scenarios that could happen, as well as ways for customers to plug in their own logic for what they see, where they see it and how those things behave.” Then, customers need a way to pull data from that world into the configuration that matches their setup, he explained. “Our sensor configuration tools and label configuration tools allow us to replicate the exact camera setup or the exact lidar and radar and labeling setup that a customer would see,” he said.

Synthetic data, generative AI

Not only is synthetic data useful for AI and ML model training, it can be applied to make generative AI — an already rapidly growing use of the technology — develop even faster. Parallel Domain is eyeing the field as the company enters 2023 with fresh capital. It hopes to multiply the data that generative AI needs to train, so it can become an even more powerful tool for content creation. Its R&D team is focusing on the variety and detail in the synthetic data simulations it can provide. “I’m excited about generative AI in our space,” McNamara said. “We’re not here to create an artistic interpretation of the world. We’re here to actually create a digital cousin of the world. I think generative AI is really powerful in looking at examples of images from around the world, then pulling those in and creating interesting examples and novel information inside of synthetic data. Because of that, generative AI will be a large part of the technology advancements that we’re investing in for the coming year.” The value of synthetic data isn’t limited to AI. Given the vast amount of data needed to create realistic virtual environments, it’s also the only practical approach to move the metaverse forward. Parallel Domain is part of the fast-growing synthetic data startup sector, which Crunchbase previously reported is seeing a swath of funding.
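The generation mechanism the article describes (an algorithm ingests real data, learns its statistical properties, then samples simulated records) can be illustrated with a deliberately minimal sketch. This toy fits a single Gaussian; it is not Parallel Domain's engine, which builds full simulated worlds, and every name and number in it is illustrative:

```python
import numpy as np

def fit_and_sample(real_data, n_samples, seed=0):
    """Toy synthetic-data generator: fit a Gaussian to the real
    dataset and sample new records that preserve its mean and
    covariance, i.e. its basic statistical properties."""
    rng = np.random.default_rng(seed)
    mean = real_data.mean(axis=0)
    cov = np.cov(real_data, rowvar=False)
    return rng.multivariate_normal(mean, cov, size=n_samples)

# "Real" data: two correlated features, e.g. vehicle speed vs. gap distance.
rng = np.random.default_rng(42)
real = rng.multivariate_normal([30.0, 50.0], [[4.0, 3.0], [3.0, 9.0]], size=5000)

# Generate twice as many synthetic records as we had real ones.
synthetic = fit_and_sample(real, n_samples=10000)
```

No row of the output is copied from the input; the synthetic records are freshly sampled, yet their mean and covariance track the originals, which is why such data can stand in for hard-to-collect real examples during training.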
Datagen, Gretel AI and Mostly AI are some of its competitors that have also raised multiple millions in the last year. "
15,300
2,023
"23 AI predictions for the enterprise in 2023 | VentureBeat"
"https://venturebeat.com/ai/23-ai-predictions-for-the-enterprise-in-2023"
"23 AI predictions for the enterprise in 2023

[Image by Canva]

It’s that time of year again, when artificial intelligence (AI) leaders, consultants and vendors look at enterprise trends and make their predictions. After a whirlwind 2022, it’s no easy task this time around. You may not agree with every one of these — but in honor of 2023, these are 23 top AI and ML predictions experts think will be spot-on for the coming year:

1. AI will be at the core of connected ecosystems

“In 2023, we’re going to see more organizations start to move away from deploying siloed AI and ML applications that replicate human actions for highly specific purposes and begin building more connected ecosystems with AI at their core.
This will enable organizations to take data from throughout the enterprise to strengthen machine learning models across applications, effectively creating learning systems that continually improve outcomes. For enterprises to be successful, they need to think about AI as a business multiplier, rather than simply an optimizer.” — Vinod Bidarkoppa, CTO of Sam’s Club and SVP of Walmart

2. Generative AI will transform enterprise applications

“The hype about generative AI becomes reality in 2023. That’s because the foundations for true generative AI are finally in place, with software that can transform large language models and recommender systems into production applications that go beyond images to intelligently answer questions, create content and even spark discoveries. This new creative era will fuel massive advances in personalized customer service, drive new business models and pave the way for breakthroughs in healthcare.” — Manuvir Das, senior vice president, enterprise computing, Nvidia

3. AI will completely transform security, risk and fraud

“We’re seeing AI and powerful data capabilities redefine the security models and capabilities for companies. Security practitioners and the industry as a whole will have much better tools and much faster information at their disposal, and they should be able to isolate security risks with much greater precision. They’ll also be using more marketing-like techniques to understand anomalous behavior and bad actions. In due time, we may very well see parties using AI to infiltrate systems, attempt to take over software assets through ransomware and take advantage of the cryptocurrency markets.” – Ashok Srivastava, senior vice president and chief data officer, Intuit

4.
Open source ML tools will gain greater market share

“Next year teams that focus on ML operations, management and governance will have to do more with less. Because of this, businesses will adopt more off-the-shelf solutions because they are less expensive to produce, require less research time and can be customized to fit most needs. MLOps teams will also need to consider open-source infrastructure instead of getting locked into long-term contracts with cloud providers. Open source delivers flexible customization, cost savings and efficiency. Especially with teams shrinking across tech, this is becoming a much more viable option.” — Moses Guttman, CEO, ClearML

5. Deep learning opportunities will boost demand for GPUs

“The biggest source of improvement in AI has been the deployment of deep learning — and especially transformer models — in training systems, which are meant to mimic the action of a brain’s neurons and the tasks of humans. These breakthroughs require tremendous compute power to analyze vast structured and unstructured datasets. Unlike CPUs, graphics processing units (GPUs) can support the parallel processing that deep learning workloads require. That means in 2023, as more applications founded on deep learning technology emerge to do everything from translating menus to curing disease, demand for GPUs will continue to soar.” — Nick Elprin, CEO, Domino Data Lab

6. AI will create meaningful coaching experiences

“Modern AI technology is already being used to help managers, coaches and executives with real-time feedback to better interpret inflection, emotion and more, and provide recommendations on how to improve future interactions. The ability to interpret meaningful resonance as it happens is a level of coaching no human being can provide.” — Zayd Enam, CEO, Cresta

7. Geopolitical shifts will slow AI adoption

“As fear and protectionism create barriers to data movement and processing locations, AI adoption will slow down.
Macroeconomic instability, including rising energy costs and a looming recession, will hobble the advancement of AI initiatives as companies struggle just to keep the lights on.” — Rich Potter, CEO, Peak

8. The role of AI and ML engineers will become mainstream

“Since model deployment, scaling AI across the enterprise, reducing time to insight and reducing time to value will become the key success criteria, AI/ML engineers will become critical in meeting these criteria. Today a lot of AI projects fail because they are not built to scale or [to] integrate with business workflows.” — Nicolas Sekkaki, GM of applications, Data and AI, Kyndryl

9. Multi- and hybrid-cloud MLOps and interoperability will be key

“As the AI/ML market continues to flood with new solutions, as evident by the volume of startups and VC capital deployed in the space, enterprises have found themselves with a collection of niche, disparate tools at their disposal. In 2023, enterprises will be more conscious of selecting solutions that will be more interoperable with the rest of their ecosystem, including their on-premises footprint and across cloud providers (AWS, Azure, GCP). Additionally, enterprises will gravitate towards a handful of leading solutions as the disparate tools mature and come together in bundles as standalone solutions.” — Anay Nawathe, principal consultant, ISG

10. Advanced ML will enable no-code AI

“Advanced machine learning technologies will enable no-code developers to innovate and create applications never seen before. This evolution may pave the way for a new breed of development tools. In a likely scenario, application developers will ‘program the application’ by describing their intent, rather than describing the data and the logic as they’d do with low-code tools of today.” — Esko Hannula, SVP of product management, Copado

11.
With spending down, AI will shift to practical applications

“This past year was filled with incredibly impressive technological advancements, popularized by ChatGPT, DALL-E 2, Galactica and Facebook’s Make-A-Video. These massive models were made possible largely due to the availability of endless volumes of training data, and huge compute and infrastructure resources. Heading into 2023, funding for true blue-sky research will slow down as organizations become more conservative in spending to brace for the looming recession and will shift from investing in fundamental research to more practical applications. With more companies becoming increasingly frugal to mitigate this imminent threat, we can anticipate increased use of pre-trained models and more focus on applying the advancements from previous years to more concrete applications.” — John Kane, head of signal processing and machine learning, Cogito

12. ChatGPT will change the contact center, but not the way you think

“Chatbots are the obvious application for ChatGPT, but they are probably not going to be the first ones. First, ChatGPT today can answer questions, but it cannot take actions. When a user contacts a brand, they sometimes just want answers, but often they want something done — process a return, or cancel an account, or transfer funds. Secondly, when used to answer questions, ChatGPT can answer based on knowledge [found] on the internet. But it doesn’t have access to knowledge which is not online. Finally, ChatGPT excels at generation of text, creating new content derived from existing online information. When a user contacts a brand, they don’t want creative output — they want immediate actions. All of these issues will get addressed, but it does mean that the first use case is probably not chatbots.” — Jonathan Rosenberg, CTO, Five9

13. AI will drive the future of customer experience

“Digital engagement has become the default rather than the fallback, and every interaction counts.
While the emergence of automation initially resolved basic FAQs, it’s now providing more advanced capabilities: personalizing interactions based on customer intent, empowering people to take action and self-serve, and making predictions on their next best action. “The only way for businesses to scale a VIP digital experience for everyone is with an AI-driven automation solution. This will become a C-level priority for brands in 2023, as they determine how to evolve from a primarily live agent-based interaction model to one that can be primarily serviced through automated interactions. AI will be necessary to scale operations and properly understand and respond to what customers are saying, so brands can learn what their customers want and plan accordingly.” — Jessica Popp, CTO of Ada

14. AI model marketplaces will emerge

“Coming soon are industry-specific AI model marketplaces that enable businesses to easily consume and integrate AI models in their business without having to create and manage the model lifecycle. Businesses will simply subscribe to an AI model store. Think of the Apple Music store or Spotify for AI models broken down by industry and data they process.” — Bryan Harris, executive vice president and chief technology officer, SAS

15. Explainability will create more trustworthy AI

“As individuals continue to worry about how businesses and employers will use AI and machine learning technology, it will become more important than ever for companies to provide transparency into how their AI is applied to worker and finance data. Explainable AI will increasingly help to advance enterprise AI adoption by establishing greater trust. More providers will start to disclose how their machine learning models lead to their outputs (e.g. recommendations) and predictions, and we’ll see this expand even further to the individual user level with explainability built right into the application being used.” — Jim Stratton, CTO, Workday

16.
2023 will be a major year for federated learning

“Federated learning is a machine learning technique that can be used to train machine learning models at the location of data sources, by only communicating the trained models from individual data sources to reach a consensus for a global model. Therefore instead of using the traditional approach of collecting data from multiple sources to a centralized location for model training, this technique learns a collaborative model. Federated learning addresses some of the major issues that prevail in the current machine learning technique, such as data privacy, data security, data access rights and access to data from heterogeneous sources.” — David Murray, chief business officer, Devron

17. NLP plus object recognition will take search to the next level

“While most people write scrapers today to get data off of websites, natural language processing (NLP) progress has been made where soon you can describe in natural language what you want to extract from a given web page and the machine pulls it for you. For example, you could say, “Search this travel site for all the flights from San Francisco to Boston and put all of them in a spreadsheet, along with price, airline, time and day of travel.” It’s a hard problem, but we could actually solve it in the next year.” — Varun Ganapathi, CTO and co-founder, AKASA

18. Advances are coming in real-time speech translation

“With remote work, boundaries are becoming increasingly blurred. Today it’s common for people to work and converse with colleagues across borders, even if they don’t share a common language. Manual translation can become a hindrance that slows down productivity and innovation. We now have the technology to use communication tools such as Zoom that allows someone in Turkey, for example, to speak their native language but allows someone in the U.S. to hear what they’re saying in English.
This real-time speech translation ultimately helps with efficiency and productivity while also giving businesses more of an opportunity to operate globally.” — Manoj Chaudhary, CTO and SVP of engineering, Jitterbit

19. AI-enabled phishing will grow

“By now, everyone has seen AI-created deepfake videos. They are leveraged for a variety of purposes, ranging from reanimating a lost loved one, disseminating political propaganda or enhancing a marketing campaign. However, imagine receiving a phishing email with a deepfake video of your CEO instructing you to go to a malicious URL. Or an attacker constructing more believable, legitimate-seeming phishing emails by using AI to better mimic corporate communications. Modern AI capabilities could completely blur the lines between legitimate and malicious emails, websites, company communications and videos. Cybercrime AI-as-a-Service could be the next monetized tactic.” — Heather Gantt-Evans, CISO, SailPoint

20. Companies will turn to a hybrid approach to NLP

“In the year ahead, we will see enterprises turn to a hybrid approach to natural language processing combining symbolic AI with ML, which has shown to produce explainable, scalable and more accurate results while leaving a smaller carbon footprint. Companies will expand automation to more complex processes, requiring accurate understanding of documents, and extending their data analytics activities to include data embedded in text and documents. Therefore, investments in AI-based natural language technologies will grow. These solutions will have to be accurate, efficient, environmentally sustainable, explainable and not subject to bias. This requires enterprises to abandon the single-technique approach such as just machine learning (ML) or deep learning (DL) for their intrinsic limitations.” — Luca Scagliarini, chief product officer, Expert.ai

21. AI-generated music will see advancements

“Advancements in AI-generated music will be a particularly interesting development.
Now [that] tools exist that generate visual art from text prompts, these same tools will be improved to do the same for music. There are already models available that use text prompts to generate music and realistic human voices. Once these models start performing well enough that the public takes notice, progress in the field of generative audio will accelerate even further. It’s not unreasonable to think, within the next few years, that AI-generated music videos could become reality, with AI-generated video, music and vocals.” — Ulrik Stig Hansen, president, Encord

22. AI investments will move to fully-productized applications

“There will be less investment within Fortune 500 organizations allocated to internal ML and data science teams to build solutions from the ground up. It will be replaced with investments in fully productized applications or platform interfaces to deliver the desired data analytic and customer experience outcomes in focus. [That’s because] in the next five years, nearly every application will be powered by LLM-based neural network-powered data pipelines to help classify, enrich, interpret and serve. “[But] productization of neural network technology is one of the hardest tasks in the computer science field right now. It is an incredibly fast-moving space that without dedicated focus and exposure to many different types of data and use cases, it will be hard for internal-solution ML teams to excel at leveraging these technologies.” — Amr Awadallah, CEO, Vectara

23. AI will empower more efficient devops

“When it comes to devops, experts are confident that AI is not going to replace jobs; rather, it will empower developers and testers to work more efficiently. AI integration is augmenting people and empowering exploratory testers to find more bugs and issues upfront, streamlining the process from development to deployment.
In 2023, we’ll see already-lean teams working more efficiently and with less risk as AI continues to be implemented throughout the development cycle. “Specifically, AI-augmentation will help inform decision-making processes for devops teams by finding patterns and pointing out outliers, allowing applications to continuously ‘self-heal’ and freeing up time for teams to focus their brain power on the tasks that developers actually want to do and that are more strategically important to the organization.” – Kevin Thompson, CEO, Tricentis "
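Of the predictions above, number 16 (federated learning) describes a concrete algorithm: each data source trains a model locally, and only the trained weights, never the raw data, travel to a server that averages them into a global model. Below is a minimal sketch of that loop, plain federated averaging on a linear model; it is an illustration of the technique, not Devron's or any vendor's implementation, and all names and numbers in it are made up for the example:

```python
import numpy as np

def local_train(weights, X, y, lr=0.1, epochs=5):
    """One client's local update: gradient descent on a linear model.
    Only the updated weights leave the client; (X, y) never does."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server step: average client models, weighted by dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three clients hold private shards drawn from the same process: y = 2*x0 - x1.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
shards = []
for n in (100, 200, 300):
    X = rng.normal(size=(n, 2))
    shards.append((X, X @ true_w))

global_w = np.zeros(2)
for _round in range(20):  # communication rounds
    updates = [local_train(global_w, X, y) for X, y in shards]
    global_w = federated_average(updates, [len(y) for _, y in shards])
# global_w now approximates true_w, yet no raw records were ever pooled.
```

The "consensus" Murray describes is the weighted average in the server step; the privacy property comes from the fact that only `w` crosses the network in each round.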
15,301
2,022
"Healthcare AI is advancing rapidly, so why aren't Americans noticing the progress? | VentureBeat"
"https://venturebeat.com/ai/healthcare-ai-is-advancing-rapidly-so-why-arent-americans-noticing-the-progress"
"Healthcare AI is advancing rapidly, so why aren’t Americans noticing the progress?

There’s no doubt that artificial intelligence (AI) in healthcare had a very successful year. Back in October, the FDA added 178 AI-enabled devices to its list of 500+ AI technologies that are approved for medical use. Topping the list for most approved devices were two massive players in the healthcare technology space: GE Healthcare, with 42 authorized AI devices, and Siemens, with 29. Together, the two companies accounted for nearly 40% of the new devices that made the list. However, despite the leaps and bounds made in the field thanks to these two giants, a recent survey from medical intelligence company Bluesight found that regardless of actual advancements made, around 50% of U.S. adults say they have not seen or experienced improvements in their own care as a result of medical AI advancements.
Why is that? And, when will consumers start to reap the benefits? They already are, but may not realize it since many tools are used by clinicians behind the scenes in radiology and imaging, explained Peter Shen, head of digital health at Siemens Healthineers North America. But increasing personalized medical care by using AI tools is something Siemens is continuing to refine and prioritize. “Our strategy for AI goes beyond imaging and pattern recognition,” Shen said. “The informed diagnostics we derive from AI allow us to design better ways to take care of patients. For us, it is about more than efficiency and more than just decision-making. We want to start to drive personalized medicine toward the patients themselves and create accessibility in medical care.” Behind the curtain and beyond the hype The rapid improvements in AI technology are largely aimed at making healthcare more accessible to the patients it serves by eventually lowering costs, accelerating processes and providing even more accurate, personalized care. The benefits continue to evolve with each iteration of AI-enabled medical technology, and many of these advancements are also being used to assist medical professionals behind the scenes. Radiology and medical imaging continue to be the fastest-growing sector of AI medical advancements, making up more than 85% of the FDA’s total list of 521 devices. Experts from both GE and Siemens say to anticipate further growth in this area — particularly with the potential it holds to change healthcare outcomes and diagnoses for patients. For example, Vara AI, an AI-powered mammography screening platform, was able to detect roughly 40% of all cancers in clinical trials that were initially missed by radiologists. 
This is something patients may not notice being used, but certainly can make an impact on their diagnoses and treatment outcomes. “AI is moving past the hype cycle and becoming mainstream, increasing access to applications that use AI,” said Vignesh Shetty, SVP and general manager of Edison AI and platform at GE Healthcare. “As a result, the face of radiology, imaging and healthcare is changing and AI is becoming one of its distinguishing features.” When it comes to AI, he added, “It is no longer a fear of whether AI will replace healthcare professionals, it is more a matter of healthcare professionals who use AI differentiating themselves from those who do not.” Supporting patients with AI in 2023 Interestingly, Bluesight’s research found that although many patients reported not seeing or experiencing technology advancements directly in their medical care, 84% of patients responded either neutrally or positively on a scale of 1-5 to the statement, “I think that technology is making healthcare more accessible.” This could signal room to improve education and dialogue around AI’s use in healthcare to build trust — something Bluesight’s research also found is lacking. At the same time, AI’s personalization capabilities may be able to aid with that trust, allowing patients to feel more seen, heard and supported as well. “AI technology has the potential to improve the accuracy and efficiency of medical diagnoses, which could help doctors provide better care for their patients,” Shetty said. “In addition, AI-powered tools can help to reduce the amount of time and effort that doctors and other healthcare professionals need to spend on routine tasks, which could free up more time for them to see patients.” As GE heads into 2023, the company is largely focused on patient support. 
Its solutions include the Edison platform for improving efficiency for patient processes; the Critical Care Suite 2.0, which automates high-risk procedures; and its platform Mural, which allows clinicians to access the status of ICU patients to provide care and reduce the time to intervention. “What customers really value is the reduction in uncertainty, whether it be in healthcare or in ridesharing or another industry,” Shetty said. “Now imagine 20 times improvement in the patient and provider experience using intelligent scheduling or reduced MR scan and reporting times. That is the transformation GE Healthcare is after.” As for Siemens, Shen said in 2023 the company plans to double down on using its AI algorithms and technologies to improve pattern recognition, train them on large amounts of clinical data and help derive better outcomes for patients. “Rather than using AI in one clinical space like radiology, we can train it to look at multiple clinical spaces like data from images and also data from lab results or blood work, or even pathology slides from a biopsy,” Shen explained. “If we feed it all into our AI systems and train the AI to find correlations between all of these clinical pieces of data, that will help clinicians make stronger, more informed diagnostic and treatment decisions about their patients.” This type of work with AI technology can also be used to model patient anatomy and eventually lead to the development of anatomically correct, personalized digital twins for patient care, perhaps to test out the effectiveness of certain therapies in very specific digital twin simulations before trying them out on the patient themselves, Shen said. These advancements likely won’t slow down anytime soon and are likely to become technologies that patients interface with more and benefit from further in healthcare. 
Looking ahead, AI solutions in healthcare are projected to skyrocket, with the market expected to be worth $188 billion by 2030, according to Statista. While focusing on growth, Siemens and GE Healthcare both ultimately plan to continue prioritizing outcomes that better serve patients. “Driving outcomes will improve the future of AI and leverage info to help medical professionals make more informed diagnostic decisions and create personalized therapeutic treatments for patients,” Shen said. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,302
2,022
"The dynamic duo: Strong UI and bias-free AI technology  | VentureBeat"
"https://venturebeat.com/ai/the-dynamic-duo-strong-ui-and-bias-free-ai-technology"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest The dynamic duo: Strong UI and bias-free AI technology Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As of 2021, 91.5% of businesses report an ongoing investment in artificial intelligence (AI). As organizations consider their next big AI solution, there are two key components that must be kept top of mind throughout this search: A strong user interface (UI) and bias-free results. Poor UI design is a leading reason why certain technology doesn’t gain high adoption rates within organizations. If the UI of an AI solution is easy to use, delivers strong performance, and has engaging branding and design features, its business impact and usage will skyrocket. But, of course, it doesn’t stop with just looks and usability. Ensuring that organizations implement bias-free AI technology is key for ongoing success. AI algorithms are shaped by the data used to train them. 
That data, and the training process itself, can reflect biased human decisions or historical and social inequities — even if sensitive variables are removed. To maintain and build trust with new AI capabilities, companies must always value and enforce usability and accuracy while continuing to raise their expectations of such technology. The AI technology market takes off As AI continues to evolve, it impacts not only how businesses operate, but how we function as a society. In fact, AI usage is so prevalent that the market size is expected to grow from $86.9 billion in 2022 to $407 billion by 2027. Whether it be the use of AI in intelligent document processing (IDP), fraud detection software, self-driving cars or chatbots, this boom has left the definition of AI convoluted. To keep it simple, AI aims to mimic the human approach to common problems. As time goes on, AI will continue to become smarter as we continue to learn and utilize its capabilities for maximum potential and problem-solving. Today, we have reached a pivotal turning point in AI technological advancements and are able to tackle mundane tasks and overcome challenges in new, efficient, and innovative ways. That said, AI has also become a saturated market: Those looking to solve everyday business problems are now finding it difficult to pinpoint leading solutions. Many businesses are seeking tips around what foundational elements are most important when evaluating AI technologies, and its UI design and bias-free results must stand out. Prioritizing a strong user interface Deep learning is a type of machine learning (ML) based on artificial neural networks. These are mathematical structures loosely inspired by the form and function of the brain, and they are able to learn by example in a way that is similar to the way humans learn. 
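The "learning by example" idea can be made concrete with a toy sketch (illustrative only, not any vendor's model): a single artificial neuron whose weights are nudged toward labeled examples until it reproduces the AND function.

```python
# Toy illustration of learning by example: one neuron (two weights
# plus a bias) corrected whenever it misclassifies a labeled example.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]
b = 0.0
lr = 0.1  # learning rate: size of each corrective nudge

def predict(x):
    s = w[0] * x[0] + w[1] * x[1] + b
    return 1 if s > 0 else 0

for _ in range(25):                  # a few passes over the examples
    for x, target in examples:
        error = target - predict(x)  # nonzero only on a mistake
        w[0] += lr * error * x[0]    # perceptron-style correction
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in examples])  # [0, 0, 0, 1]
```

After a handful of corrections the neuron stops repeating its mistakes on these examples, which is the behavior the article describes at much larger scale.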
Deep learning has evolved explosively over the past years and is constantly pushing the envelope of what’s possible with AI. It’s by far the fastest evolving area of AI, and at this point, non-deep learning areas of AI could be labeled as niche. To explain further, whenever a human corrects an AI mistake, the AI should not repeat the same mistake again. Unfortunately, if usage is limited, AI can no longer learn by example and will ultimately provide diminished results and poor data quality. In fact, poor data quality has cost organizations more than $12 million on a yearly basis and can significantly hurt business operations. Without a friendly UI, employees won’t use the AI solution, and those that do will use it less often than recommended or won’t use it properly. All of this devalues the AI investment because the models are not learning or getting better. For example, AI is being programmed into cars, and the user experience is key to its adoption and success. In particular, lane assist technology holds safety benefits, but the experience can be very startling and off-putting for drivers if they drift into another lane. Depending on the car model, the wheel may automatically move, alarms may go off or flashing may occur on the dashboard. If lane assist technology is overly sensitive or erratic, this can cause great strife for drivers, hurting adoption rates. Ultimately, the technology has stopped gaining the knowledge it needs to improve its capabilities. This goes for all deep learning AI technology. With many still not understanding the full scope of AI and its benefits, a powerful and easy-to-use UI must be at the forefront to ensure an ongoing and successful investment. Removing AI bias from the equation Bias is everywhere, and AI is no exception. AI bias is the underlying prejudice in data that’s used to create AI algorithms, and it is typically — usually unconsciously — built into technology from inception. 
This can happen by models being trained on data that is influenced by repeated human decisions and behaviors, or on data that reflects second-order effects of societal or historical inequities. This can result in discrimination and other social consequences. Data generated by users can also create a feedback loop that leads to bias, and bias can be introduced into data through how it is collected or selected for use. Depending on the solution, AI bias can also lead to algorithms full of statistical correlations that are societally unacceptable or illegal. For example, Amazon recently discovered that its algorithm used for hiring employees was biased against women. The algorithm was based on the number of resumes submitted over the past ten years, and since most of the applicants were men, it was trained to favor men. While this may have been a seemingly harmless oversight, its impact and effect on the advancement of women’s careers was vast. Further, one of the largest issues with biased AI technology is that it can deploy human and societal biases at scale, continuously providing inaccurate results and hurting trust between the end-user and vendor. Ensuring that any potential vendor prioritizes and consistently conducts research on AI bias is the key. Whether it is racial profiling, gender prejudice, recruiting inequity and/or age discrimination, bias is something all companies need to keep top of mind when on the market for new AI-powered technologies. Combining a strong UI with bias-free AI for maximum success When developing a product, bias can play a pivotal role in the success of a UI. Further, AI bias can be improved with a strong UI. For example, a graphic designer might want to include photos that they find engaging and thought-provoking on the landing page of a software platform. That’s a completely biased opinion and not based on any market research or feedback from customers. 
These photos can impact the user experience, and by eliminating photos selected based on personal preference, bias can be avoided. These two components of AI technology can quickly become intertwined, and if organizations are looking for a forward-looking technology partner, it is important to inquire about these elements — and their evolutions — from the outset. While it’s clear that AI technology brings a plethora of value to organizations, there is still much to learn, so having a checklist of the important components to be implemented and remain the focus throughout the technology’s journey is crucial. In other words, finding a solution that not only has a strong UI but proactively works to cut out bias is the key to a long-lasting, highly adopted, trusted, and scalable solution that will take businesses to the next level. Petr Baudis is CTO and chief AI architect at Rossum. "
15,303
2,022
"How conversational AI can remove sensitive information from contact center calls | VentureBeat"
"https://venturebeat.com/business/how-conversational-ai-can-remove-sensitive-information-from-contact-center-calls"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How conversational AI can remove sensitive information from contact center calls Share on Facebook Share on X Share on LinkedIn Every day, individuals call into customer contact centers and provide sensitive information, like credit card numbers, to agents by voice. Now, a conversational artificial intelligence (AI) solution using natural language understanding capabilities offers a way to remove that information from calls, while still passing data through for transactions. This is important because dealing with any sort of personally identifiable information (PII) inevitably involves an array of compliance with security and privacy regulations that can vary based on jurisdiction. There is also a non-trivial risk that sensitive information could potentially be leaked or stolen. In fact, there are known incidents where credit card information provided by voice have been written down by malicious agents, leading to undesirable outcomes. 
“There was an incident where an enterprise customer came to us with a real-life story saying, hey, look, this happened, somebody noted down the credit card numbers and those things were leaked in the open market,” Srini Bangalore, head of AI research at conversational AI vendor Interactions, told VentureBeat. “That led us to start thinking about the technology itself, and how to go about redacting personally identifiable information in real-time with low latency, without impacting user experience.” To that end, Interactions developed a new technology, Trustera, which is generally available as of today. The goal is to use AI and machine learning (ML) techniques to identify PII in real time, redact it from the live voice call and still pass off the information to the underlying digital systems for transactions in an encrypted approach. Taking a hybrid AI approach to conversational AI Interactions is a company that designs conversational AI technology platforms for organizations. Conversational AI technology is commonly associated with human interactions with bots, but that’s not the approach that Interactions has largely taken. Bangalore said that his company has taken what he called a hybrid AI approach. With the hybrid AI model, humans are part of the process alongside conversational AI to help support user experience in a frictionless approach. The Trustera system, for example, is not bot-driven, but is intended to operate in environments where an individual calls into a customer support center and then speaks with a human. Bangalore said that the process of redacting PII in human-led conversations is more complicated than it is for purely bot and digital-driven interactions in an interactive voice response (IVR) type system. He noted that in IVR or bot conversations, the system knows when PII is being transmitted because it is part of the process and initiated by the system. 
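The core redaction step, spotting a card number in a live transcript and masking it while the digits still reach the payment back end, can be sketched in a simplified, text-only form. This is an illustrative sketch, not Trustera's implementation: real systems operate on streaming audio and use trained models rather than a regular expression plus a Luhn check.

```python
import re

def luhn_ok(digits):
    """Standard Luhn checksum used to validate card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Runs of 13-19 digits, optionally separated by spaces or hyphens.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def redact_cards(transcript):
    """Return (redacted transcript, captured card numbers)."""
    captured = []
    def _sub(m):
        digits = re.sub(r"\D", "", m.group())
        if luhn_ok(digits):
            captured.append(digits)   # forwarded to the payment system
            return "[CARD REDACTED]"  # what the agent sees
        return m.group()              # digit run, but not a valid card
    return CARD_RE.sub(_sub, transcript), captured

text = "Sure, my card number is 4111 1111 1111 1111, expiry next May."
print(redact_cards(text)[0])
# Sure, my card number is [CARD REDACTED], expiry next May.
```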
With human-led conversations, it’s not always at the same point in a conversation when PII is requested or transferred. There is also a need to understand what PII is being sent, as well as understanding the actual human speaker. How Trustera conversational AI works to secure PII The AI technology that Interactions has developed for its conversational AI platforms has its roots in capabilities that come from AT&T Bell Labs. In 2014, Interactions acquired speech analysis technologies from AT&T, which is where Bangalore had formerly worked for 18 years. The speech recognition capabilities have steadily improved in the years since, with the integration of natural language understanding (NLU) functionality, which helps to enable the Trustera service. Interactions has trained its model on call information to understand when different human speakers transfer PII. The model isn’t static and is constantly being updated. “We have a self-supervised AutoML approach, where we take the previous day’s calls and we have a notional confidence metric to say these are data elements that we can add back to the model,” Bangalore said. “So we update the model periodically that way as well.” "
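The confidence-gated retraining loop Bangalore describes can be outlined in miniature. The utterances, labels, scores, and threshold below are invented for illustration; only predictions the model was sufficiently confident about flow back into the next training batch.

```python
# Toy sketch of confidence-gated pseudo-labeling: keep only
# yesterday's automatic labels the model was confident about.
CONFIDENCE_FLOOR = 0.9  # notional threshold; real systems tune this

def select_for_retraining(predictions):
    """predictions: (utterance, predicted_label, confidence) triples."""
    return [
        (utterance, label)
        for utterance, label, confidence in predictions
        if confidence >= CONFIDENCE_FLOOR
    ]

yesterday = [
    ("my card is 4111 ...", "PII", 0.97),
    ("what is my balance",  "NOT_PII", 0.95),
    ("uh the number is...", "PII", 0.62),  # too uncertain to trust
]
batch = select_for_retraining(yesterday)
print(batch)  # the 0.62 example is held out rather than trusted
```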
15,304
2,017
"Carnival's Ocean Medallion wearable levels up guest experience on cruises | VentureBeat"
"https://venturebeat.com/2017/01/04/carnivals-ocean-medallion-wearable-levels-up-guest-experience-on-cruises"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Carnival’s Ocean Medallion wearable levels up guest experience on cruises Share on Facebook Share on X Share on LinkedIn Carnival's Ocean Medallion wearable makes it easy to tailor a guest experience on a cruise ship. Carnival is introducing a personalized concierge service for its cruise ship guests through its Ocean Medallion wearable. The wearable will do everything from unlock your cabin door to check you in for a reservation at a restaurant, and it will work both during and after cruise vacations. Arnold Donald, CEO the world’s largest cruise company, introduced the Ocean Medallion at a keynote speech at CES 2017, the big tech trade show in Las Vegas this week. Donald isn’t from a tech company, but his presence at CES shows how technology has spread into all forms of business. The wearable combines a personal digital vacation concierge and an Internet of Things network of intelligent sensors to provide better service for millions of passengers on 10 different cruise lines. 
The Ocean Medallion enables sophisticated wayfinding, food and beverage on demand, and an array of interactive gaming and personalized entertainment experiences. It is the size of a quarter and weighs 1.8 ounces. Above: Ocean Medallion wearable can unlock your room on a cruise ship. The device is powered by proprietary technology developed by Carnival that features an IoT network of intelligent sensors and experiential computing devices. While the Ocean Medallion resembles the wearables used at theme parks, Carnival said its wearables go well beyond those in terms of capability. Nytec designed the hardware system, working with Carnival. The wearables will streamline and expedite the port embarkation and disembarkation process; allow guests to access their staterooms as they approach the door (no keycard required); locate friends and family around the cruise ship; enable guests to purchase merchandise without any transaction, cards, or paper; deliver enhanced dining experiences, based on food and beverage preferences; and power an array of interactive gaming and immersive entertainment experiences. “With this interactive technology platform, we are poised to have our global cruise line brands at the vanguard of forever changing the guest experience paradigm — not just in the cruise industry, but in the larger vacation market and potentially other industries,” said Donald, in a statement. “Our focus is on exceeding guest expectations every single day and consistently delivering great experiences, and we do that extremely well. 
Now we are in prime position to take the guest experience to a level never before considered possible and build on cruising’s popularity and value as the fastest-growing segment of the vacation sector.” The Ocean Medallion pairs with an optional personalized digital concierge called the Ocean Compass — a digital experience portal available online, on smart devices, on kiosks in home ports, on stateroom TVs, on interactive surfaces located throughout the cruise ship, and on devices carried by all guest service hosts. Above: Ocean Medallion lets you interact with interactive displays on a cruise ship. Both innovations combine with an invisible network of proprietary sensors and computing devices embedded throughout the ship, home ports, and destinations that collectively form the Experience Innovation Operating System known as xiOS. The proprietary xiOS uses a guest-centric, IoT approach to enable guests to maximize their experiences in real time based on their choices and preferences — delivering enhanced personalization across every aspect of their cruise vacation. The xiOS seamlessly leverages hardware and software to enable all guest experiences, including access, lodging, food and beverage, entertainment, retail, navigation, payment, and media. The new guest experience platform will debut on Princess Cruises’ Regal Princess in November 2017, followed by Royal Princess and Caribbean Princess in 2018. The new Medallion Class Ocean Vacations will be rolled out over multiple years on the entire Princess Cruises fleet. The guest experience platform is a key element of OCEAN, or One Cruise Experience Access Network, a new effort by Carnival focused on expanding the cruise vacation market through guest experience innovation. This includes the development of original experiential media content and inclusion of new TV programs airing on national TV and the expansion of its portfolio of exclusive and unique destinations. 
Guests can use Ocean Compass to enhance their vacations, select experiences, and create personalized, event-based itineraries. No guest is required to have a personal device to access Ocean Compass. The Ocean Compass also serves as a digital media platform that offers custom experiential media content, as well as access to vacation photos captured during a cruise. Above: You can interact with Carnival’s Ocean Compass portal via a smartphone or other devices. The Ocean Medallion wearable is laser-etched with the guest’s name, ship, and date of sailing and will be provided to passengers at no extra cost. Inside each guest’s Ocean Medallion are multiple communication technologies, including Near Field Communication (NFC) and Bluetooth Low Energy (BLE). “The Ocean Medallion creates an elevated level of service that’s made possible by technology but doesn’t feel like technology,” said John Padgett, chief experience and innovation officer for Carnival, in a statement. “Whether guests are exploring new experience options, having a drink delivered to their seat at the night’s show, or trying their luck gaming while lounging poolside, we will assist our guests wherever they are, while engaging with them in a uniquely personal way. Our mission is to help our guests make the most of every moment of their vacation.” It’s easy to get lost on a big cruise ship, but the Medallion enables point-to-point wayfinding across the ship, thanks to an intelligent navigation assistant — similar to a car or phone GPS app. It also facilitates ordering, menu exploration, and delivery of food and beverage services. Guests can even place orders and view the whereabouts of their food and beverage from anywhere on a ship, as well as placing future orders and having them delivered wherever they plan to be at a designated time. Guests will be able to conveniently and securely make payment for experiences using their Ocean Medallion. 
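Point-to-point wayfinding of this kind is, at its core, a shortest-path search over a graph of shipboard locations. A minimal sketch, with an invented mini-layout and Dijkstra's algorithm standing in for whatever Carnival's navigation assistant actually uses:

```python
from heapq import heappush, heappop

# Invented mini-graph of ship locations; edge weights are walking
# times in seconds. A real deck plan would be far larger.
SHIP = {
    "stateroom": [("elevator", 40), ("pool_deck", 120)],
    "elevator":  [("stateroom", 40), ("theater", 60), ("pool_deck", 50)],
    "pool_deck": [("stateroom", 120), ("elevator", 50)],
    "theater":   [("elevator", 60)],
}

def route(graph, start, goal):
    """Dijkstra's shortest path; returns (total_seconds, stops)."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph[node]:
            if neighbor not in seen:
                heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return None

print(route(SHIP, "stateroom", "theater"))
# (100, ['stateroom', 'elevator', 'theater'])
```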
They can associate any number of credit cards, reward cards, gift cards, and advanced payment services (Apple Pay, PayPal, etc.) with Medallion Pay. Lippincott helped implement Carnival’s vision. “We were thinking pretty much like a startup, although Carnival Corp. is an established company with 50 percent market share,” said Randall Stone, chief experience officer and director of Lippincott’s Innovation Lab, in an email. “They enlisted the help of us to bring their vision to market. That aspect is often the last part of an innovation project. This time it was part of it.” “As an innovation team partner, we worked across teams to blend experience innovation with the expression of the new brand,” Stone added. “Our work is helping to establish a common language across development partners and an experience that never loses sight of the new guest experience.” "
15,305
2,019
"Procter & Gamble shows off surprisingly cool tech in ordinary products | VentureBeat"
"https://venturebeat.com/2019/01/06/procter-gamble-shows-off-surprisingly-cool-tech-in-ordinary-products"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Procter & Gamble shows off surprisingly cool tech in ordinary products Share on Facebook Share on X Share on LinkedIn Gillette's heated razor Procter & Gamble (P&G) is 182 years old, but this week marks its first appearance at CES , the big tech trade show in Las Vegas. The company said its new focus is on leading disruption through technology, and it showed some very cool tech in what would otherwise be very ordinary products. It’s part of the internet of things trend, and P&G is focused on putting sensors and artificial intelligence into things like skin advisers, razors, and blemish removers. I was impressed with the company’s effort to put technology into ordinary products in such a way that it just fades into the woodwork. P&G said it is using technology to transform every aspect of the experience consumers have with its product lines, which range from Gillette to Olay. 
The products that make up P&G’s LifeLab concept include: SK-II Future X Smart Store is a traveling learning lab and pop-up store that uses facial recognition, smart sensors, and computer vision technology to provide next-generation smart skincare counseling. Olay Skin Advisor is an online beauty tool that uses artificial intelligence to analyze your selfies and make custom skincare recommendations. It is trained on 50,000 algorithms that can do things like calculate your age and come up with tailored advice. This launched last year and has already drawn more than 5 million visitors. It’s now getting an upgrade with even better tech. Aria is a connected home fragrance device that can distribute custom levels of scent throughout your home, managed conveniently through a mobile app. Funai Electric provided the scent-jet technology used to distribute the fragrance. Above: P&G’s smart toothbrush. Image Credit: Dean Takahashi Oral-B Genius X with artificial intelligence is a toothbrush that combines the knowledge of thousands of human brushing behaviors to assess individual brushing styles and coach users to achieve better brushing habits. The AI technology tracks where people are brushing in their mouth and offers personalized feedback on the areas that require additional attention. Above: P&G’s Opté. Image Credit: Dean Takahashi Opté. After 10 years of development and over 40 patents, P&G Ventures, a startup studio within P&G, is introducing Opté, which combines proprietary algorithms and printing technology with skincare to scan, detect, and correct imperfections with precision application for visibly flawless skin tone. When you move the blue light device over your face, it zeroes in on the blemishes and treats only those areas, not the skin that surrounds them. Funai Electric provided the inkjet microfluidics technology used in the Opté. Above: Gillette’s heated razor turns orange when it is on. Image Credit: Dean Takahashi Gillette self-heating razor. 
This razor heats itself so you can enjoy a shave with a warm, comfortable blade. It has sensors that shut it down if the heat gets too high. I felt it and it was surprisingly warm, even though it looks like any other fancy metal razor. DS3 is an engineered soap that can clean with a lot less water. In fact, P&G estimates it can save 800 million gallons by eliminating the water wasted each day. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,306
2,019
"Watch Opté remove the spots on Dean Takahashi's face | VentureBeat"
"https://venturebeat.com/2019/01/09/watch-opte-remove-the-spots-on-dean-takahashis-face"
"Watch Opté remove the spots on Dean Takahashi’s face Procter & Gamble surprised us by showing up at CES , the big tech trade show in Las Vegas, for the first time in the company’s 182-year history. And it showed off some cool products, like Opté, which removes spots on your skin, at least temporarily. The product is part of P&G’s effort to bring technology to ordinary products and lead market disruption. Doing so is, of course, a better strategy than allowing yourself to be disrupted. I like how the company makes technology fade into the woodwork — although Opté really is a brand-new product category. After 10 years of development and over 40 patents, P&G Ventures, a startup studio within P&G, has finally introduced the product, which combines proprietary algorithms and printing technology with skincare to scan, detect, and correct imperfections. The company touts its precision application as delivering a visibly flawless skin tone. 
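As a thought experiment (my own illustration, not P&G's disclosed algorithm), the "scan, detect, correct" idea can be sketched as a simple thresholding pass over skin-brightness values, where only pixels noticeably darker than the surrounding tone get marked for treatment:

```python
# Illustrative sketch of a "scan, detect, correct" loop. This is an
# assumption for explanation only, not P&G's actual Opté algorithm:
# the threshold value and grayscale data layout are made up.

def detect_spots(patch, threshold=30):
    """patch: 2D list of grayscale skin-brightness values (0-255).
    Returns a same-shaped boolean mask marking pixels that are darker
    than the patch average by more than `threshold`, i.e., candidate
    blemishes to treat while leaving surrounding skin untouched."""
    flat = [v for row in patch for v in row]
    mean = sum(flat) / len(flat)
    return [[v < mean - threshold for v in row] for row in patch]
```

A real device would work on calibrated sensor data and drive the inkjet head from the resulting mask; the point is simply that treatment is applied only where the mask is true, not across the whole face.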
When you move the blue light device over your face, it zeroes in on blemishes and treats those areas only, not the skin that surrounds it. A tiny inkjet printer deposits picoliter-size drops of the treatment on your skin. I got to use this tech in a demo at the Pepcom party last night, and you can see that it erased some big spots on my skin, without changing my overall skin tone at all. Sadly, when you wash the treatment off the spots come back. Over time, though, they may not look as bad. Funai Electric provided the inkjet microfluidics technology used in the Opté. "
15,307
2,022
"CES 2023: Tips and tricks for the biggest in-person tech trade event | VentureBeat"
"https://venturebeat.com/games/ces-2023-tips-and-tricks-for-the-in-person-techpalooza"
"CES 2023: Tips and tricks for the biggest in-person tech trade event The ice sculpture at CES Unveiled, 2019. I first attended the Consumer Electronics Show back in the 1990s when then-Microsoft CEO Bill Gates gave the opening keynote speeches every year. Las Vegas has changed a lot since then, but some of my advice about the show goes back to those good old days. For instance, it’s always important to wear comfy shoes. I learned that lesson after some blisters during a CES years ago. What’s changed? Well, I’d recommend wearing a mask most of the time at CES 2023, which starts for the press on January 3 and then gets under way for real on January 5 and ends January 8. Much of my advice is not rocket science, but I cooked this up with the knowledge that it will be a hybrid event — something forever changed by the pandemic that forced the show into digital form in 2021. The show took place in person in January 2022, but the Omicron wave of COVID-19 took its toll. Many people reported getting sick and only 45,000 went to the show, far below the 175,212 that showed up in 2019. Since there are new people attending the show every year and many going back for the first time since 2020, I feel obligated to offer my tips and tricks. (But I take no responsibility for bad advice.) 
I also have new tips, like pointing you to the CES app, which came out just recently. CES 2023 is expected to draw 100,000 techies, down some from 175,212 in 2019 and 171,268 in 2020, and way up from 45,000 in 2022, according to an educated guess from Gary Shapiro, CEO of the Consumer Technology Association, the trade group that puts on the show. The whole event will seem a day shorter than usual, partly because New Year’s Day falls on a Sunday and many workers will be taking Monday off. That alone will account for fewer attendees across the entire event. But the event runs through Sunday the 8th. CES 2020 had about 4,500 exhibitors across 2.9 million square feet of space. This year, CES 2023 will likely have close to 3,000 exhibitors across at least 2.1 million square feet of space. Should you go? Well, the show is big. Last year, 3,100 media attended CES 2022, and they wrote 177,000 stories. CES 2020 had an estimated economic impact of $291 million in Las Vegas. Investors have contributed more than $10 billion to date to startups at Eureka Park. Those are some decent reasons to consider the show a must-attend for some folks. On the other hand, everyone should consider the health risks of going to a big event. I still view CES as a bellwether for the tech economy, as no other event spans the entire tech world like it does. Companies want to create a buzz at CES, which is designed to signal products coming in the next year. I find the show a useful way to stay up to speed on the latest technology. If you find the health risk acceptable, then it can still be a valuable way to stay in touch. Apple doesn’t attend the show, but just about every other tech giant does. It’s where the tech industry will be next week, though it’s not so much of a game event these days. 
Sony, however, will be showing up and you can bet they’ll talk more about PlayStation VR 2. And while some of that can make you optimistic, the specter of tech layoffs is hanging over the show. Shapiro hasn’t seen a fall-off in registration, but he acknowledges a “rebalancing in the tech community.” “Tech has been very strong for many years. But there always is a business cycle. It seems like every 10 years — the last one was 2008-2009. In a way there was 2020. Before that you had 2000-2001, and 1990-91,” said Shapiro. “There are recessions. The tech industry is not immune to the impact of the economy overall. But in the long run, the tech industry, consumer technology, so many different flavors of solving human problems, has a great future.” Health protocols This year’s health protocols are different from CES 2022. Of course, the CTA will argue it’s safe. And it says it will “monitor COVID and other health developments across the globe and will follow health safety guidelines from the WHO and CDC, as well as applicable federal, state and local laws.” Before you leave home, CES encourages you to get a flu shot, get up to date with your COVID vaccines and booster shots, and test regularly as the most effective way to detect COVID and isolate if you are infected. CDC guidelines as of September 10, 2022, require international travelers to the U.S. to show proof of vaccination against COVID before boarding a flight to the U.S. CES suggests that you self-test before leaving and upon arriving in Las Vegas. You can find Las Vegas testing sites at NVCovidFighter using the zip code 89109. CES will provide access to rapid antigen tests. If you feel ill onsite at a CES exhibit venue, the CTA will offer free testing at designated medical stations. It also will provide isolation recommendations from the CDC if you test positive. It will also provide masks. 
It recommends you wear a high-quality mask indoors if that makes you more comfortable and it asks that you “respect those around you that choose to do so.” Since 2020, CES exhibits have improved ventilation systems and increased outside air flow. They will also have reduced surface touch points in high-traffic areas. There will also be more space for social distancing. Security screening entry points will have one-way traffic flow. It suggests you avoid shaking hands and use hand sanitizer, which will be available in the venues. Getting your badge and getting into the show You have to work in the tech industry to get into the show. It’s a place for professionals, not tourists, and registration is designed to screen the tourists out. With 100,000 people at the show, you don’t want to get stuck in long lines. You can avoid the first big line by picking up your badge early; just show your ID and your show registration. You can get your CES badge at the baggage claim areas of the airport and most of the venues. There will also be lines getting into the venues. You will have to submit to a bag search at the entrance to all show venues. All bags are subject to search and metal wand screening. In addition, regular attendees can only carry two small bags, each smaller than 12”x17”x6”, into show venues. Attendees are encouraged to consider their bag type and use clear bags (mesh, plastic, vinyl, etc.) to expedite entry. Rolling bags of any size are prohibited, including luggage, carry-ons, laptop and computer bags, and rolling luggage carts. Media with an official media badge are permitted to hand-carry equipment onto show premises in excess of the two-bag restriction. This equipment is subject to search and tagged as approved for entry. If you’ve got a good, light laptop, I suggest you bring it. And maybe bring a backup if you have one. But consider the tradeoffs of having backup stuff and a heavier bag that will kill your back. 
If you do have heavier bags, you can check them at the Las Vegas Convention Center Central Plaza and West Hall exterior areas, the Venetian Expo Level 1 lobby, and the Venetian Ballroom foyer. Peak times for CES crowds The press events start late on January 3 and continue all day on January 4. Lisa Su, CEO of Advanced Micro Devices, will kick off the keynote speeches at 6:30 p.m. on January 4 at the Venetian Palazzo Ballroom, followed by a talk at 8 p.m. at The Pearl Theater at The Palms by BMW chairman Oliver Zipse. The expo will start in places like the Las Vegas Convention Center and the Sands Expo on January 5, when you can expect to see the crowds showing up. John May, CEO of John Deere, will give a keynote on the morning of January 5 at 8:30 a.m. in the Venetian Palazzo Ballroom, and the CTA’s Shapiro will also give a kind of state of the union about tech during that time. Other talks will take place at the Aria’s C Space section, where speakers like Ed Bastian, CEO of Delta Air Lines, Jeremi Gorman, president of global advertising at Netflix, and Francine Li, global head of marketing at Riot Games, will speak. These speakers are more diverse than in the past. And they reflect the fact that technology is moving well beyond tech companies and all major companies are adopting it. When tech fades into the woodwork, like into the giant John Deere combines harvesting wheat, it has reached a new stage. The internet-connected tractors are all about getting more food to people in the world. I’m not sure what to expect on Saturday and Sunday (January 7-8) at the show. CES usually doesn’t last that long, but some people might be arriving on those days just to see the show without crowds. Press days As I mentioned, the press days will start earlier with some digital events, like Nvidia’s 8 a.m. online press event on January 3 and Acer’s 9 a.m. online event the same day. At 4 p.m. 
on January 3, the CTA’s vice president of research will kick off the first CES event at the Mandalay Bay in a press-only event, followed by CES Unveiled at 5 p.m., where scores of companies will be showing off their award-winning stuff for the first time at the show. For the press, Wednesday January 4 is a kind of baptism by fire, thanks to press events starting at 8 a.m. with LG and ending with Sony’s 5 p.m. press event. The opening keynote follows, and then the press moves on to the Pepcom (press only) party at Caesar’s Palace. The rival Showstoppers press party takes place at the Bellagio this year on January 5. This is the day when I need the laptop with the longest battery life. (Yes, that’s still an issue). As noted, Thursday is when the real crowds show up, and you’ll notice it in restaurants, transportation lines, convention halls, casino floors, and at the airport. Let’s hope the weather will be good, in contrast to the torrential rains we saw in 2018, when blackouts took out the main show floor. If you’re leaving the convention center around 6 p.m., you can catch a bus to most of the major hotels. But that’s also the busiest traffic time. You’ll have to walk (perhaps a long way) to designated areas for Uber/Lyft ride-sharing pickup zones. And there’s the monorail to consider as well for some hotels. Getting lost in the maze The 2.1 million-plus square feet of exhibition space will open at 10 a.m. on Thursday January 5. If you’re really ambitious, you could be walking 30,000 steps a day, about 3 to 6 times as much as usual. For me, exhaustion sets in around 20,000 steps. If you can cut some unnecessary walking from your day, that would be wise to do. You can start by getting to know the locations. The LVCC Central Hall is where a lot of the big companies are, such as Samsung, Sony, Canon, Sharp, Nikon, IBM, Panasonic, LG, Bosch, and many more. 
You can walk across a connector from the Central Hall to the South Hall, where there is a mix of big booths, small booths, and meeting rooms (which are way in the back). The South Hall itself is confusing, as it has two levels. South Halls 1 and 2 are on the ground level, with booth numbers ranging from 20000 to 22999 and 25000 to 27999. South Halls 3 and 4 (30000-32999, 35000-37999) are on the upper level, and both are easily reached via the South Hall connector. What’s trending and what’s not The metaverse is going to be a key theme, even though some people in the know were excited about it earlier and aren’t as excited about it now that we have a downturn. But I believe there is a wave of new technologies about to hit that will spread interest in the metaverse across all industries. Will we see evidence of that at CES this year? I honestly don’t know how much of that tech will be ready. But CES is often about marketing hype, and the people who market stuff there know a catchy trend when they see one. Some like Meta CEO Mark Zuckerberg believe that VR will grow into something huge. But others feel like it will be a niche. Apple will likely enable augmented reality to take off on mobile. But that’s not the entirety of the metaverse. Gamers are going to be in expansion mode. Hollywood will promote the sci-fi vision. The industrial and ecommerce industries will be huge. Brands will go where the people are. Altogether, that adds up to a considerable amount of force working in favor of the metaverse. A recession will slow these things down, but I don’t think it will kill the metaverse. I’m going to moderate a panel at CES on Friday January 6 about how gaming will lead the metaverse. It will take place at 3 p.m. at LVCC North / N262 and feature speakers from Area15, Upland, Holoride and Tilt Five. 
The show will have a dedicated space for things like gaming again, but it will also have new areas dedicated to Web3, blockchain and cryptocurrency. We’ll see a variety of immersive virtual experiences, like a project where carmaker Stellantis and Microsoft are teaming up on a showroom in the metaverse. Sony is likely to show off its PSVR 2 product launching on February 22. Various publications have said that HTC will also show off a new headset that is a tool for gaming, entertainment, enterprise and productivity. L’Oreal will be there demoing augmented reality products. Nvidia will hopefully give us an update on the Omniverse tools for building the metaverse for engineers. This year will feature new themes such as mobile tech focused on marine environments as well as a live demo session with the International Space Station. The transportation sections will grow as the march toward autonomous vehicles continues. Shapiro said the show will have an “incredibly strong” health tech category. Shapiro expects we’ll see a lot of food tech, like from a company called Suvie, which cooks food while you’re away and is ready when you get home. Based on the pitches I’m getting, I think we’ll see a lot of tech related to artificial intelligence, health-focused wearables, energy-saving devices, custom 3D printing, the internet of things (IoT), sleep care, elder care, mental care, smart cars, robots, and virtual reality (think Sony and HTC). As far as humanitarian issues go, CES has teamed up with the United Nations to focus on fundamental human rights, like the right to clean air and water, the right to healthcare, security and mobility. Your CES survival tips Many of these tips are recycled from past years, but I’ve gone through and renewed them with my latest info. As I mentioned, it’s hard to get around at CES. You should keep appointments to a half hour, but note that it takes time to walk between venues. You may encounter delays because other people are behind schedule. 
And you may even have trouble finding people at large booths. So it’s good to pad your schedule to account for possible delays and isolate the really important appointments. The CES badges now have photos on them, streamlining identification and making it harder for people to share badges. On your crowded flights, try to travel light. For Southwest, I always check in ahead of boarding, setting an alarm for exactly 24 hours before my flight. Check your baggage if you don’t have to get anywhere quickly. Be prepared for long cab lines and rental car check-in lines. (Services like Uber and Lyft were very useful at CES, particularly as parking is not plentiful enough and the big casinos/hotels now charge $10 per visit at their self-parking garages). I don’t rent a car anymore. I recommend sleep. If the parties are what you care about, there are often party lists that circulate. Here’s one that I’ve seen. Those parties often require unique invitations and RSVPs. On Thursday, Dolby Labs is throwing a concert with Imagine Dragons. Remember to swap phone numbers with the people you are meeting so you can coordinate, particularly as someone is usually held up by the crowds. Incorporate driving and eating times into your calendar, or use a calendar that does that automatically for you. Smartphone reception is better than it used to be, but it’s still probably prone to interference. Text message is usually a decent way to communicate with coworkers. We always seek out the Wi-Fi/5G havens in the press rooms or wherever we can find them. But carry a MiFi or activate a personal hotspot if you can; even hotel internet connections are likely to be stressed to the limit during the show. If you’re responsible for uploading video, thank you for clogging the network for the rest of us. By CES 2023, I hopefully won’t have to complain about this, as 5G networks should theoretically enable faster connection speeds on cellular data. 
If you collect a lot of swag, you can send it home via shipping services instead of carting it on the plane. You should print a map of the exhibit floor or rip one out of the show guide. You should also print your tickets, schedule, and RSVPs for events — or make them easily accessible on your phone. (If someone steals your primary bag, you should have backups in a second bag). You need battery backup for your laptop or smartphone, hand sanitizer, a good camera, ibuprofen, and vitamins. I’ve got an HP Elitebook and a Dynabook laptop this year. Bring a backup for everything, even if you have to leave it in your hotel room this year. I used to recommend taking business cards. But those can carry germs and who wants that now? Now you can swap your LinkedIn credentials with people via your smartphone. If you’re exhibiting, wear your company brand on your shirt. Try very hard to avoid losing your phone. I wear a jacket with zippered pockets so I can put my phone and wallet inside. Make some time to walk the show floor. If the cab/rideshare lines have you frustrated, don’t think about walking to a nearby hotel. Chances are the cab line there is also bad, and the hotels are so huge that a mirage effect makes them look deceptively close. If you have a rental car, try not to get stuck in a traffic jam in a 10-story parking garage. And always mark down where you parked your car on your phone map or paper. Uber and Lyft cars are really the way to travel now. But in past years I found that the pickup at the LVCC (near the Renaissance Hotel) was a traffic logjam. So it can be tough no matter what you do. Schedule your appointments in locations that are near each other, and check exhibitor locations on a CES map. Arrive early for keynotes because the lines are long. Drink lots of water. Get some sleep — you really don’t have to party every night. Don’t miss your flight on the way out. Pack up a bunch of snacks early on to avoid getting stuck in breakfast or lunch lines. 
Take a smartphone with a good camera because what happens in Vegas … gets shared on the internet. As for the notion that you can experience the show in both digital and physical ways, Shapiro said, “We’ve learned that the digital experience complements the live experience in many ways. A lot of the most dedicated digital users of our platform were those that attended CES, who were very busy people, who couldn’t get to all the conferences they wanted to or connect with all the exhibitors. We’ve decided to extend the platform through the end of February, so people going to the show can continue their experience for more than double the time we had in 2022. We’ll also offer a remote option for those who choose not to go to CES.” He added, “What we learned in 2021 and 2022, and even going around the world and talking to our constituencies and having live events in Amsterdam and Paris and New York and elsewhere, is that people want to go and see their customers, to find new customers, to discover new things that they wouldn’t be able to discover online. Serendipity is still a very important factor. The live experience, for many people, is the most important thing.” GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. 
"
15,308
2,022
"How AIaaS (AI-as-a-service) can help democratize AI | VentureBeat"
"https://venturebeat.com/ai/how-ai-as-a-service-can-help-democratize-ai"
"How AIaaS (AI-as-a-service) can help democratize AI When it comes to artificial intelligence (AI) adoption, there is a growing gap between the haves and the have-nots. According to IBM, the global AI adoption rate went up by 4 percentage points in 2022, reaching nearly 35%. However, the study also found that the gap in AI adoption between larger and smaller companies also grew significantly in the past year. Today, larger companies are twice as likely to have actively deployed AI as a part of their business operations as smaller companies, which are more likely to be exploring or not pursuing AI at all due to development cost and scalability issues. “SMEs [small- to medium-sized enterprises] are often plagued with the problem of scaling their operations on account of huge financial implications,” said Dipak Singh, head of data science at Indus Net Technologies. 
“They often don’t want to venture onto the path of AI because they are not sure about the outcome of their AI projects.” AI projects generally take months to develop and mature, bringing a long gestation period and significant expenses. That’s where AI-as-a-service (AIaaS) comes in: It was born out of a desire to democratize AI for all while addressing the growing demand for AI, cognitive computing and large-scale adoption of cloud-based solutions. The global AIaaS market is poised to exceed $41 billion by 2025, nearly five times the size of today’s market, according to NASSCOM. AIaaS is often built on big cloud providers including IBM, SAP SE, Google, AWS, Salesforce, Intel and Baidu, but there are also a handful of startups offering solutions. Most recently, California-based API startup Assembly AI launched an AI-powered API to convert audio or video to text. Offered as an AIaaS model, the API empowers developers by aiding in-model development for transcribing, understanding and analyzing the audio data. AI for all AIaaS, which gained traction beginning in 2020 during the COVID-19 pandemic, works like any other “as-a-service” business model. It is composed of multiple delivery models and offerings including, but not restricted to, off-the-shelf tools for development, testing and deployment, and scaling of AI/ML models; vertical services provisioning such as inference-as-a-service, annotation-as-a-service, and machine learning-as-a-service (MLaaS); and fully outsourced or managed service models. While the definition can be blurry, AIaaS does enable users to harness the power of AI/ML without writing a single line of code and without needing any particular technical expertise. 
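To make the "as-a-service" mechanics above concrete, here is a minimal sketch of how a client typically talks to a hosted transcription service of the kind Assembly AI offers: authenticate with an API key, submit a job that points at hosted audio, then poll until the asynchronous job finishes. The base URL, header, field names, and status values below are illustrative assumptions, not any vendor's exact specification.

```python
# Minimal sketch of the client side of an AIaaS transcription service.
# NOTE: the endpoint, header, field names, and status values here are
# illustrative assumptions, not a specific vendor's documented API.

API_BASE = "https://api.example-aiaas.com/v2"  # hypothetical base URL

def build_transcript_request(audio_url: str, api_key: str) -> dict:
    """Assemble everything an HTTP client needs to submit a job:
    the endpoint URL, an auth header, and a JSON body that points
    at audio already hosted somewhere the service can reach."""
    return {
        "url": f"{API_BASE}/transcript",
        "headers": {"authorization": api_key},
        "json": {"audio_url": audio_url},
    }

def is_terminal(status: str) -> bool:
    """AIaaS jobs are usually asynchronous: the client polls a status
    field until the job either completes or errors out."""
    return status in ("completed", "error")

def poll(fetch_status, max_tries: int = 10) -> str:
    """Call fetch_status() (e.g., an HTTP GET on the job's status URL)
    until a terminal state comes back, then return it."""
    for _ in range(max_tries):
        status = fetch_status()
        if is_terminal(status):
            return status
    raise TimeoutError("job did not finish in time")
```

In practice the dict would be handed to an HTTP library (for example, `requests.post(req["url"], headers=req["headers"], json=req["json"])`), and `fetch_status` would GET the job's status endpoint. The point is that an SME writes a few dozen lines of glue code rather than training or hosting a model.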
“Leveraging AI as a service-based model could potentially be the right approach for those companies who are yet to understand the scope to which AI could be incorporated into their business,” said Tushar Bhatnagar, cofounder and CTO at vidBoard.ai and cofounder and CEO of AIaaS provider Alpha AI. Because AIaaS doesn’t demand large initial investments or massive human resources and involves lower risk, businesses can stay focused on their core competencies while getting access to the capabilities of AI/ML, Bhatnagar said. AIaaS enables SMEs to quickly use pretrained models via plug-and-play mechanisms at a nominal cost. Some of the areas where AIaaS is helping companies include bots and digital assistants, cognitive computing APIs, machine learning frameworks and data labeling. There is no free lunch: AIaaS is no exception AIaaS is a boon for SMEs, but the offering is not flawless. For example, Bhatnagar said that companies need to be aware of issues that could stem from inaccuracy in data insights, algorithmic bias and the “black box” nature of AI. Take algorithmic bias, for example. An AI model encodes a series of instructions (an algorithm, typically created by humans) that a system follows to achieve a particular task. However, if the algorithms are flawed, not optimized for edge cases, or biased, they will provide unfavorable and unreliable results. If not kept in check, the costs of an AI-as-a-service model can also scale up quite fast. In the end, companies might find themselves looking for more complex solutions and customizations, which can be more expensive and necessitate hiring and training more specialized personnel. “When it comes to inaccuracies in data insights, AI programs can only learn from the information we present to them,” said Bhatnagar. “One’s results may be wrong or highly disjointed if the data provided to the program is incomplete, unreliable or lacks structure. 
As a result, AI can only be as smart, useful, or innovative as the data you feed into it.” Lastly, AIaaS often lacks any explainability. While a solution may consistently return accurate findings, it cannot explain how it arrived at that particular conclusion. Tips for navigating the AIaaS landscape The best SME use cases for AIaaS, said Singh, are those “odd and mundane” operational activities that can be automated using AIaaS and do not involve private and confidential data, such as data labeling and classification, or bots and digital assistance. Singh also recommends doing thorough background research about the AIaaS provider to make sure company data will be in safe hands. Finally, companies should also create usage, access and security protocols. For example, if AIaaS is used for bots and digital assistance, what kind of data should the company allow the system to store or share? To what extent can they use it or not use it? What are the rules and regulations for employees moving in and out of the platform? This knowledge needs to be clearly documented and shared with the different stakeholders within the organization, said Singh. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15309
2022
"Top 5 stories of the week: ChatGPT and cybersecurity predictions | VentureBeat"
"https://venturebeat.com/ai/top-5-stories-of-the-week-visions-of-ai-and-security-danced-in-readers-heads"
"Top 5 stories of the week: ChatGPT and cybersecurity predictions While others were shopping and decorating for the holidays, VentureBeat readers didn’t check out for Christmas cheer this week. Rather, they were consuming coverage in two key areas, as reflected in our top 5 stories of the week: AI and security. Sharon Goldman’s coverage of ChatGPT and generative AI captured the two top spots among the list of most-read stories. Goldman talked to Forrester Research’s Rowan Curran about how and why ChatGPT is having an iPhone moment. Like the iPhone did for the smartphone, ChatGPT is changing the public consciousness around AI – generative AI, in particular. Goldman also looks back on her first calendar year at VentureBeat and the barrage of news in areas ranging from generative AI to AI legislation.
Check out how Goldman learned to stop worrying and love AI. Even amid the holiday season, IT leaders can’t sleep on security, as is clear from the traffic Tim Keary and Louis Columbus generated with their prediction stories. Keary dug into cybersecurity predictions from Google and Microsoft, and Columbus looked at security spending forecasts for 2023 and why zero trust and cloud security top the list. Here are the top 5 stories for the week of December 19th. 1. Why ChatGPT is having an iPhone moment (with a unique twist) Is it possible to compare this moment in the evolution of generative AI to any other technology development? According to Forrester Research AI/ML analyst Rowan Curran, it is. “The only thing that I’ve been able to compare it to is the release of the iPhone,” he told VentureBeat. Apple’s iPhone was not the first smartphone, but it buried the competition with its touchscreen, ease of use and introduction of apps that put an entire computing experience in our pockets. The release of the original iPhone in January 2007, followed by the launch of the App Store in July 2008, ushered in a period of historic technological change, Curran explained — when the mass public learned there was an entire universe of creativity and applications they could work with. 2. From DALL-E 2 to ChatGPT, covering AI’s wild year | The AI Beat It was my first week at VentureBeat, in mid-April. OpenAI had just released the new iteration of its text-to-image generator, DALL-E 2; our lead AI writer, Kyle Wiggers, had moved to TechCrunch before I could pick his brain, and I was panicking. I belatedly realized how little I understood about the past decade of progress in AI, from machine learning (ML) and computer vision to natural language processing (NLP). Every beat I’ve ever covered has had a learning curve, of course.
But the AI beat felt like Mount Everest. 3. Google outlines 6 cybersecurity predictions for 2023 It’s no secret that cybercrime is a growth industry. Just last year, the FBI estimated that internet crime cost $6.9 billion. The worse news is that Google’s cybersecurity predictions for 2023 anticipate that this malicious economy will only continue to expand and diversify. 4. Microsoft security leaders make 9 key cybersecurity predictions for 2023 At times the threat landscape looks bleak, but it’s also driving greater collaboration between vendors and organizations. At least that’s what Microsoft security leaders are suggesting in their 2023 cybersecurity predictions. VentureBeat recently connected with some of Microsoft’s top security leaders and researchers, who shared their predictions for 2023. 5. 2023 cybersecurity forecasts: Zero trust, cloud security will top spending Current predictions for cybersecurity spending in 2023 are reinforcing some of 2022’s top trends. Gartner predicts zero-trust network access (ZTNA) will be the fastest-growing network security market segment worldwide. It’s forecast to achieve a 27.5% compound annual growth rate (CAGR) between 2021 and 2026, jumping from $633 million to $2.1 billion worldwide. "
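The ZTNA numbers in story 5 can be sanity-checked with the standard compound annual growth rate formula; a quick check using the article's own figures:

```python
# CAGR = (ending_value / starting_value) ** (1 / years) - 1
start, end, years = 633e6, 2.1e9, 5  # $633M in 2021 -> $2.1B in 2026
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # 27.1%
```

The small gap from the quoted 27.5% is explained by rounding: growing $633 million at exactly 27.5% for five years gives about $2.13 billion, which rounds to the quoted $2.1 billion endpoint.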
15310
2022
"AI will thrive in 3 key areas in 2023, despite economic conditions | VentureBeat"
"https://venturebeat.com/ai/ai-will-thrive-in-3-areas-despite-economic-conditions-experts-say"
"AI will thrive in 3 key areas in 2023, despite economic conditions Some of the biggest tech names have laid off artificial intelligence (AI) and machine learning (ML) employees this fall, including Meta, Twitter and Amazon. In light of that, it would make sense for industry nerves to be high entering 2023, but that’s not the case. Even in the midst of a possible recession, AI experts across several industries told VentureBeat that they expect AI innovation to continue and companies to adjust budgets and priorities accordingly. In fact, these industry leaders resoundingly underscored three areas where AI has thrived over the past year and will continue to grow in 2023: workplace automation and human-centric AI; data-driven AI decision-making; and generative AI use cases. “The U.S.
is knee-deep in a recession, but despite the economic uncertainties that consumers and companies are facing, I predict AI will remain not only resilient but recession-proof,” said Scott Stephenson, CEO at Deepgram, an AI-powered transcription tool. “AI will continue to be central to business in 2023, by cutting costs and increasing innovation. Simply put, AI will help us do more with less.” While some may hear “cutting costs and increasing innovation” as workplace automation and artificial intelligence tools pushing them out of a job, several leaders say that what’s more likely is automation taking over mundane tasks, so employees can focus on more advanced ones. “AI won’t — and shouldn’t — replace humans in the near term,” said Vishal Sikka, founder and CEO of human-centered AI platform Vian AI. “The reality is that AI today, as impressive and as powerful as it is, is nowhere close to human judgment. In 2023, the recognition that too many platforms aren’t designed for humans to use will increase. More and more systems will be designed to amplify human judgment — to aid people and encourage AI symbiosis, rather than seeking to have AI replace the user.” Human-machine interactions need human-centric solutions As artificial intelligence continues to shift how we work and the tools we use for work, several experts say the tools themselves should become more human-centric too — simplifying things in day-to-day workflows, or creating a platform that eases two-way human and technology interactions. “We’ll see the proliferation of user-friendly, non-technical AI systems,” said Zeeshan Arif, founder and CEO at software development company Whizpool.
“They’ll be building more AI systems internally to help them streamline their operations and improve their customer service.” Pieter Buteneers, director of engineering in ML and AI at Sinch, a cloud communications platform, echoed this, telling VentureBeat that heading into the new year, AI will likely move away from keywords as it “progress[es] toward actual comprehension and understanding.” Advancements in natural language processing (NLP) and large language models (LLMs) are likely to occur in this vein as well, industry leaders told VentureBeat, noting that these technologies can assist with scaling business processes. “NLP is revolutionizing how humans interact with machines; these technologies can understand what people say, act on that information appropriately and respond accordingly,” said Devanshu Bansal, cofounder of The X Future, an ideation platform for corporate teams. “NLP has a lot more to offer than just clearly communicating with users; it can also help scale business operations by helping them understand the voice of the customer.” On top of that, with increasing advancements toward human-centric AI, and particularly in NLP and LLMs, Julien Salinas, founder and CEO at NLP Cloud, an AI startup that deploys LLMs, predicts that this progress may even make the technology more affordable. “We keep seeing a stable growth of adoption of AI technologies when it comes to natural language processing,” Salinas said. “I expect large language models to become cheaper thanks to many low-level optimizations being made by the AI community right now.” AI and ML models will be driven by quality data Although AI has experienced rapid growth throughout the past year, some say it is not advancing quickly enough, getting stuck in a “Stone Age” of sorts — which is why so many ML projects fail. Looking ahead to 2023, this is something industry leaders are increasingly aware of and aiming to address.
Sameer Maskey, CEO of Fusemachines, an AI educational platform, and professor of AI at Columbia University, told VentureBeat that “at the enterprise level, data silos continue to present a big obstacle…organizations are slowly beginning to understand that the success of AI hinges on data, and a lot of it.” As a result, Maskey expects to see more solutions enabling access to “credible pools of data that will aid businesses to benefit from AI-powered efficiencies.” One solution coming on the scene to address AI’s data starvation is the use of synthetic data, which Gartner predicts will be used to accelerate 60% of AI projects by 2024. Another is the use of foundational models, which are typically trained on large amounts of unlabeled data and then paired with smaller sets of labeled data to propel problem-solving. “The general trend towards data-centric AI seems to be accelerating,” said Ulrik Stig-Hansen, president and cofounder at computer vision company Encord. “This lowers the barriers to entry for companies to start monetizing their data, and all things [being] equal should lead to more widespread adoption of these technologies. Over the next year, companies will need to learn how to use data strategically. This is where they can really find the competitive advantage of AI.” Generative AI’s use cases will explode Advances in generative AI, which may be unofficially dubbed the “topic of the year” in the field, took off this year, giving rise to several new companies and tools that are revamping how creatives and non-creatives alike do their work.
Tools like DALL-E, Imagen and Stable Diffusion generate original text-to-image concepts almost instantly after they are given even the most obscure of prompts — like “an AI bot sitting on a throne.” On top of its growing use cases, Encord’s Stig-Hansen predicts that “there will be a transformational leap forward in the availability of generative AI.” This domain shows no signs of slowing down in 2023, with Gartner predicting that generative AI will not only improve digital product quality, but by 2025 will also account for 10% of all data produced — compared to a current 1%. Generative AI is a tool that, though not quite replacing jobs yet, is sparking curiosity across enterprise sectors about what might be ahead. “As the technology grows more sophisticated it will continue to be disruptive, not just for images and content development, but for other industries like speech recognition and banking,” Deepgram’s Stephenson said. “I predict that generative technology will soon act as an exoskeleton for humans — it will support the work we are doing and ultimately drive a more efficient and creative future.” With the text-to-image, text-to-video and upcoming text-to-3D capabilities that generative AI provides, adoption of the tool will also increase — probably even outside of the technology and enterprise spaces into others like entertainment and the creative professions, Stig-Hansen said. “These AI models will only get better and become more photo-realistic,” he said. “The space will evolve from just AI-generated faces to whole bodies in both imagery and video. There will also be a huge adoption of generative AI in creative industries, including the music industry.” Investors are also beginning to take note of the new technology, but some are waiting to see if it will remain all “hype” or if its growth becomes solid for a forward trajectory.
Bansal, the cofounder of The X Future, said their fund, Z Nation Lab, was an early investor in AI-powered content generation platform Jasper AI, and sees it only going up from here. “Seeing the exponential growth that Jasper AI has achieved in the past two years makes me very bullish on the use of generative AI in marketing,” Bansal said. Notes for 2023 Investment and growth in the artificial intelligence market at large are expected to skyrocket through the next several years, according to Fortune Business Insights, which also reports that the sector will be worth over a trillion dollars by 2029. Research giants, including Gartner, Forrester and McKinsey, emphasize AI’s massive development. For example, Gartner analyst Afraz Jaffri recommends leaders “pay particular attention to innovations expected to hit mainstream adoption in two to five years” to gain a competitive edge. Forrester projects that AI software will grow 50% faster than the rest of the software market. And McKinsey researchers too expect AI’s adoption rate to continue to increase. “Companies can’t let the vast power and the promise of AI/ML stop them from taking advantage of its capabilities, even as they navigate these incredibly rough economic waters,” said Lior Gavish, CTO of data observability platform Monte Carlo. “With today’s tighter budgets and leaner teams, less is truly more when it comes to optimizing the impact of AI/ML.” "
15311
2022
"Precision healthcare AI tools eyed by investors | VentureBeat"
"https://venturebeat.com/ai/precision-healthcare-ai-tools-eyed-by-investors"
"Precision healthcare AI tools eyed by investors Artificial intelligence and machine learning promise to transform healthcare across the board, but particularly through the use of precision medicine. Precision medicine is often defined differently than the common phrase “personalized medicine,” which simply means tailoring treatments to the patient. Precision medicine, on the other hand, specifically applies machine learning to the genetic material of patients with less-common conditions. The AI finds patterns within the material to identify common phenotypes, while pharmaceutical companies use that information to develop drugs targeted to the specific need. Palo Alto, California-based Endpoint Health is one player in this space looking to tap the potential machine learning has for precision medicine.
Today, it announced it has received $52 million in series A equity and debt financing – a big jump from its first funding round of $12 million in 2020. The new funding will be used, in part, to extend the company’s Precision-First platform and expand its therapeutic pipeline to include programs for chronic immune-mediated diseases. The money is also meant to advance Endpoint Health’s efforts to test the use of a human plasma-derived medication, called Antithrombin III, for use by patients with sepsis — a life-threatening medical emergency that occurs when the body’s immune response to an infection becomes dysregulated. AI boosts speed and lowers costs for clinical trials A Phase II clinical trial is intended to investigate the use of Antithrombin III in patients with sepsis. Endpoint Health used ML-driven insights to develop a blood test that will identify patients who will be most helped by Antithrombin III. To develop the test, which identifies patients with a particular form of sepsis, the Precision-First platform analyzed RNA from sepsis patients, using machine learning to “look for underlying patterns that are unique and different that a normal human couldn’t see,” said Jason Springs, Endpoint Health chief executive officer. The immune reactions of patients with sepsis differ, so the platform discovers subgroups and identifies what people in those groups have in common. “The immune system is very complex, so the machine learning platform lets the computer pull apart that complexity in a way no human could,” Springs said. “Nobody could look at 10,000 data points for 1,000 patients and quickly understand if there are one or two different clusters of patients that look like each other – one person can’t process that much information at once,” he said.
“But the computer will keep crunching and it can handle the amount of data required to understand these illnesses.” This initial analysis determined whether to develop a blood test for a group of sepsis patients “that the computer told us were unique,” Springs said, adding that the test identifies patients within a sepsis subtype. Those patients can take part in the upcoming Phase II clinical trials for the sepsis drug. Identifying patients in this way makes clinical trials shorter and less expensive, as researchers can choose the participants they think are most likely to respond to their therapy. AI analysis of genetic samples to match patients to medication The entire process begins through the AI analysis of genetic samples from sepsis patients, who agree to submit blood samples. “If we get 1,000 samples, one or two months later we will know if we see some patterns,” Springs said. In the future, the test can determine sepsis subgroups to match a patient with the medication specific to their group. That type of matching can’t come soon enough, Springs added. Some sepsis patients develop disseminated intravascular coagulation (DIC), where small blood clots are distributed across the body. The condition is very dangerous. “Our hope is that our therapy will have a good chance of resolving that problem,” he said. Other companies are also using AI tools to deliver precision medicine. According to The Journal of Precision Medicine, embracing digitization is “the key to enable and operationalize both standardization and personalization in health care.” Tools can help affordably create and deliver complex, patient-specific pharmaceuticals or medical devices. Other precision medicine companies using AI include Synapse, GNS Healthcare and Tempus, which in 2020 announced an additional $100 million in financing, bringing its financing total to $600 million since its 2015 inception. Tempus focuses on cancer treatment, though it devoted resources to COVID-19.
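The subgroup discovery Springs describes is, at bottom, unsupervised clustering. A toy sketch (a minimal plain-Python k-means on synthetic two-gene "expression" values; all data invented, not Endpoint Health's method or data) shows how a machine can pull apart clusters that no one could eyeball across thousands of data points:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Tiny k-means: alternately assign points to the nearest center, then re-center."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Index of the center with the smallest squared distance to p.
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster (keep it if the cluster is empty).
        centers = [tuple(sum(vals) / len(c) for vals in zip(*c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Two planted "patient subgroups": synthetic expression values near 1.0 vs. near 5.0.
rng = random.Random(1)
group_a = [(rng.gauss(1, 0.3), rng.gauss(1, 0.3)) for _ in range(50)]
group_b = [(rng.gauss(5, 0.3), rng.gauss(5, 0.3)) for _ in range(50)]

centers, clusters = kmeans(group_a + group_b, k=2)
print(sorted(len(c) for c in clusters))  # the two planted subgroups are recovered
```

Real RNA-expression clustering works in thousands of dimensions on far noisier data, but the mechanics are the same: assign, re-center, repeat.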
Endpoint Health’s own recent funding round is impressively large, a measure of how strongly the company’s technology attracts investors. “Investors are interested in technology that can analyze and understand large sets of unique patient data and gain insights that could speed clinical development of therapies that are much more likely to succeed in patients,” Springs says. AI healthcare market to reach over $35 billion In fact, the United States dominates the list of firms with the highest VC funding in healthcare AI to date, and has the most completed AI-related healthcare research studies and trials, says a September 2021 report from EIT Health and McKinsey. The global AI healthcare market spend is anticipated to reach over $35 billion by 2030, growing by 24% from $2 million in 2019, according to the BIS Research market intelligence firm. “We are in the very early days of our understanding of AI and its full potential in healthcare, in particular with regards to the impact of AI on personalization,” according to the report, which predicts precision medicine will grow to offer medicine tailored to every patient’s unique need. While that is in the future, Springs can attest to the fact that precision medicine is already here. And the potential for AI tools in precision medicine will continue to skyrocket. "
15312
2021
"AI in health care creates unique data challenges | VentureBeat"
"https://venturebeat.com/business/ai-in-health-care-creates-unique-data-challenges/https://venturebeat.com/business/ai-in-health-care-creates-unique-data-challenges"
"AI in health care creates unique data challenges The health care industry produces an enormous amount of data. An IDC study estimates that the volume of health data created annually, which hit over 2,000 exabytes in 2020, will continue to grow at a 48% rate year over year. Accelerated by the passage of the U.S. Patient Protection and Affordable Care Act, which mandated that health care practitioners adopt electronic records, there’s now a wealth of digital information about patients, practices, and procedures where before there was none. The trend has enabled significant advances in AI and machine learning, which rely on large datasets to make predictions ranging from hospital bed capacity to the presence of malignant tumors in MRIs. But unlike other domains to which AI has been applied, the sensitivity and scale of health care data make collecting and leveraging it a formidable challenge.
Tellingly, although 91% of respondents to a recent KPMG survey predicted that AI could increase patient access to care, 75% believe AI could threaten patient data privacy. Moreover, a growing number of academics point to imbalances in health data that could exacerbate existing inequalities. Privacy Tech companies and health systems have trained AI to perform remarkable feats using health data. Startups like K Health source from databases containing hundreds to millions of EHRs to build patient profiles and personalize automated chatbots’ responses. IBM, Pfizer, Salesforce, and Google, among others, are attempting to use health records to predict the onset of conditions like Alzheimer’s, diabetes, diabetic retinopathy, breast cancer, and schizophrenia. And at least one startup offers a product that remotely monitors patients suffering from heart failure by collecting recordings via a mobile device and analyzing them with an AI algorithm. The datasets used to train these systems come from a range of sources, but in many cases, patients aren’t fully aware their information is included among them. Emboldened by the broad language in the Health Insurance Portability and Accountability Act (HIPAA), which enables companies and care providers to use patient records to carry out “healthcare functions” and share information with businesses without first asking patients, companies have tapped into the trove of health data collected by providers in pursuit of competitive advantages. In 2019, The Wall Street Journal reported details on Project Nightingale, Google’s partnership with Ascension, the nation’s second-largest health system, to collect the personal health data of tens of millions of patients for the purpose of developing AI-based services for medical providers.
Separately, Google maintains a 10-year research partnership with the Mayo Clinic that grants the company limited access to anonymized data it can use to train algorithms. Regulators have castigated Google for its health data practices in the past. A U.K. regulator concluded that The Royal Free London NHS Foundation Trust, a division of the U.K.’s National Health Service based in London, provided Google parent company Alphabet’s DeepMind with data on 1.6 million patients without their consent. And in 2019, Google and the University of Chicago Medical Center were sued for allegedly failing to scrub timestamps from anonymized medical records. (A judge tossed the suit in September.) But crackdowns and outcries are exceptions to the norm. K Health claims to have trained its algorithms on a 20-year database of millions of health records and billions of “health events” supplied partially by Maccabi, Israel’s second-largest health fund, but it’s unclear how many of the patients represented in the datasets were informed that their data would be used commercially. Other firms including IBM have historically drawn on data from research like the Framingham Heart Study for experiments unrelated to the original purpose (albeit in some cases with approval from institutional review boards). Startups are generally loath to disclose the source of their AI systems’ training data for competitive reasons. Health Data Analytics Institute says only that its predictive health outcome models were trained on data from “over 100 million people in the U.S.” and over 20 years of follow-up records. For its part, Vara, which is developing algorithms to screen for breast cancer, says it uses a dataset of 2.5 million breast cancer images for training, validation, and testing. In a recent paper published in the New England Journal of Medicine, researchers described an ethical framework for how academic centers should use patient data.
They align with the belief that the standard consent form that patients typically sign at the point of care isn’t sufficient to justify the use of their data for commercial purposes, even in anonymized form. These documents, which typically ask patients to consent to the reuse of their data to support medical research, are often vague about what form that medical research might take. “Regulations give substantial discretion to individual organizations when it comes to sharing deidentified data and specimens with outside entities,” the coauthors wrote. “Because of important privacy concerns that have been raised after recent revelations regarding such agreements, and because we know that most participants don’t want their data to be commercialized in this way, we [advocate prohibiting] the sharing of data under these circumstances.” Standardization From 2009 to 2016, the U.S. government commissioned researchers to find the best way to improve and promote the use of electronic health records (EHR). One outcome was a list of 140 data elements that should be collected from every patient on each visit to a physician, which the developers of EHR systems were incentivized to incorporate into their products through a series of federal stimulus packages. Unfortunately, the implementation of these elements tended to be haphazard. Experts estimate that as many as half of records are mismatched when data is transferred between health care systems. In a 2018 survey by Stanford Medicine in California, 59% of clinicians said they felt that their electronic medical records (EMRs) systems needed a complete overhaul. The nonprofit MITRE Corporation has proposed what it calls the Standard Health Record (SHR), an attempt at establishing a high-quality, computable source of patient information. 
The open source specification, which draws on existing medical records models like the Health Level Seven International’s Fast Healthcare Interoperability Resources, contains information critical to patient identification, emergency care, and primary care as well as areas related to social determinants of health. Plans for future iterations of SHR call for incorporating emerging treatment paradigms such as genomics, microbiomics, and precision medicine. However, given that implementing an EMR system could cost a single physician over $160,000, specs like SHR seem unlikely to gain currency anytime soon. Errors and biases Errors and biases aren’t strictly related to the standardization problem, but they’re emergent symptoms of it. Tracking by the Pennsylvania Patient Safety Authority in Harrisburg found that from January 2016 to December 2017, EHR systems were responsible for 775 problems during laboratory testing in the state, with human-computer interactions responsible for 54.7% of events and the remaining 45.3% caused by a computer. Furthermore, a draft U.S. government report issued in 2018 found that clinicians are inundated with (and not uncommonly miss) alerts that range from minor issues about drug interactions to those that pose considerable risks. Mistakes and missed alerts contribute to another growing problem in health data: bias. Partly due to a reticence to release code, datasets, and techniques, much of the data used today to train AI algorithms for diagnosing diseases might perpetuate inequalities. A team of U.K. scientists found that almost all eye disease datasets come from patients in North America, Europe, and China, meaning eye disease-diagnosing algorithms are less certain to work well for racial groups from underrepresented countries. In another study , Stanford University researchers claimed that most of the U.S. data for studies involving medical uses of AI come from California, New York, and Massachusetts. 
A study of a UnitedHealth Group algorithm determined that it could underestimate by half the number of Black patients in need of greater care. Researchers from the University of Toronto, the Vector Institute, and MIT showed that widely used chest X-ray datasets encode racial, gender, and socioeconomic bias. And a growing body of work suggests that skin cancer-detecting algorithms tend to be less precise when used on Black patients, in part because AI models are trained mostly on images of light-skinned patients. Security Even in the absence of bias, errors, and other confounders, health systems must remain vigilant for signs of cyber intrusion. Malicious actors are increasingly holding data hostage in exchange for ransom, often to the tune of millions of dollars. In September, employees at Universal Health Services, a Fortune 500 owner of a nationwide network of hospitals, reported widespread outages that resulted in delayed lab results, a fallback to pen and paper, and patients being diverted to other hospitals. Earlier that month, a ransomware attack at a Dusseldorf University hospital in Germany resulted in emergency-room diversions to other hospitals. Over 37% of IT health care professionals responding to a Netwrix survey said their health care organization experienced a phishing incident. Just over 32% said their organization experienced a ransomware attack during the novel coronavirus pandemic’s first few months, and 37% reported there was an improper data sharing incident at their organization. Solutions Solutions to challenges in managing health care data necessarily entail a combination of techniques, approaches, and novel paradigms. Securing data requires data-loss prevention, policy and identity management, and encryption technologies, including those that allow organizations to track actions that affect their data. 
As for standardizing it, both incumbents like Google and Amazon and startups like Human API offer tools designed to consolidate disparate records. On the privacy front, experts agree that transparency is the best policy. Stakeholder consent must be clearly given to avoid violating the will of those being treated. And deidentification capabilities that remove or obfuscate personal information are table stakes for health systems, as are privacy-preserving methods like differential privacy, federated learning, and homomorphic encryption. “I think [federated learning] is really exciting research, especially in the space of patient privacy and an individual’s personally identifiable information,” Andre Esteva, head of medical AI at Salesforce Research, told VentureBeat in a phone interview. “Federated learning has a lot of untapped potential … [it’s] yet another layer of protection by preventing the physical removal of data from [hospitals] and doing something to provide access to AI that’s inaccessible today for a lot of reasons.” Biases and errors are harder problems to solve, but the coauthors of one recent study recommend that health care practitioners apply “rigorous” fairness analyses prior to deployment as one solution. They also suggest that clear disclaimers about the dataset collection process and the potential resulting bias could improve assessments for clinical use. “Machine learning really is a powerful tool, if designed correctly — if problems are correctly formalized and methods are identified to really provide new insights for understanding these diseases,” Dr. Mihaela van der Schaar, a Turing Fellow and professor of ML, AI, and health at the University of Cambridge and UCLA, said during a keynote at the ICLR 2020 conference in May. “Of course, we are at the beginning of this revolution, and there is a long way to go. But it’s an exciting time. And it’s an important time to focus on such technologies. 
I really believe that machine learning can open clinicians and medical researchers [to new possibilities] and provide them with powerful new tools to better [care for] patients.” "
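The federated learning Esteva describes keeps raw records inside each hospital and shares only model updates with a central server. A minimal federated-averaging sketch on synthetic data (the hospital datasets, model, and all numbers here are hypothetical illustrations, not the method of any company named above):

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One hospital trains on its own data; raw records never leave the site."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(weights, hospital_datasets):
    """A central server aggregates only weights, scaled by local sample count."""
    updates = [(local_update(weights, X, y), len(y)) for X, y in hospital_datasets]
    total = sum(n for _, n in updates)
    return sum(w * (n / total) for w, n in updates)

# Toy setup: three "hospitals" hold private samples of the relation y = 2*x.
rng = np.random.default_rng(0)
hospitals = []
for _ in range(3):
    X = rng.normal(size=(50, 1))
    hospitals.append((X, 2.0 * X[:, 0]))

w = np.zeros(1)
for _ in range(20):                  # 20 communication rounds
    w = federated_average(w, hospitals)
print(w)  # weight approaches the true coefficient 2.0
```

Only the weight vector crosses the network at each round, which is the property Esteva points to: the data itself never physically leaves the hospitals.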
15,313
2,022
"Responsible AI in healthcare: Addressing biases and equitable outcomes | VentureBeat"
"https://venturebeat.com/datadecisionmakers/responsible-ai-in-healthcare-addressing-biases-and-equitable-outcomes"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Responsible AI in healthcare: Addressing biases and equitable outcomes Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. With the rapid growth of healthcare AI, algorithms are often overlooked when it comes to addressing fair and equitable patient care. I recently attended the Conference on Applied AI (CAAI): Responsible AI in Healthcare , hosted by the University of Chicago Booth School of Business. The conference brought together healthcare leaders across many facets of business with a goal of discussing and finding effective ways to mitigate algorithmic bias in healthcare. It takes a diverse group of stakeholders to recognize AI bias and make an impact on ensuring equitable outcomes. If you’re reading this, it’s likely you may already be familiar with AI bias, which is a positive step forward. If you’ve seen movies like The Social Dilemma or Coded Bias , then you’re off to a good start. 
If you’ve read articles and papers like Dr. Ziad Obermeyer’s Racial Bias in Healthcare Algorithms, even better. What these resources explain is that algorithms play a major role in recommending what movies we watch, what social posts we see and what healthcare services are recommended, among other everyday digital interactions. These algorithms often include biases related to race, gender, socioeconomic status, sexual orientation, demographics and more. There has been a significant uptick in interest related to AI bias. For example, the number of data science papers on arXiv’s website mentioning racial bias doubled between 2019 and 2021. We’ve seen interest from researchers and media, but what can we actually do about it in healthcare? How do we put these principles into action? Before we get into putting these principles into action, let’s address what happens if we don’t. The impact of bias in healthcare Let’s take, for example, a patient who has been dealing with various health issues for quite some time. Their healthcare system has a special program designed to intervene early for people who have high risk for cardiovascular needs. The program has shown great results for the people enrolled. However, the patient hasn’t heard about this. Somehow they weren’t included in the list for outreach, even though other sick patients were notified and enrolled. Eventually, they visit the emergency room, and their heart condition has progressed much further than it otherwise would have. That’s the experience of being an underserved minority and invisible to whatever approach a health system is using. It doesn’t even have to be AI. One common approach to cardiovascular outreach is to only include men that are 45+ in age and women 55+ in age. If you were excluded because you’re a woman who didn’t make the age cutoff, the result is just the same.
How are we addressing it? Chris Bevolo’s Joe Public 2030 is a 10-year look into healthcare’s future, informed by leaders at Mayo Clinic, Geisinger, Johns Hopkins Medicine and many more. It doesn’t look promising for addressing healthcare disparities. For about 40% of quality measures, Black and Native people received worse care than white people. Uninsured people had worse care for 62% of quality measures, and access to insurance was much lower among Hispanic and Black people. “We’re still dealing with some of the same issues we’ve dealt with since the 80s, and we can’t figure them out,” stated Adam Brase, executive director of strategic intelligence at Mayo Clinic. “In the last 10 years, these have only grown as issues, which is increasingly worrisome.” Why data hasn’t solved the problem of bias in AI No progress since the 80s? But things have changed so much since then. We’re collecting huge amounts of data. And we know that data never lies, right? No, not quite true. Let’s remember that data isn’t just something on a spreadsheet. It’s a list of examples of how people tried to address their pain or better their care. As we tangle and torture the spreadsheets, the data does what we ask it to. The problem is what we’re asking the data to do. We may ask the data to help drive volume, grow services or minimize costs. However, unless we’re explicitly asking it to address disparities in care, it’s not going to do that. Attending the conference changed how I look at bias in AI, and here’s how. It’s not enough to address bias in algorithms and AI. For us to address healthcare disparities, we have to commit at the very top. The conference brought together technologists, strategists, legal experts and others. It’s not about technology. So this is a call to fight bias in healthcare, and lean heavily on algorithms in order to help! So what does that look like?
A call to fight bias with the help of algorithms Let’s start by talking about when AI fails and when AI succeeds at organizations overall. MIT and Boston Consulting Group surveyed 2,500 executives who’d worked with AI projects. Overall, 70% of these executives said that their projects had failed. What was the biggest difference between the 70% that failed and the 30% that succeeded? It’s whether the AI project was supporting an organizational goal. To help clarify that further, here are some project ideas and whether they pass or fail. Purchase the most powerful natural language processing solution. Fail. Natural language processing can be extremely powerful, but this goal lacks context on how it will help the business. Grow our primary care volume by intelligently allocating at-risk patients. Pass. There’s a goal that requires technology, and that goal is tied to an overall business objective. We understand the importance of defining a project’s business objectives, but what were both these goals missing? They’re missing any mention of addressing bias, disparity, and social inequity. As healthcare leaders, our overall goals are where we need to start. Remember that successful projects start with organizational goals, and seek AI solutions to help support them. This gives you a place to start as a healthcare leader. The KPIs you’re defining for your departments could very well include specific goals around increasing access for the underserved. “Grow Volume by x%,” for example, could very well include, “Increase volume from underrepresented minority groups by y%.” How do you arrive at good metrics to target? It starts with asking the tough questions about your patient population. What’s the breakdown by race and gender versus your surrounding communities? This is a great way to put a number and a size to the healthcare gap that needs to be addressed.
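The breakdown-versus-community question above can be turned into a number directly. A minimal sketch that compares each group's share of the patient panel with its share of the surrounding community (the group names, patient counts, and community shares are all hypothetical):

```python
# Share of each group in the patient panel vs. the surrounding community.
# A negative gap means the group is underrepresented among patients.
patients = {"White": 6200, "Black": 900, "Hispanic": 600, "Asian": 300}
community = {"White": 0.58, "Black": 0.18, "Hispanic": 0.16, "Asian": 0.08}

total = sum(patients.values())
gaps = {g: patients[g] / total - share for g, share in community.items()}

# Most underrepresented groups first.
for group, gap in sorted(gaps.items(), key=lambda kv: kv[1]):
    print(f"{group:9s} observed {patients[group] / total:6.1%}"
          f"  community {community[group]:6.1%}  gap {gap:+.1%}")
```

A target such as “increase volume from underrepresented minority groups by y%” can then be tracked against these gaps over time rather than stated in the abstract.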
This top-down focus should drive actions such as holding vendors and algorithmic experts accountable for helping with these targets. What we need to further address here, though, is who all of this is for. The patient, your community, your consumers are those who stand to lose the most in this. Innovating at the speed of trust At the conference, Barack Obama’s former chief technology officer, Aneesh Chopra, addressed this directly: “Innovation can happen only at the speed of trust.” That’s a big statement. Most of us in healthcare are already asking for race and ethnicity information. Many of us are now asking for sexual orientation and gender identity information. Without these data points, addressing bias is extremely difficult. Unfortunately, many people in underserved groups don’t trust healthcare enough to provide that information. I’ll be honest, for most of my life, that included me. I had no idea why I was being asked that information, what would be done with it, or even if it might be used to discriminate against me. So I declined to answer. I wasn’t alone in this. Look at the number of people who’ve identified their race and ethnicity to a hospital: commonly, one in four people don’t. I spoke with behavioral scientist Becca Nissan from ideas42, and it turns out there’s not much scientific literature on how to address this. So, this is my personal plea: partner with your patients. If someone has experienced prejudice, it’s hard to see any upside in providing the details people have used to discriminate against you. A partnership is a relationship built on trust. This entails a few steps: Be worth partnering with. There must be a genuine commitment to fight bias and personalize healthcare, or asking for data is useless. Tell us what you’ll do. Consumers are tired of the gotchas and spam resulting from sharing their data. Level with them. Be transparent about how you use data.
If it’s to personalize the experience or better address healthcare concerns, own that. We’re tired of being surprised by algorithms. Follow through. Trust isn’t really earned until the follow-through happens. Don’t let us down. Conclusion If you’re building, launching, or using responsible AI, it’s important to be around others who are doing the same. Here are a few best practices for projects or campaigns that have a human impact: Have a diverse team. Groups that lack diversity tend not to ask whether a model is biased. Collect the right data. Without known values for race and ethnicity, gender, income, sex, sexual preference, and other social determinants of health, there is no way to test and control for fairness. Consider how certain metrics may carry hidden bias. The healthcare-spending metric from the 2019 study demonstrates how problematic a metric can be for certain populations. Measure the target variable’s potential to introduce bias. With any metric, label, or variable, checking its impact and distribution across race, gender, sex and other factors is key. Ensure the methods in use aren’t creating bias for other populations. Teams should design fairness metrics that are applicable across all groups and test continuously against them. Set benchmarks and track progress. After the model has been launched and is in use, continually monitor for changes. Leadership support. You need your leadership to buy in; it can’t just be one person or team. “Responsible AI” isn’t the end; it’s not just about making algorithms fair. This should be part of a broader organizational commitment to fight bias overall. Partner with patients. We should go deeper on how we partner with and involve patients in the process. What can they tell us about how they’d like their data to be used? As someone who loves the field of data science, I am incredibly optimistic about the future, and the opportunity to drive real impact for healthcare consumers.
We have a lot of work ahead of us to ensure that impact is unbiased and available to everyone, but I believe just by having these conversations, we’re on the right path. Chris Hemphill is VP of applied AI and growth at Actium Health. "
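The advice above about designing fairness metrics that apply across all groups can be sketched concretely. A minimal per-group true-positive-rate check (sometimes called an equal-opportunity check) on entirely synthetic labels and predictions — none of these numbers come from any study mentioned in the article:

```python
from collections import defaultdict

def tpr_by_group(y_true, y_pred, groups):
    """True-positive rate per group: of the people who truly needed the
    intervention, what share did the model flag? Large gaps between groups
    mean the model's benefits are unequally distributed."""
    flagged, needed = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        if truth == 1:
            needed[group] += 1
            flagged[group] += pred
    return {g: flagged[g] / needed[g] for g in needed}

# Synthetic audit sample: 1 = patient actually needed the intervention.
y_true = [1, 1, 1, 1, 0, 1, 1, 0, 1, 1]
y_pred = [1, 1, 0, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = tpr_by_group(y_true, y_pred, groups)
gap = max(rates.values()) - min(rates.values())
print(rates, gap)  # group A: 0.75, group B: 0.5 -> a 25-point gap
```

Running a check like this before deployment, and again as part of ongoing monitoring, is one concrete way to act on the “set benchmarks and track progress” recommendation.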
15,314
2,022
"The state of the GDPR in 2022: why so many orgs are still struggling | VentureBeat"
"https://venturebeat.com/security/gdpr-4th-anniversary/amp"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The state of the GDPR in 2022: why so many orgs are still struggling Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today marks the fourth anniversary of the EU’s General Data Protection Regulation ( GDPR ), which originally came into effect in May 2018, and forced organizations to rethink the way they collect and store data from EU data subjects. The GDPR gave consumers the right to be forgotten, while mandating that private enterprises needed to collect consent from data subjects in order to store their data, and prepare to remove their information upon request. However, even years after the legislation went into effect, many organizations are struggling to maintain regulatory compliance while European regulators move toward more stricter enforcement actions. 
For example, Facebook is still having difficulties complying with the GDPR, with Motherboard recently discovering a leaked document revealing that the organization doesn’t know where all of its user data goes or how it’s processed. Of course, the challenge of GDPR compliance isn’t unique to Facebook. In fact, Amazon, WhatsApp, and Google have all had to pay nine-figure fines to European data protection authorities. But why are so many organizations failing to comply with the regulation? The answer is complexity. Why GDPR compliance is an uphill battle The widespread movement of organizations toward cloud services over the past few years has increased complexity on all sides. Organizations use applications that store and process customer data in the cloud, and often lack the visibility they need to protect these assets. “Companies have done a lot of work to bring their systems and processes in line with the GDPR, but it is a continuous exercise. In the same way regulations change, so does technology,” said Steve Bakewell, managing director EMEA of penetration testing provider NetSPI. “For example, the increasing uptake in cloud services has resulted in more data, including personal data, being collected, stored and processed in the cloud,” Bakewell said. With more data stored and processed in native, hybrid, and multicloud environments, enterprises have exponentially more data to secure and maintain transparency over, data that sits beyond the perimeter defenses and oversight of the traditional network. Organizations like Facebook that can’t pin down where personal data lives in a cloud environment or how it’s processed inevitably end up violating the regulation, because they can’t secure customer data or remove the data of subjects who’ve withdrawn consent.
Maintaining GDPR compliance in 2022 and beyond While the GDPR is mandating data handling excellence in the cloud era, there are some strategies organizations can use to make compliance more manageable. The first step for enterprises is to identify where sensitive data is stored, how it’s processed and what controls or procedures are needed to protect or erase it if necessary. Bakewell recommends that organizations “understand and implement both privacy and security requirements in systems handling the data, then test accordingly across all systems, on-prem, cloud, operational technology, and even physical, to validate controls are effective and risks are correctly managed.” Of course, identifying how data is used in the environment is easier said than done, particularly with regard to identity data, as the number of digital identities businesses store keeps increasing. “Organizations have been scattering their identity data across multiple sources and this identity sprawl results in overlapping, conflicting or inaccessible sources of data. When identity data isn’t properly managed, it becomes impossible for IT teams to build accurate and complete user profiles,” said chief of staff and CISO at identity data fabric solution provider Radiant Logic, Chad McDonald. If organizations fail to keep identity data accurate and minimized, they’re at risk of non-compliance penalties. To address this challenge, McDonald recommends that enterprises unify the disparate identity data of data subjects into a single global profile with an Identity Data Fabric solution. This enables data security teams to have a more comprehensive view of user identity data in the environment, and the controls in place to limit user access.
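The first step described above — knowing where a data subject's records live so they can be protected or erased — is something teams can rehearse in code. A minimal right-to-erasure pass over a couple of stores (the store names, field names, and records are hypothetical; a real system would also have to cover backups, logs, and third-party processors):

```python
# Hypothetical stores: the same subject's data scattered across systems.
stores = {
    "crm": [
        {"email": "ana@example.com", "name": "Ana"},
        {"email": "bo@example.com", "name": "Bo"},
    ],
    "analytics": [
        {"email": "ana@example.com", "last_seen": "2022-05-01"},
    ],
}

def erasure_request(stores, subject_email):
    """Delete every record tied to a subject; report deletions per store."""
    erased = {}
    for name, records in stores.items():
        before = len(records)
        records[:] = [r for r in records if r.get("email") != subject_email]
        erased[name] = before - len(records)
    return erased

report = erasure_request(stores, "ana@example.com")
print(report)  # {'crm': 1, 'analytics': 1}
```

The hard part in practice is not the deletion itself but maintaining the inventory of stores this loop iterates over, which is exactly where identity sprawl makes compliance break down.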
Looking beyond the GDPR: the next wave of data protection regulations One of the most challenging aspects of the GDPR’s legacy is that it’s kickstarted a global movement of data protection regulations, with countries and jurisdictions across the globe implementing their own local and international data privacy mandates, which impose new controls on organizations. For example, domestically in the U.S. alone, California, Colorado, Connecticut, Virginia and Utah have all begun producing their own data privacy or data protection acts, the most well-known being the California Consumer Privacy Act (CCPA). The U.S. isn’t alone in implementing new data protection frameworks either, with China creating the Personal Information Protection Law (PIPL), South Africa creating the Protection of Personal Information Act (POPI) and Brazil creating the General Data Protection Law (LGPD). The need for a meta-compliance strategy With regulatory complexity mounting on all sides, compliance with the GDPR isn’t enough for organizations to avoid data protection violations; they need to be compliant with every regulation they’re exposed to. For example, while the GDPR permits the transfer of personal information across borders so long as it’s adequately protected, the PIPL doesn’t. So organizations doing business in Europe and China would need to implement a single set of controls that are compatible with both. Similarly, while the GDPR says you merely need to have a legal reason for collecting the personal data of EU data subjects, the CCPA mandates that you enable users to opt out of personal information practices.
In practice that means implementing controls and policies that are designed to mitigate regulatory sprawl and to work towards compliance with multiple regulations at once, rather than taking a regulator-by-regulator approach to compliance. "
15,315
2,022
"Imec makes strides in creation of a medical digital twin for gut health | VentureBeat"
"https://venturebeat.com/ai/imec-making-strides-in-creation-of-a-medical-digital-twin-for-gut-health"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Imec makes strides in creation of a medical digital twin for gut health Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Human gut health is closely correlated with mental well-being, various degenerative diseases and chronic inflammation. One big component of this is the balance of microbes across different digestive tract sections. Researchers have made incredible progress in analyzing microbes that come out the end and taking samples near both openings. But thus far, the relative composition of microbes across different sections in the middle has been a bit of a mystery. An ambitious collaboration in Europe hopes to change that by building a digital twin of the human gut and its connection to various health outcomes. At the Imec Future Summits Conference in Antwerp, Chris Van Hoof , VP of R&D at Imec and general manager of the OnePlanet research center, discussed the company’s effort to build a digital twin of the human gut. 
Imec’s team is working to accomplish this by combining new sensors, analytics and machine learning techniques with guidance from nutritionists, doctors, medical researchers, chip designers and data scientists. Medical researchers are making vast progress in building medical digital twins by exploring humans from the outside. But thus far, they have very little understanding of what goes on inside the digestive tract. “You can do gastroscopy and colonoscopy and stool analysis, but no one has charted the entire GI tract,” Van Hoof told VentureBeat. Imec is a cutting-edge R&D hub facilitating industry collaboration around silicon chips, nanotechnology and health. The current effort involves collaboration with researchers at Wageningen University and Research and Radboud University Medical Center. And Van Hoof said they are soliciting input and guidance from others to help guide and scale up the initial research. The largest surface Medically speaking, the surface of the gut, called the lumen, is outside of the body. Doctors consider it the largest surface of the human body, at about 30 square meters. It acts as a filter and gatekeeper for capturing nutrients and excreting waste. And gut health is correlated with diabetes, metabolic health, weight loss, poor immunity, mental health and dozens of diseases. There are about 500–800 different populations of microbes in the body. The relative balance of these populations is often correlated with different diseases. Various studies have linked several chronic diseases to microbe populations. And many seemingly unrelated diseases share common microbiome signatures, said Van Hoof. Researchers do not know why people develop a particular combination or how to tune it. In one recent study, a few of Van Hoof’s colleagues ate a diet rich in a range of plants that might affect their gut health.
Everyone on the diet saw a change, but the differences were unique to each individual. The many pieces of a medical digital twin Elements of the medical digital twin include a new sensor platform built into a smart pill, a toilet that analyzes stools and urine, an intelligent lunchbox, cameras that track what and how people eat and wearables for correlating these measures with stress and mental well-being. As for the ingestible, a lot of work went into building a resilient package that was small enough to be swallowed, sufficient for measuring the right things and robust enough to survive a harsh environment adapted to break most substances down. They are now working through the regulatory process and planning to start the first human trials at the beginning of next year. The team wanted to be able to take chemical and biological measurements, record physical characteristics like sound and communicate wirelessly. The pill also needed highly efficient electronics, since it had to run for up to a week. The researchers also needed to build a kind of GPS for the stomach to track where the pill moves in the body. Fine-tuning models Today, which factors improve or worsen gut health is known only at the general population level. As a result, doctors tend to focus on population-level advice, such as avoiding fatty or spicy foods, which is not always helpful. “We hope to create a model of the person to see what causes flare-ups to happen virtually so that the person is not a guinea pig for a treatment,” Van Hoof said. “Rather, the digital twin is the guinea pig to try out new interventions to see what would work best for them.” The team is exploring strategies ranging from individualized digital twins to broader digital twins for groups based on genotype, phenotype, or behavioral or microbiome characteristics. Van Hoof said, “There is hope that we will not have to create billions of models. 
Rather, we could create a few models and then fine-tune these based on the data we capture to make it more manageable.” Creating these new models will require a different approach than in other domains. For example, a cardiovascular researcher can monitor electrocardiogram data to diagnose many diseases without knowing anything specific about the patient. “But there are no gold standards about what the signal should look like for gut health,” he said. Actionable insight The effort is also looking at the UI side of the digital twin. Most countries have basic health pyramids characterizing the elements of a healthy diet. But getting people to eat more of some things and less of others requires a behavioral rather than a purely informational approach. For example, the team has created a smart snack box outfitted with cameras to study people’s snacking behaviors. It turns out that many people, particularly frequent snackers, tend to eat more calories than they record due to unconscious eating patterns. Wearable sensors can track changes in stress and mental state before someone reaches for a snack. “We see physiological changes before a person craves some sweets. This is part of the model we hope to link to gut health, where we can capture these signals and hopefully steer people towards healthy alternatives,” Van Hoof said. Building the right data framework A digital twin of gut health involves bringing together data from many sources, and no one is sure how to do this at scale. Down the road, the hope is that the digital twin could work with sensors, applications and data infrastructure from many parties. But meanwhile, the group is integrating everything into Imec’s OpenPlanet infrastructure, a low-code platform for health and environmental research. OpenPlanet includes applications for data collection, data connectivity, storage and analysis. It also supports curated algorithms and applications for various use cases. 
For example, data from wearables can be shared live between doctor and patient or between a digital twin and a health avatar. This can help democratize access to digital twins on top of prebuilt models, data formats and machine learning workflows managed with appropriate security and privacy guardrails. Van Hoof believes that prior research on the relationship between gut health and other domains has been hindered owing to different ways of gathering, formatting and analyzing data. They avoid these problems by building and integrating all of the pieces in-house. Tools like federated learning could help support a broader range of sensors, data sources and applications down the road. Creating a digital twin will require more extensive studies and many partners with medical, commercial and healthcare expertise. Van Hoof said they are looking for other parties to join the effort across food and beverage, pharmaceutical and medical technology companies. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,316
2,022
"The race is on to build generative AI for the enterprise | The AI Beat | VentureBeat"
"https://venturebeat.com/ai/the-race-is-on-to-build-generative-ai-for-the-enterprise-the-ai-beat"
"Analysis The race is on to build generative AI for the enterprise | The AI Beat In the wake of last week’s release of the DALL-E API, a crowd of startups is sure to follow, racing to build generative AI for the enterprise. I thought of this as I watched coverage of 50,000 runners converging on New York City for its annual marathon yesterday. It reminded me of OpenAI’s announcement last week about its Converge program, which will provide 10 early-stage startups with $1 million each and early access to its systems. “I can’t think of a more interesting time to start a startup in recent memory,” said OpenAI CEO Sam Altman in a tweet about the program. 
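That API takes little more than a text prompt plus a couple of generation parameters. As a rough illustration of how small the integration surface is, here is a sketch of assembling the JSON body such a request would carry; the field names (`prompt`, `n`, `size`) and the allowed sizes follow OpenAI's public-beta documentation, so treat them as assumptions rather than a definitive client:

```python
import json

# Sketch: build the JSON body for an image-generation request
# (POST /v1/images/generations in the public beta). Field names and
# allowed sizes here are assumptions based on the beta docs.
def build_image_request(prompt: str, n: int = 1, size: str = "1024x1024") -> str:
    allowed_sizes = {"256x256", "512x512", "1024x1024"}
    if size not in allowed_sizes:
        raise ValueError(f"unsupported size {size!r}; expected one of {sorted(allowed_sizes)}")
    if not 1 <= n <= 10:
        raise ValueError("n must be between 1 and 10")
    return json.dumps({"prompt": prompt, "n": n, "size": size})

payload = build_image_request("50,000 marathon runners crossing a bridge, watercolor", n=2)
print(payload)
```

An actual call would POST this body with an Authorization header carrying an API key; the point is only that integrating image generation into an app is a single authenticated request.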
That announcement came just a day before the company released the hotly anticipated DALL-E API in public beta, which means developers can now integrate DALL-E directly into their apps and products — including many that will likely be used for a host of enterprise use cases. That effort has clearly already begun: While the term “generative AI” for content generation is not new – a quick Google search unearths articles from as far back as 2017, and pre-Transformer models such as GANs (generative adversarial networks) were introduced in 2014 – the term is already taking over my inbox. There has been a massive uptick in PR pitches related to generative AI in my VentureBeat email over the past 2-3 months. And when I received a missive last week with the subject line “generative AI for plumbers,” I knew the race — and the hype surrounding it — was just beginning. Expect more startups to get into the generative AI game Last week, I spoke to Forrester AI/ML analyst Rowan Curran about Forrester’s 2023 Predictions Report. He said he thinks the DALL-E API will open up a variety of new generative AI use cases. “I’m expecting more startups to get into the game using the DALL-E API, and that in turn will drive both enterprises, especially innovation teams, to use it,” he said. In addition, he expects to see more enterprise-level research and usage in terms of adopting and fine-tuning other large language models for various text and image use cases. “I think the ability to take the large language models and add this fine-tuning layer on top for some specific industries is where it’s going to start to be very game-changing,” he said. 
“Construction is a good example – being able to understand, summarize and provide insights from contracts, or on the visual side being able to correctly identify cracks in concrete or stressed iron or something like that, or to generate large sets of synthetic data of damaged concrete.” Curran added that he thinks “we are just stumbling into the beginnings of this,” and that large language models are actually just one approach. “There are other types of neural networks and other approaches to developing intelligence, such as using causal reasoning,” he pointed out. He says the promise of tools in areas such as generative AI will likely accelerate “at even more exponential rates than it already is, so it’s tremendously exciting.” The debate over open source, ethics will continue The release of the DALL-E API, however, certainly won’t quiet the general debate or criticism around generative AI. For example, ML community newsletter The Sequence posted an editorial yesterday that highlighted the debate between the API model of generative AI — that is, DALL-E — and the open source model used by Stability AI and its Stable Diffusion text-to-image generator. “The friction between controlled API versus fully open-source distribution mechanisms will likely be at the center of the generative AI debate for the next few years,” the editorial said. API models enable greater control and simpler mechanisms to enforce fair and ethical behavior of the model, but that also guarantees power remains in the hands of Big Tech. Open-sourcing, on the other hand, “removes the control barriers and enables broader levels of customization and fine-tuning but also opens the door to the unethical use of these models.” And, of course, it’s likely nothing will quiet the debate around ownership and copyright of generative AI output — except, perhaps, the courts. 
Meanwhile, OpenAI sent a message to users on Friday putting its own stake in the ground on that front, saying that it has simplified its Terms of Use and now offers “full ownership rights to the images you create with DALL·E — in addition to the usage rights you’ve already had to use and monetize your creations however you’d like.” This update included no legal context for the change. It simply stated that it “is possible due to improvements to our safety systems which minimize the ability to generate content that violates our content policy.” The generative AI race is on — but how it plays out on the way to the finish line remains to be seen. "
15,317
2,021
"Digital threads link workflows across users | VentureBeat"
"https://venturebeat.com/business/digital-threads-link-workflows-across-users"
"Digital threads link workflows across users The recent Esri User Conference 2021 was a major event for virtual replicas of real-world entities, called digital twins. Real-world digital apps are also beginning to benefit from a complementary technology known as digital threads. Digital threads provide a way to integrate a comprehensive digital twin across workflows for different types of users, according to Spatial Business Systems president Dennis Beck. At the Esri event, users weighed in on how this nascent technology could optimize snow-making operations at a ski resort, improve power network management on a power grid, and streamline foot traffic in a hospital adjusting to COVID-19 restrictions. In some respects, geographic information systems (GIS) were the original digital twins before the term was even born, Beck said. 
He believes an emerging focus on digital threads will improve the integration of not just data, but processes across users, and that GIS will often be the glue. “It is critical to have a system platform for tying information together, and we believe that GIS is the platform that does that,” Beck told an audience at Esri’s virtual gathering. The industry also needs better digital threads, he added. What are digital threads? In effect, digital threads are a digital blueprint consisting of a collection of data transformation models. An instance of a digital thread captures changes in design models, hardware description language-based executables, and databases across a product’s lifecycle. Beck characterized a digital twin as a digital model of a real-world object that can support the relationship between objects and evolve over time. While the industry has been working on precursors to modern digital twins since the mid-1980s, activity has accelerated as new data types are ingested at ever-faster speeds. Beck has done considerable work for utility companies to help them integrate workflows covering material traceability, supply chain analysis, long-term quality control, and demolition. This effort includes creating user experiences that allow people in different roles to work with data across a project. Such modern applications benefit from digital twin and digital thread capabilities. For example, Beck said, repair crews need to know the technical properties of equipment, like attachment height. Manufacturers need to understand the specifications, material information, and usage conditions. Builders need detailed plans and landscape information. Maintenance teams need to know how to access a site without trespassing. 
And finance teams need to know how much a part costs, how long it will last, and how expensive it is to maintain. Ignore digital threads at your peril Beck told VentureBeat that digital twins have not particularly taken off in areas like critical infrastructure design because the focus was on speed and efficiency. But faster designs came at the expense of information management. Now the tides are changing, with improvements to tools that speed up design and preserve data relationships across apps. It is now possible to create an intelligent model-based design in less time than it previously took to complete simple job sketches. For example, new capabilities like raster analytics automate conversions between map images and the entities, objects, or events needed for other applications. Esri is also working on a slew of new integration and user experience capabilities to simplify digital threads. “This enables people to take these rich models and leverage integration via service-based architectures to create a common information-based ecosystem,” Beck said. “This is what is enabling digital twins and the digital thread.” With its extensive experience in location intelligence, Esri appears well-positioned to bridge a broad set of applications and use cases that feed digital twins. In various discussions at the recent event, users explored the way digital twins and digital threads will develop. Healthier hospitals and offices Esri has made a big push to bring location intelligence indoors. The goal is to improve asset tracking, optimize facilities, and streamline facilities planning. Loma Linda University Health CIO Mark Zirkelbach said the digital twins of the medical facilities helped the staff plan and optimize COVID-19 social distancing signage for visitors during the pandemic. 
Down the road, he also wants to use digital threads to make it easier for staff to find expensive assets like medical devices, drugs and other regulated assets, and critical assets that may be scattered around the hospital, such as oxygen tanks. Arup digital specialist Luke Cooper said creating digital twins of his company’s building complexes has made it easier to transition staff back to the office after lockdown. Arup has 16,000 employees spread across 94 offices. Workers are coming back on a limited schedule, and digital twins help improve the employee experience of finding a desk — and finding each other — in a constantly shifting environment. The technology also allows operations teams to figure out why employees use some offices less than others. Cooper also found that a shared digital twin can help improve conversations about issues when employees have to reach a consensus quickly. Facilitating quality control Other Esri improvements have focused on extending the use of digital threads across more users — all with appropriate governance. At the conference, Brian Abcunas, associate electrical engineer at Peabody Municipal Light and Power, talked about creating a workflow to make it easier for more people to notice mistakes in their network’s digital twin. The power company is constantly making changes, like replacing transformers or adding circuits, that do not always get updated on the master map. Traditionally, one person was responsible for cross-referencing paper documents, CAD drawings, and GIS maps to find errors. Now, Abcunas’ team has streamlined the process using a web-based interface. In another application, the Telluride Ski and Golf resort recently built a digital twin of its facility to help orchestrate an ambitious expansion of its snow-making operation. GIS analyst and drone operator Matt Tarkington said a digital twin allows the resort to plan for long-term sustainability while using the least amount of water and power. 
The digital twin also helps coordinate communication about crucial events — such as avalanches, equipment breakdowns, and accidents — across teams in real time. "
15,318
2,021
"Pharma startup Quris aims to use a 'patient on a chip' to target drug delivery | VentureBeat"
"https://venturebeat.com/business/pharma-startup-quris-aims-to-use-a-patient-on-a-chip-to-target-drug-delivery"
"Pharma startup Quris aims to use a ‘patient on a chip’ to target drug delivery Nobel Laureate Aaron Ciechanover is one of several notable names behind pharma startup Quris. The company aims to bring together artificial intelligence, the industry’s vast knowledge of the human genome, and the concept of the “patient on a chip” to improve the effectiveness of drug delivery. Last month, the startup announced the launch of its AI platform and a $9 million seed round, led by Moshe Yanai (the mind behind EMC Symmetrix) and Drs. Judith and Jacob Richter (founders of Medinol, which has sold more than 2 million cardiovascular stents). Ciechanover and Moderna cofounder Robert Langer are among Quris’ noteworthy advisers. 
For decades, medical research has successfully cured cancer and treated rare diseases in countless mice – but has not done so as frequently in humans. As Nobel Laureate Aaron Ciechanover points out, mice are different from humans in nearly every way, from genetics to diet. “It’s no wonder that 92% of the drugs that are successful in mice are failing in clinical trials in humans,” Ciechanover says. “At the base of it, pharma is amazingly ineffective,” says Quris founder and CEO Dr. Isaac Bentwich, whose previous companies have focused on computerized medical records, genomic data analysis, and analyzing soil conditions to grow more crops using fewer resources. “We need to be able to better predict which drug candidates will work safely in humans.” Testing drugs can be similar to predicting elections The concept of the patient on a chip drives the Quris approach. It’s similar to the practice of testing different chemotherapy options on a tumor that has been removed from a patient, Bentwich says – only the patient on a chip uses stem cells to create miniaturized versions of internal organs and arteries. Hundreds of patients can fit on a chip. By applying the Quris AI platform to enough patients, it’s possible to test more than 1,000 drugs on the same patient cohort at the same time, Bentwich explains. Quris has developed its AI tool to predict things like which patients are the best candidates for certain drugs, as well as which drugs will be most effective on those particular patients based on various genome subtypes. Patients with a similar subtype of cancer or rare genetic disease are likely to respond to similar therapies. “You’re not just selecting an individual. You’re selecting a representative group, so you can get the right balance of age and gender, and occupation. It’s like predicting elections,” Bentwich says. 
At launch, Quris has two projects in the works. One is a partnership with a large pharma firm to test the platform, with an option to purchase it to help develop a single drug over a five-year period. Quris anticipates charging a rate of $60 million to $100 million per drug or indication — a fraction of the potential losses due to safety and efficacy failures during drug development. The approach also aligns with the recent call from the European Parliament to move away from animal testing for scientific purposes. The other is applying the platform to develop its own drug, which would treat a cause of hereditary autism known as Fragile-X syndrome. While admittedly not a large market, Bentwich describes it as an “archetypal example” of the type of conditions Quris hopes to address, as there’s no equivalent model in mice. The company hopes to begin clinical trials for the drug in 2022. “We see it as a proof of concept, as a validation of our engine.” Standing out among pharma companies using AI There is no shortage of companies both large and small applying AI to pharma R&D. Biopharma Dealmakers, a Nature Research publication, points to more than a dozen that have received funding or completed an IPO since the start of 2020, along with a similar number of partnerships inked since the spring of 2019. This pace doesn’t surprise Natalie Schibell, a senior healthcare analyst at Forrester. She notes that pharma is leaning on AI tech for a host of tasks, including recruiting patients, cleansing real-world data, monitoring biomarkers, and automating administrative tasks. “Historically, it takes an average of 10 to 12 years to bring a new drug to market, including five to seven years for clinical trials. The benefits of AI can reduce risk and save an abundance of effort, cost, and overall time to market,” she says. 
“The capacity for AI to automate data capture, digitize clinical assessment, and share data across multiple systems far outweighs the speed and volume that can be performed manually.” What differentiates Quris from other AI companies targeting pharma, Bentwich says, is taking the step beyond analytics and working to bring an AI-developed drug into the clinic. With the work on the Fragile-X syndrome treatment, “the Quris engine has yielded a drug that allows us to validate our approach.” A ‘Mini-Me’ for personal testing The current cost of generating a set of miniature organs on a chip is roughly $15,000, Bentwich says. It’s expensive, sure, but a far cry from $2 million about a decade ago. And it’s poised to drop to $100 within the next decade – following a trajectory similar to the cost of sequencing the human genome. At that price point, consumers could acquire a “Mini-Me” that lets them personally test not just pharmaceutical therapies, but over-the-counter medications like vitamins and antibiotics. “If you can identify the genomic markers that make the differences between us, think of the power that can have,” Bentwich says. “Predicting which drugs will work best in humans is a trillion-dollar problem. It’s the most lucrative AI challenge of our time.” "
15,319
2,022
"How USD can become an open 3D standard of the metaverse | VentureBeat"
"https://venturebeat.com/games/how-usd-can-become-an-open-3d-standard-of-the-metaverse"
"How USD can become an open 3D standard of the metaverse As CES 2023 approaches with a lot of metaverse talk, one of the technologies that is coming to the forefront of standards discussions is Universal Scene Description, or USD. The metaverse has many definitions, but many view it as a 3D version of the web, a network or universe of virtual worlds and destinations that represent the next generation of the internet. In an ideal world, the metaverse will be open — not owned by any single company — and it will be interoperable so that platforms, developers and users can reuse their 3D assets and carry them across the virtual worlds that might be as plentiful as websites. While game companies like Roblox, Microsoft (Minecraft) and Epic Games (Fortnite) have created the most metaverse-like experiences to date, just about every industry will likely invest in the metaverse, the same way that all companies did with the Web. 
Among enterprises, companies such as Nvidia have galvanized interest in creating digital twins, where companies like BMW can design a factory in a digital space and then build that factory in the real world. As the companies operate the real factories, they can collect sensor data that can be used to make the digital twin better, resulting in improvements to the real factories. There are many such applications possible with the metaverse, and that’s why reusing assets — and setting metaverse standards — is so important. “It’s probably one of the biggest things that has ever happened for computer graphics. Because if we can get this kind of standardization, what it essentially does is unlock the potential progress we can make,” said Rev Lebaredian, vice president of Omniverse and simulation technology at Nvidia, in an interview with GamesBeat. “Today, there’s a lot of effort that’s made to create 3D tools, 3D datasets, 3D experiences, where there’s a lot of redundant work happening. We’re not building on the same foundation. Everybody has to redo everything every time.” The lingua franca of the metaverse? USD is a 3D file format that can be like a lingua franca that makes those assets compatible, with a chance to unify both user experiences and developer workflows. It could be a standard that enables the metaverse — which many see as the next version of the internet — just like HTML enabled the Web. Richard Kerris, vice president of the Omniverse platform at Nvidia, said at our recent MetaBeat event, “We think of USD as the HTML of 3D. The connective tissue that we experience the web through today is HTML. That’s what makes it seamless from website to website, device to device. It wasn’t always that way. But those of us that are old enough to remember [things like] what extension do you have loaded? What browser?” He added, “Once that got remedied with HTML, everything’s been smooth sailing. 
USD is going to do that for 3D so that we can go from virtual world to virtual world seamlessly.” To make the metaverse happen faster, Kerris believes everybody needs to align around a standard. “We’re seeing that happen more and more with USD,” he said. “I think every 3D company out there today either supports USD or has a plan to support USD in one way or another, whether it’s exporting out to it or creating a live bidirectional connector to the platform. But that takes time. And it takes everybody working together for it. You don’t want to have walled gardens around this. It doesn’t work on the internet. It’s not going to work in the metaverse. You want to have everything be open and accessible.” Solid momentum The number of companies using USD in one way or another is now in the thousands, Lebaredian said. “We’re well on our way to making this a real standard,” Lebaredian said. “I think every single 3D tools maker, engine maker and others have USD on their radar.” But the process of setting the standard takes time, as it requires some democratic feedback. And it isn’t clear if there will be more than one standard in this space yet. Tony Parisi is chief strategy officer for Lamina1, a blockchain infrastructure company started by sci-fi author Neal Stephenson, who coined the term “metaverse” in his 1992 novel Snow Crash. Parisi was also a pioneer of VRML (Virtual Reality Modeling Language), which was aimed at standardizing 3D on the Web in the 1990s. It was ahead of its time, but it set in motion a lot of the technologies in place today that are aimed at running 3D imagery anywhere. He favors the glTF 3D file format, which is targeted at enabling 3D on the web for smartphones. It’s kind of a low-end technology meant to ensure that computers or smartphones of any kind can display a 3D object for an e-commerce site without looking janky. 
By contrast, the heavy-duty USD is aimed more at makers of games and films, and in some cases it might not be as efficient as glTF in mobile apps. “With glTF it was designed to deliver it onto a phone, get it into a web page,” he said. “Those aren’t the same goals. They can intersect and overlap on the high-end side of glTF. USD was not designed to deliver and render a 3D interactive experience on someone’s phone.” Hub and spoke With a USD pipeline, there is a kind of hub-and-spoke model. The hub is where the data sits at the center. The tools that can access and update that data are the spokes. All the tools can update the data as needed, without losing anything, Lebaredian said. “Normally, when a company has a complex 3D pipeline, they also need pipeline engineers, somebody to create all the glue that lets them use these tools together, or they do some ad hoc thing,” he said. “It’s a lot of maintenance. It’s nowhere near as useful as they would like.” A big problem with that pipeline model is that if you make a change in one part of the pipeline, you have to propagate that change across tools in other parts of the pipeline. “That’s why Pixar invented USD,” Lebaredian said. “Pixar was the first one to really solve this fundamental problem. So there’s really nothing else that compares to USD in this way. It’s unique in this regard.” A long journey — how far to go? It’s inspiring to be able to point to the web as an example of an open platform supported by open standards. But it’s also discouraging, as it shows that the journey can be a long one. Nobody expects this to be an easy or a short journey. Standards take time to happen, and there are going to be a lot of them related to the metaverse. No one company can do it all, Kerris said. “USD has to become open, and it has to really be taken over by a third-party consortium of some sort, either through the Open Software Foundation or Khronos or something like that,” said Kevin Krewell, an analyst at Tirias Research. 
“Everybody gets a say in it, and everybody feels that it’s not proprietary to one company.” USD has a chance to become a de facto standard, but it could reach a wider audience if it becomes an official standard validated by a standards committee. “I think that’s where we are with USD today, on the road to becoming a de facto standard. And what we’re working towards is setting up more formal governance of the standard so it can expand to more uses and more domains,” Lebaredian said. “We’d like to take USD all the way to that point where it can be used everywhere for anything that’s 3D related. We started this journey years ago.” Steve May, CTO at Pixar Animation Studios, said in an interview with GamesBeat that it’s still early days for USD in its quest to be a standard, even though the file format is a decade old. “It’s still early days of USD, especially when we’re talking about doing other things like behaviors and physics or even materials that we don’t really have good standards for,” May said. “I think it’s at the very beginning of a long path. It’s going to need a lot of guidance. Pixar wants to be one of the primary guides here, but, on the other hand, the cool thing about it is that people are using it for stuff that we did not envision.” Guido Quaroni, a key Pixar leader who oversaw the open sourcing of USD before moving over to Adobe, said in an interview with GamesBeat that he hopes the standard can be defined in the next six months or year. After that, more work will have to be done, but the momentum will be clear. The metaverse One thing Pixar definitely didn’t foresee was the popularity of the metaverse — an idea that took root in the pandemic and exploded in popularity when Mark Zuckerberg renamed Facebook as Meta in October 2021. “We certainly were not thinking about the metaverse when we were originally making USD,” said May. “It was just how do we make our movie. 
What’s neat is when you see it being used in ways that we never thought about, that’s always a good sign of a good technology.” But while many wonder if Meta will be truly open and expand its focus beyond virtual reality, even Zuckerberg has said he supports an open metaverse and that no company can do it by itself. Nvidia has more cred in driving USD than companies like Meta, which has had some history of not being open. That leaves some analysts puzzled about why it’s taken so long to move the standard forward. “I can’t say why USD hasn’t been universally embraced,” Krewell said. “It definitely has a lot of capability.” Lebaredian responded, “It might seem like a long time. But when Pixar put it out there, they didn’t even imagine it being used for the things it’s already being used for now. They thought they would use this to make movies. And this might be useful for other people who make movies like they do. I think it’s happening quite fast. We’re the ones that saw the potential for it in all of these other industries because we’re involved in all of these other industries.” Why it matters The companies that are contributing to the effort to create open standards for the metaverse and its related technologies say it’s worth it. While the standards process may be boring or daunting, it helps the industries involved ensure that some of the most important priorities of humanity are addressed, such as issues of privacy, free and fair competition, regulations, ethics, centralization versus decentralization, transactions, protections for young people and avoiding lock-ins that lead to monopolies. And standards can unlock markets so we can reap the economic benefits, just as the web unlocked ecommerce that is now generating trillions of dollars in revenue. 
“It’s a different process and goal than building proprietary products, which is awesome and necessary,” said Neil Trevett, a vice president at Nvidia and head of the standards body The Khronos Group, speaking at the MetaBeat event. “It’s complementary. You are building a whole ecosystem. It can be a very thrilling ride. You have to be patient. It does take longer than shipping a proprietary product. If your measure is the [time it takes to reach] a pervasive ecosystem, there is no faster or more exciting way to get there than to cooperate as an industry to create open standards that will benefit a bunch of different companies.” HTML took years to solidify, unifying text, graphics, and hyperlinks into the platform of the web. USD is just one of multiple standards that would have to be adopted to make the metaverse operate as seamlessly as the web. Nobody really wants fragmentation. Lebaredian pointed to the problems of a lack of a 3D standard at a company like Disney, which has 3D-animated Marvel characters like Iron Man for its movies and 3D characters for games, toys and a range of other projects. No one wants to reinvent Iron Man every time with a new 3D format in a new medium. “This is a huge problem for everyone, where we’re just recreating the same things over and over again,” Lebaredian said. “USD gives us this opportunity to have commonality. So you can take the asset from one studio and give it to another and have them continue the work.” Ben Houston, CTO of Threekit and one of the main contributors to the Khronos 3D commerce working group, has been working in 3D since the 1990s. He is one of the architects of interactivity for glTF, which is compatible with USD but could become a rival. He said in an interview with GamesBeat that it’s frustrating that you can make a 3D object look good in a tool like Blender and then have to re-create it when you bring it into something like Unity. If you put it in Unreal Engine, you have to touch it up again. 
“It was horrible when it comes to productivity,” he said. Lebaredian said that a USD standard would matter to game engines, game makers, 3D tool makers and anyone creating content that will be in the metaverse. That includes companies like IKEA, which is making digital virtual versions of its furniture, and ecommerce companies that are going to populate the 3D web with things that people are going to buy. A draft of a standard has been in the works for some time, but it is in the process of being refined. The groups involved have to avoid duplication across organizations. Every company has a vested interest in seeing something adopted. How Pixar originally created USD Pixar built USD so it could make use of the 3D assets that it was creating for its animated movies. The file format is in its fourth generation as a “composed scene description” at Pixar. As it worked on films like Toy Story and A Bug’s Life, the company felt the pain of having to reinvent technologies for each generation of movie. It started with something called Marionette, but decided to move on to a new animation system called Presto, which was used in the film Brave for the first time. Presto evolved into something called TidScene, and eventually Pixar started pulling different pieces together for its USD project, which started in 2012. USD delivered a new scenegraph that sits on top of the same composition engine that Presto used, and it introduced parallel computation into all levels of the scene description and composition core. One of the good things about USD is that it was designed for extensibility, May said. “We don’t know all of the things that we’re going to need,” he said. “It’s going to have to be able to adapt.” Going open source with USD In 2016, Pixar published USD as open source software. This way, Pixar didn’t have to invest its own programming resources to adapt every tool to work in a Pixar movie. May said the reason Pixar open-sourced it was for portability. 
There was a lot of friction to bring on new tools. “That’s what we were trying to solve,” he said. USD let developers describe the different elements in a 3D scene, like the environment and the 3D objects in it. Designers use it to describe how a scene should look and how it can be reproduced visually on a variety of different hardware devices. USD also lets artists and designers work on the same scene and then put the results together at the end. USD’s composition engine allows updates to be made to data and scenes simultaneously. Groups can work on the same pieces of a 3D scene in parallel. This turned out to be great for unifying the team, but Pixar also wanted unity among its tool vendors. Rather than create a whole new chain of 3D tools for every movie, it wanted to standardize around USD-compatible tools. And so Pixar decided to open-source USD. As far as engineering goes, May said they’re just getting started. Challenges ahead include specific problems like expressing animation curve data, which is a common task in animation. “I really want to see USD become successful in this broader sense of outside of Pixar and outside of animation and visual effects,” May said. “We really do need to rely more on the community. I’m blown away by how broadly it’s being used outside of the film industry and the animation industry.” Nvidia’s Holodeck and the coming of the Omniverse Around the same time as USD was going open source, Nvidia was carving out its business in workstations and other high-end computers used for content creation. It created software applications such as Holodeck, which debuted in 2017 as a kind of metaverse for engineers. It was a 3D environment where engineers could remotely collaborate with each other, using Nvidia’s 3D graphics hardware. After Pixar open-sourced the tech, Nvidia recognized early that USD had potential. Lebaredian said that Nvidia’s graphics experts took a look at USD and decided to get behind it. 
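That parallel, non-destructive workflow is easiest to see in USD's human-readable .usda text encoding. The sketch below is illustrative only — the file names, prim names and values are invented, not Pixar data — but it shows the core idea: one layer defines a prim, a second artist's layer overrides a single attribute with an "over", and a root layer composes both via subLayers, with the stronger layer listed first.

```python
# Minimal sketch of USD's layered composition, written as plain
# .usda text files. All names and values here are hypothetical.

# Base layer: an artist defines the scene's prims and attributes.
base = """#usda 1.0
def Xform "World"
{
    def Sphere "Ball"
    {
        double radius = 1
    }
}
"""

# Override layer: a second artist changes only the radius, in a
# separate file. The base layer is never modified.
override = """#usda 1.0
over "World"
{
    over "Ball"
    {
        double radius = 2
    }
}
"""

# Root layer: composes both. Earlier subLayers are stronger, so the
# override's radius wins when the stage is composed.
root = """#usda 1.0
(
    subLayers = [
        @override.usda@,
        @base.usda@
    ]
)
"""

for name, text in [("base.usda", base),
                   ("override.usda", override),
                   ("root.usda", root)]:
    with open(name, "w") as f:
        f.write(text)
```

Opening root.usda with Pixar's pxr.Usd Python API (Usd.Stage.Open) would compose the layers into a single stage where the ball's radius is 2 — each tool or artist edits its own layer, and nothing is lost when the results are put together.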
Nvidia started trying to accelerate it. The company could use it as the foundation behind its new Omniverse content creation environment, which was born as a robot simulation environment. Seeing it as a tool for remote collaboration — a metaverse for engineers — the company tried to get everyone to back it and contribute 3D assets that could be compatible with each other. Nvidia believed in it so much that it used its own programming resources to port tools from other companies to USD, Lebaredian said. Nvidia wrote plug-ins for various tools so the tools could become compatible with USD, he said. Once the tool makers realized the value of having USD as a standard, they would take it over themselves. “It transitioned from us pushing to them pulling,” Lebaredian said. Nvidia has been working to add new features beyond the base capabilities within USD in the hopes that it can become more versatile as a standard, Lebaredian said. “We’ve been working to advance that by adding new schemas. We’ve done a lot of that. We’ve done all of it with the community, most notably Pixar, and even with Apple,” said Lebaredian. What motivated Nvidia to push for a standard? While some point at Nvidia’s past history favoring proprietary tech, Nvidia has experience guiding standards such as OpenGL, Vulkan and Microsoft’s DirectX. “From the very beginning, we have contributed to standards, and our livelihood has depended on implementing standards and doing it well and helping guide them,” Lebaredian said. He added, “It’s inconceivable to us that the metaverse, a 3D incarnation of the web, can exist without a standard way of describing the things in it. And Nvidia is uniquely positioned to help in building this metaverse with our computers, with our AI chips, with our algorithms, with the full stack of stuff that we do. 
If we’re going to contribute, we need a good standard for describing the stuff inside the metaverse to attach our technology to.” As companies evolve their metaverse strategies, industries that never cooperated before are converging on each other. “We were seeing a convergence between technologies that used to be in different domains,” he said. “We started seeing game engines being used for industrial purposes, or to create AI and robotics. And up until five or six years ago, nobody was doing that. Nobody was using game engines for something as serious as building your car factory. But once that started to happen, then the need to standardize and harmonize the data and tools between these industries became very apparent to us.” Growing support Adobe’s Quaroni said that Nvidia’s investment in USD has been hugely important. He noted that Omniverse remains Nvidia’s proprietary technology, but its contributions to the open-source USD have helped it spread and spurred more investment in the technology at Adobe. “Partially because of Nvidia, but partly because we believe in this format, we’re adding support for USD to our apps and looking at the benefits,” said Quaroni, senior director of engineering on the Adobe 3D & immersive team. Nvidia is also contributing code to the USD open source standard so that Pixar isn’t the only contributor in that respect and there can be multiple implementations of the standard. Lebaredian said a number of the details come from Nvidia, as it has a large engineering team dedicated to the effort. “If I had to predict, the industry could come together around USD for certain use cases in a friendly way. There’s a spirit of cooperation around this stuff now,” Parisi said. “Everyone understands we have a lot of greenfield in front of us. Let’s face it, we need more 3D content, and more and more of it, for the metaverse. I think USD is going to be foundational. 
It’s fundamental for authoring and preserving fidelity and connecting professional tools pipelines together, so the pros can get their job done. The metaverse needs this the way the web needed Photoshop and imaging tools.” This year at SIGGRAPH, the big graphics event, there were a lot of talks about USD and its future. In the past few years, Nvidia has made a big push into Omniverse, more than doubling the size of the engineering team behind it. Apple, Pixar, Adobe and many other companies have also expanded their efforts. Lebaredian said that many tool makers are now making their own investments, doing the porting work themselves. “I think Rev is correct this is a sign of support,” said Jon Peddie, an analyst at Jon Peddie Research, in an interview with GamesBeat. “It’s a tipping point.” Enlightened self-interest This is a case where collaboration is an opportunity. “It’s in our interest to do this, because we are a pure technology company, and our product is tech that we provide to others who take that and shape it into applications and solutions and workflows,” Lebaredian said. “We need to have standards that our technology is compatible with or that our technology can use, so we can have as much surface area as possible for our tech. The more industries that use the same common center, the more opportunity for us to provide tech that plugs into the existing content and data that’s out there.” As a hardware vendor and software ecosystem creator, Nvidia doesn’t have a bias toward any particular software vendor. In that sense, it can be neutral in a standards effort, Lebaredian said. “We’re like Switzerland in this, and we can go work with every other software vendor that can work with every engine,” Lebaredian said. 
“We can work with everyone else to negotiate a standard that everyone can use.” Regarding walled gardens, Kerris said, “I think that we’ve learned our lesson with companies that tried to wall off parts of the internet in the early days. You have to use our extension, or it doesn’t work. I think those lessons have been learned.” Competition between glTF and USD There is potential competition between the high-end USD and the low-end glTF (GL Transmission Format), which is akin to the competition between Unreal and Unity in the game engine business. glTF is a 3D transmission format developed within the Khronos Group. It is orthogonal competition, as the formats are aimed at different markets. Films are the high end of 3D animation, while game engines don’t support all of the same features yet. But USD adoption would go a long way toward improving interoperability, and then USD data could be converted to glTF for low-end applications. Parisi sees a lot of “marketing hype” behind USD now. He sees USD as designed to preserve visual fidelity. It was aimed at supporting high-end professional tools and pipelines. At the same time, Parisi doesn’t think that USD and glTF should be positioned as competitors. “They’re not competitors. They’re orthogonal, designed for very different things, as I said. And I think the best analogy is, again, like the image formats that are designed to preserve data, like a Photoshop file with its layers. That’s all about preserving fidelity, versus the downstream compressed file formats like JPEG and PNG. They were designed to be delivered quickly, easily. And so glTF is obviously much more like the JPEG. It was designed to get it over the wire into a phone or on a website as fast as possible. But you don’t have to have a good feature set there.” Ultimately, Parisi thinks professional design tools like Omniverse and Unreal, and applications on those platforms, will use USD for high-end machines that can show off outstanding graphics. 
But glTF may be good enough for low-end applications such as viewing objects for ecommerce. “Some perceive there is a competition right now between USD and glTF. And maybe there are others out there. There are multiple standards and some adopt one or the other,” Quaroni said. “But the reality is that, if you think about image file formats, there are many out there and that is OK at the end of the day. My hope is that maybe we’ll converge to a couple.” Quaroni still believes USD is currently the most capable in terms of richness of the description of the world of 3D, taking into account things like composition, assembly, camera, animation, and more. “It reminds me a little bit of VHS and Betamax. In a way, when there were like two different standards, one was actually potentially better or more advanced, but the other one took off much more widely.” The discussion about glTF and USD is pretty similar, Quaroni said. But so long as it’s possible to transfer from one format to another, Quaroni doesn’t see a major problem brewing. There are definitely others who favor glTF, which has a unified way of handling 3D materials. Now you can move 3D objects all over the place and it works great, said Houston. “USD is an amazing format. But that’s a bit different philosophy from glTF,” Houston said. “glTF has been about fast and efficient and very opinionated on what we want to hear. And that makes it a lot easier to implement. Because USD can do everything.” A survey in the Metaverse Standards Forum found that 1,800 respondents were concerned about the challenge of interoperability. “That speaks to the notion that the industry is not necessarily ready to resolve upon a format,” Houston said. “What they want is options and interoperability between formats.” As to glTF, Lebaredian said that USD is one of many of the building blocks needed to build the metaverse, and glTF is another. 
He thinks they are compatible technologies and expects them to evolve in a compatible way. Regarding glTF being a lightweight part of the standard, Kerris at Nvidia said, “The beauty of USD being so open is that you can bring those things to it. I think that you want to have it be accessible to everyone. You don’t want to try to dominate and say this is the way and everybody has to follow. But the beauty of USD is that it is so open, and so many companies can contribute to it.” Is this orthogonal competition? “That’s what we’re trying to figure out as we discuss it,” Quaroni said. May at Pixar added, “The question is whether we have a world where we have simple representation for simple 3D objects, and do we need separately something more sophisticated like USD for doing really complex things. That’s an area of active discussion.” The fragmentation risk Software companies don’t want fragmentation of standards, as it drives costs up. As for fragmentation, Apple supports a variant of the USD spec dubbed USDZ, which is well-suited for mobile and AR devices because it removes features that don’t work well on mobile. Apple has also said it plans to support glTF. But Houston noted that many games have constraints. They have to run on mobile devices, and that’s not easy to make happen. glTF is designed for real-time interactivity and interchange rather than doing everything like USD. Houston doesn’t want to dis the USD format, as his company still uses it as a high-quality interchange format. “You do not want to be high-end only because then you can interchange with a high-end platform but not with everybody else,” Houston said. “You can’t be so far ahead of the curve that nobody uses it.” Houston noted there are extremes in metaverse experiences, from hyper-realistic 3D metahumans (lifelike humans) and 3D-animated films on the high end to 3D objects on an ecommerce site on the low end. 
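That "fast and opinionated" design shows up in the format itself: a minimal valid glTF 2.0 asset is just a small JSON document, which is why even a modest phone browser can parse it. The sketch below is illustrative (the node name is made up); a real asset would add meshes, materials and binary buffers on top of this skeleton.

```python
import json

# Minimal glTF 2.0 asset: a default scene containing one (empty) node.
# Geometry, materials and textures would go in "meshes", "materials",
# "buffers" and so on; binary payloads ride alongside or inline.
gltf = {
    "asset": {"version": "2.0"},   # the only field the spec requires
    "scene": 0,                    # index of the default scene
    "scenes": [{"nodes": [0]}],
    "nodes": [{"name": "Ball"}],   # hypothetical node name
}

doc = json.dumps(gltf, indent=2)
print(doc)
```

By contrast, a production USD scene carries composition arcs, variants and schema metadata that a lightweight web viewer has no need to understand — which is the case Parisi makes for treating the two formats as complementary delivery and authoring layers rather than competitors.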
“When you use USD, you have to know what you are aiming for,” said Houston, whose company supports both formats. “If you want to have true interoperability, you can’t do everything and the kitchen sink. We’re all building this plane as we’re flying it. I think coexistence will happen.” Enemies of standards? The true competition for USD comes from developers who create their own file formats for their own applications and don’t care if the files are reusable. For instance, video game designers may not want to use the high-end 3D cars with meticulous parts as designed by car engineers. Instead, they might just want something shiny on the outside with nothing on the inside, because you can’t see inside the car in a game. And ecommerce companies might not care if that car doesn’t zoom along at 60 frames per second. These kinds of details are the enemies of standards and reusable assets, Parisi said. “It’s a great idea to have reusability, but it’s just hard to solve and technically I don’t know if we will ever get there,” he said. Krewell at Tirias Research said USD has the capability to become a metaverse standard. But it would take a who’s who of graphics and gaming companies — those who care about immersive applications — to throw their support behind it. “In the end, we really need the content companies to step up. If the goal is to have digital assets you can move from environment to environment, that means you need a universal standard, where those digital assets can be described. And each localized environment can then recognize it and re-create it. So that’s where USD really could work,” Krewell said. “But it has to be embraced by a consortium.” In terms of making USD into a metaverse standard, Quaroni said, “The challenge is to get all the big players to agree.” That’s where he sees the Khronos Group playing an important role. Will Omniverse be open? Then there is the question of which USD-based tool will become ubiquitous for the metaverse. 
Omniverse is based on Nvidia technology. And an Omniverse project is not interoperable with other kinds of projects, as Omniverse takes USD and adds Nvidia-defined extensions on top of it. Omniverse isn’t like a game metaverse tool either, as it is mostly being used for high-end simulation environments. “It’s incredibly high-end and that’s good,” Houston said. “But it’s like its own thing. It’s not the metaverse. Maybe it turns into the metaverse, but right now it isn’t. For instance, game developers are not on it.” Nvidia itself sees Omniverse as an important market driver. “I fully expect that, you know, Omniverse should be accessible by everybody, whether they’re on our hardware, or they want to access it in some virtual way,” Kerris said. May believes there isn’t a good reason to restrict USD to Nvidia hardware, and no one is really talking about that. The question is whether Omniverse becomes universally popular. “We will use every kind of device platform at Pixar. But on the other hand, we do want to run well on everything from mobile devices all the way up to high-end workstations,” May said. Quaroni noted that some people might be happier if Omniverse were supporting other platforms and it was “more like open source itself.” He added, “I actually see that to be inevitable in my view.” Of course, market forces would need to change for that to happen, as Nvidia is perfectly happy selling more RTX hardware right now to help people use the Omniverse tool. In the name of helping USD spread, Adobe announced that it is working with Qualcomm to port USD to Android technology. “Why are we doing this? Because it’s important for Adobe to have software on every platform,” Quaroni said. “If we adopt USD on the desktop apps, we want to be able to seamlessly go from desktop apps to iPads and tablets and all the way to VR devices.” Will the hardware makers agree? 
An added source of friction is whether the hardware companies — chip makers and design companies like Intel, Advanced Micro Devices and Qualcomm — will get behind a standard. Those companies have a vested interest in seeing the metaverse succeed, but they may not be as aggressive about supporting something that was championed so closely by Nvidia. There are tricks that big companies can pull to try to hijack a standards process. They can make their own technologies free and have the technologies that they use become supported by the entire industry. That’s a clever move. Peddie said that Nvidia faces different temptations. He said Nvidia could try to be like Apple, offering superior but closed technologies in a walled garden. Or it could try to expand the market as fast as possible by supporting openness. “If you want to run Omniverse now, you have to use Nvidia RTX,” Peddie said. “There have been multiple attempts over the years to build this universal file format that any program could plug into.” A case in point was Nvidia’s success in GPU computing, which uses a graphics chip to do non-graphics parallel computing. Nvidia developed its own software, dubbed CUDA, to enable GPU computing. It became extremely popular, and the software made it easy to implement. Nvidia broke into datacenters and high-performance computing with the technology that ran on its graphics chips. But CUDA didn’t get support from the other chipmakers, as it was tied to Nvidia hardware. Nvidia spent its own money to develop the CUDA software, and it made sense that it was a closed system. It went to universities and gave CUDA away for free. Engineers would learn how to do CUDA programming, and that made CUDA a de facto standard. “CUDA was one of the most brilliant things anybody ever did. They bet and then they get credit for doing something clever,” Peddie said. “Nvidia gets credit for committing early in the GPU compute development. 
It was something that more people knew how to program with. But its translation, when you compiled it, would only write code to an Nvidia GPU. So it wasn’t open to the world.” This isn’t unusual in a lot of ways. Apple, for instance, has its Metal low-level graphics API that is aimed at making software run better on Apple devices. The GPU computing market became lucrative for Nvidia. AMD came to the market later and supported the rival open-source tech OpenCL, but Nvidia’s hold was hard to beat. Could Nvidia have open-sourced the CUDA tech? Perhaps, but it opted instead to create a de facto standard, rather than a real one. I asked AMD and Intel if they were supporters of USD, but they didn’t get back to me with answers by publication time. Peddie said AMD is late to the party. [Update: Intel responded: Intel’s very bullish on USD and has been involved with USD for some time. The default path-traced renderer for Pixar’s main USD repo – hdEmbree – is, as you might guess from the name, based on the Intel Embree raytracing kernels. We’ve been showing USD-based demos of our new graphics hardware at events like Siggraph and SuperComputing’22 and are actively involved in the trade groups (such as Metaverse Standards Forum and Academy Software Foundation) addressing USD and its interoperability with glTF.] And while Nvidia supports the standards process around USD, it has made Omniverse exclusive to its RTX technology, which creates awesome shadows and lighting tech in imagery produced by its GeForce graphics cards. “The history is often the closer you get to things that are runtime, the harder it gets for people to cooperate,” Parisi said. Lebaredian points out that the discussion around USD isn’t going to be tied to anyone’s specific chips. It’s high enough up the software stack that any chip should be able to run it. We’ll find out if that is true as other chipmakers disclose their positions on USD. 
“I don’t think there’s many opinions at high levels at semiconductor companies about these kinds of things. The nature of data standards is such that they’re only valuable if kind of everybody buys into it,” Lebaredian said. “It’s not going to be biased towards one hardware or the other. The only people that might be threatened by such standards are ones that have a walled garden of data that they don’t want to open up.” The pie will be huge Since the pie will be huge, the push for collaboration should win, Parisi said. The demand for 3D will be huge, and do-it-yourself content creators can supply it if they have the right tools that let them remix content created by others. If they gather around one or two or three file formats, that would be an improvement on where we are today, Parisi said. It’s a trillion-dollar question whether the industry can come to an agreement. Parisi is aware of the grand visions everyone has about the interoperable metaverse and reuse of 3D assets. He thinks those are laudable goals. The devil will be in the details. “The rubber hits the road and it comes down to certain technologies, protocols, and tools to use it,” Parisi said. “The web has proven it can be done. I don’t think it’s that far-fetched.” The end goal of reusability So what can an open USD standard enable? “I’m hoping that with metaverse, we try not to lock it too much in because ultimately the user will have to pay the price,” Quaroni said. “If you have to buy a car five times if you go between different platforms, that would be unfortunate.” Quaroni sees a standard becoming real in the marketplace if there is a lot of reuse that is happening among the different companies, applications and platforms. “If I buy something somewhere, I can reuse it somewhere else,” he said.
“And if I visit some virtual world, I can expect things to be consistent.” I have fantasized about how Nvidia could create its Earth 2 supercomputer simulation — a digital twin of the Earth created with the Omniverse — and then predict climate change for decades to come. Nvidia CEO Jensen Huang has said that, once it does that, we could get the metaverse for free. “I don’t know if USD in its present form is robust enough to do what you’ve described, but it is a starting point,” Peddie said. “It’s like a universal compiler, and that has never worked. There are too many variables.” Will we ever get to the day when Nvidia completes its Earth 2 digital twin, and game developers like Brendan Greene can take that reusable metaverse tech and build a virtual world the size of the Earth? That’s the ultimate in reusability, and it’s something to aspire to. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. Join the GamesBeat community! Enjoy access to special events, private newsletters and more. Games Beat Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved.
Nearmap partnership to accelerate 5G with digital twins | VentureBeat (2022)
https://venturebeat.com/mobile/nearmap-partnership-to-accelerate-5g-with-digital-twins
Nearmap and Digital Twin Sims have partnered to create accurate digital twins of U.S. cities to help telcos roll out 5G and new IoT services. Nearmap’s automation pipeline transforms high-resolution imagery and spatial data into 3D models for city planning and construction. Digital Twin Sims is a telecommunication business and network modeling advisory firm. Digital Twin Sims’ cofounder, Sameer Lalwani, started the company after a 25-year career in telecom planning. The industry has long used manual approaches to craft models that started at 100 meters of resolution and gradually improved to 10 meters.
Over the last few years, he saw an opportunity to take advantage of high-resolution data from companies like Nearmap, as well as cloud computing from the likes of AWS and Microsoft Azure, to improve this resolution down to 15 cm at city scale. Planning for complexity 5G deployments create a new level of complexity and demand greater precision than traditional mobile networks. For starters, 5G requires far more base stations to reach the same levels of coverage as previous wireless services. Additionally, many spectrum bands are more susceptible to interference or reflection by buildings or even tree branches. As a result, telcos need to ensure line-of-sight paths between cell towers and coverage zones — something that has traditionally been an expensive and time-consuming process. Nearmap’s platform automatically feeds data into the Digital Twin Sims engine to generate models that reflect changes, including new construction and even the growth of trees and other vegetation that might impact coverage. The platform can constantly update these models in response to changes detected in aerial and satellite surveys. This new approach allows Digital Twin Sims to simulate thousands of nodes in a single afternoon, whereas a manual survey would traditionally require several days to plot a single node in a 5G network. The digital twins combine physical models of buildings and infrastructure — down to the lamp posts — with demographics and business models representing existing customers and insights into new opportunities. Better simulations will help telco executives evaluate business opportunities and will allow engineering teams to plan and execute optimized infrastructure. The digital twin creation process starts with telco planning teams setting a particular goal. A strategic overview for a nationwide network starts with low-resolution data for high-level planning.
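A digital twin makes this kind of line-of-sight analysis cheap to rerun in software whenever the city model changes. As a rough illustration only (a toy 2D profile check, not Digital Twin Sims' actual engine, which works in 3D and accounts for radio effects like Fresnel-zone clearance), a path is clear when every obstacle along it sits below the straight line between antenna and receiver:

```python
def line_of_sight_clear(tower_h, receiver_h, distance, obstacles):
    """Toy 2D line-of-sight check between a tower and a receiver.

    obstacles: list of (position_m, height_m) along the straight path.
    A real planning tool uses full 3D geometry plus radio propagation
    models; this only checks the direct ray against obstacle heights.
    """
    for pos, height in obstacles:
        # Height of the direct ray at this position (linear interpolation
        # between the tower antenna and the receiver).
        ray_h = tower_h + (receiver_h - tower_h) * (pos / distance)
        if height >= ray_h:
            return False  # blocked by a building or tree
    return True

# A 30 m tower serving a 2 m receiver 500 m away:
print(line_of_sight_clear(30, 2, 500, [(100, 20)]))  # True: ray is ~24.4 m there
print(line_of_sight_clear(30, 2, 500, [(400, 12)]))  # False: ray is ~7.6 m there
```

Running this check against every candidate receiver location in an up-to-date city model is what lets thousands of nodes be simulated in an afternoon instead of surveyed one at a time.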
A more tactical analysis for estimating the cost and time to provision a millimeter-wave or 5G network would begin with the highest resolution possible. Currently, operators, marketing, sales, network planning, deployments and device purchasers operate in separate business-process and data silos. Each group is working toward different metrics that, at times, conflict with other departments. “There needs to be a single entity where the datasets from all these sources are combined and looked at holistically, but that rarely happens,” Lalwani told VentureBeat. Different formats required Digital Twin Sims creates a consolidated view of data from many sources and various formats, presented within a single UI. For raster data, such as satellite and aerial imagery, it uses GeoTIFF and is now moving to Cloud Optimized GeoTIFF (COG). All vector data, representing the routes of lines, properties of buildings and demographic data, is stored in GeoPackage (GPKG) files. Point cloud data derived from lidar scans is also captured and stored in .laz files. The digital twin downloads data from the Nearmap API and loads it into an H3 server at a fixed resolution. H3 is an open source geospatial indexing system developed by Uber. Lalwani said they usually customize their existing code for a specific scenario and run it in Docker containers sized to the computing needed. Nearmap’s general manager of North America, Tony Agresta, said the company prioritizes having a robust distribution of file formats that can easily be integrated with as many third-party applications as possible. It has developed tools to translate data feeds into standard ortho imagery (as viewed from directly above), 3D data created through photogrammetry and vector AI data. 3D formats include textured mesh OBJ, SLPK, 3MX, Cesium and FBX files, and point cloud data in a LAS file. 3D digital elevation models, digital terrain models (bare earth) and true ortho are all stored as GeoTIFF.
Vector AI is stored in GPKG and Shapefiles. As mass adoption of digital twins is still in its early days, best practices for developing this technology are still a work in progress. Engineers, data scientists and executives need to keep their options open for adopting the best data formats required to simplify data pipelines, create useful simulations and provide the greatest insight for each use case. Down the road, emerging efforts like universal scene description ( USD ) could make it easier to transform data across use cases. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.
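The H3 indexing step described above — loading every asset into a grid at a fixed resolution so nearby features can be grouped and compared — can be sketched in miniature. Production pipelines use Uber's open source h3 library with hierarchical hexagonal cells; the stand-in below is a hypothetical square-grid snap that only illustrates the indexing idea, not real H3 output:

```python
# Toy stand-in for H3-style geospatial indexing: snap a coordinate to a
# square grid cell at a fixed resolution. Uber's actual H3 library uses a
# hierarchical hexagonal grid with 16 resolutions; this shows only the
# core idea of bucketing coordinates into discrete, comparable cell IDs.
def grid_cell(lat, lng, resolution_deg=0.01):
    """Return a cell ID for (lat, lng); 0.01 degrees is roughly 1 km."""
    row = int(lat // resolution_deg)
    col = int(lng // resolution_deg)
    return f"r{row}_c{col}"

# Two nearby antennas land in the same cell; a distant one does not.
a = grid_cell(40.7128, -74.0060)   # lower Manhattan
b = grid_cell(40.7131, -74.0055)   # about a block away
c = grid_cell(40.7484, -73.9857)   # Midtown
print(a == b, a == c)  # True False
```

Indexing everything — imagery tiles, buildings, towers, demographics — to a common cell scheme like this is what lets data from many formats be joined and looked at holistically.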
Blackshark and Maxar partner on earth-scale digital twins | VentureBeat (2022)
https://venturebeat.com/technology/blackshark-and-maxar-partner-on-earth-scale-digital-twins
Maxar Technologies, a provider of satellite imaging services, has announced a strategic investment in Blackshark.ai, which creates earth-scale digital twins. The exact terms of the funding were not revealed. The companies plan to work together to combine their technology portfolios to support new use cases. Blackshark has developed tools to transform petabytes of raw data into semantically labeled digital twins that describe roads, buildings, vegetation and other structures without human intervention. Maxar’s Vivid base map product transforms timely cloudless satellite imagery into geointelligence about weather, productivity, pollution, crop yields and other use cases at scale. There are many companies providing satellite imagery and visualization tools.
Blackshark’s advantage lies in transforming these into a semantically labeled digital twin, making it easier to use in various simulation use cases. More importantly, the company continuously updates this digital twin to keep the models fresh and drive alerts in response to changing conditions at scale. “Our technology outperforms other solutions in scalability,” Blackshark CTO Arno Hollosi told VentureBeat. “Our platform backend can process imagery of the entire planet in less than 72 hours, allowing us to create and update these large-scale semantic environments extremely quickly.” Maxar adds new services The strategic investment in Blackshark allows Maxar to bring additional 3D capabilities to broader markets for more customers and opens a new revenue stream from Blackshark. Blackshark will leverage Maxar’s global cloudless satellite imagery base map, Vivid, to create a highly performant and photo-realistic 3D map for enterprise and government customers in industries such as gaming, metaverse, simulation and mixed reality environments. For example, this type of offering would enable flight simulator customers to access immersive 3D digital experiences with low latency and global scale. This could support customers who may not need the global accuracy of Maxar’s full Precision3D suite. Blackshark came from the same team that built the realistic 3D models used in Microsoft Flight Simulator. The current offering continuously updates these models for new use cases like smart cities or better computer games. Maxar chief product officer and senior vice president Dan Nord spent ten years in the video game industry at Electronic Arts and Amazon, building large-scale simulation games before joining Maxar last year. At Maxar, he has led work on Vivid, its global 2D base map product and Precision3D, a 3D digital twin.
The two companies provide a suite of 3D products covering nearly all use cases, spanning from high resolution/low accuracy to low resolution/high accuracy across defense, intelligence, navigation and ESG. Nord believes the partnership could help Maxar expand these core offerings to a larger audience. “We expect the partnership with Blackshark to open up new revenue streams for Maxar by being able to enter new markets and reach customers who value photo-realism,” he told VentureBeat. “Our joint offer will position us well in industries such as metaverse , simulation for autonomous driving and flying, synthetic training environments and entertainment, among others.” Blackshark gains new imagery and sales The partnership will streamline Blackshark’s image acquisition process on the product development side. Until now, Blackshark acquired new satellite input imagery for customers on demand. Blackshark plans to collaborate with Maxar on the product side to provide the best available satellite image data on a global scale. The partnership will also provide a richer starting point to help refine Blackshark’s tools for automatically generating semantic labels for characterizing objects in petabytes of data. “Having consistent image quality across the entire planet will significantly increase our detection and 3D output quality,” said Gastao de Figueiredo, Blackshark senior VP of strategic partnerships. The partnership will also leverage Maxar’s sales channels. “We expect this to be a multiplier for our business,” said Blackshark CEO Michael Putz. The companies declined to share financial details about the latest investment. Last November, Microsoft’s venture fund, M12, and Point72 Ventures invested $20 million in a Series A round, joining a syndicate with i5invest. At that time, Google Earth cofounder Brian McClendon, former Airbus Defence and Space CEO Dirk Hoke and Applied Intuition CEO Qasar Younis also joined as advisors to the board.
How the pandemic is accelerating enterprise open source adoption | VentureBeat (2021)
https://venturebeat.com/2021/01/26/how-the-pandemic-is-accelerating-enterprise-open-source-adoption
Enterprises increasingly shifted to open source software solutions in 2020 to meet their remote organizational needs and address new market demands for quality and speed. COVID-19 challenged not only the economy, but also enterprises’ existing frameworks for how, when, and at what volume people use information technology. Last year, major players like LinkedIn and Spotify open-sourced tools they developed — from Java machine learning libraries to audio file processing ecosystems — for outside IT team members like data scientists and software engineers to use.
Seed-level startups like Eradani and RudderStack that built their products on open source thrived despite launching only a few months before the pandemic began to escalate. Most companies don’t publicize their internal tooling and infrastructure strategy transformations, but GitHub’s data suggests these have much to do with open source solutions. GitHub says 72% of Fortune 50 companies used GitHub Enterprise , which runs GitHub services on their local networks, between Q4 2019 and Q3 2020. GitHub also found over 40% year-over-year growth in open source project creation per active user between late April 2019 and late April 2020. This first spiked with the pandemic in early March, when countries began to close schools, ban visitors, and initiate lockdowns en masse. In an interview with VentureBeat, GitHub VP Mario Rodriguez said, “Open source project creation just kind of shoots up” after March. He added, “2020 is interesting because everything from a technology perspective got accelerated, and you were trying to do more and more.” According to Rodriguez, “2020 also opened up a new set of talent from a software development perspective.” Companies were previously limited by their geography and could only hire within certain cities, like San Francisco or New York. But the transition to remote work has begun to change that. “Now, you do not have those restrictions … and the majority of software developers out there use open source today. And so you bring it into your enterprise now at an accelerated rate, which allows you to learn and continue to evolve your practices on how you develop software and how you use open source,” he said.
The pandemic disrupted existing tech trends and likely helped amplify the growing movement toward open source software solutions as enterprises’ distributed, remote workforces needed to use more custom applications internally, to innovate their IT quickly with the help of reusable code, and to retain developers by using the tools they prefer. Speed: Accelerating technology with a remote, distributed workforce Internet traffic skyrocketed by over 30% in March , especially to platforms for online learning and telecommuting. Microsoft Teams users set a new record for 2.7 billion meeting minutes in one day, and Microsoft, along with Netflix and YouTube, temporarily reduced video streaming quality and download speeds to cut back on bandwidth consumption. These changes highlighted consumer demands for new digital communication tools and challenged enterprise IT teams to create and manage them — quickly. And enterprise IT teams, now distributed and in some cases fully remote, had to organize their own work around new kinds of applications and develop them in days or weeks, as opposed to months or years. “From an IT perspective, you have to accelerate the [number] of apps that you are creating for your internal use as an enterprise,” Rodriguez said. “So that has actually allowed the enterprises to say, ‘You know what, if we are going to have these constraints, maybe we should start to research and figure out a way to empower more use of open source internally, as well.’ … You’re trying to go faster.” While some enterprise frameworks and custom logic for business applications remain fully proprietary, integrating open source code can be a faster way to develop most software. Developers can import the existing work of thousands of people with a few inputs, making it easier to pull new applications together. For example, companies expedite their training and development of machine learning models with Google’s TensorFlow. 
Now that information is more democratized with open source software, some enterprise leaders suggest it’s hard to compete without it. “I’m a member of the CTO forum, which is a group of 150 CTOs around the globe that talks a lot about tech,” Pinterest head of engineering Jeremy King said in a recent interview with VentureBeat. “And for sure, lifecycles have sped up.” King described how companies used to try out maybe three or four vendors with an open source technology stack at a time for six months. Afterward, the company might figure out which vendor performed best and negotiate a rate. “All that has gone away,” King said. He said if a company knows that a given open technology works, they’ll make a prototype and they’ll have changed the cycle by the next week. “People are more rapidly adopting [open source], and I think it’s just because the tolerance for failure and making mistakes and moving quick has gone up,” he said. He explained that this is also true when it comes to “dealing with the consequences of moving fast in production, which, a year ago was unheard of.” Elephant Ventures CEO Arthur Shectman also commented on the pandemic’s disruption of enterprise IT in an interview with VentureBeat. “The market kind of collapsed around people. In that moment of high volatility or prices and market stress, you reach for data,” Shectman said. “You’re like, ‘These decisions are going to have profound impacts … the net impact a week from now is huge.'” Elephant Ventures is a digital transformation consultancy that helps corporations like Pfizer and American Express build their engineering capabilities. Shectman said he applies a technology readiness framework to his approach, finding ways for clients to create increments of business value with ETL technologies, API tooling patterns, and other strategies for deploying applications and workflows. 
According to Shectman, “People were clamoring for data, and they were clamoring to transform their kind of data ecosystems, very rapidly.” He said that in the past few months, the conversation went from planning out three years of digital transformation retool to looking for immediate answers with a return on investment in 90 days. Shectman noted that waiting to purchase proprietary software could cost his enterprise clients $3 million in some instances. Shectman said proprietary software’s cost and speed of deployment became greater roadblocks for his clients during 2020. “I felt like there was a lot more willingness to instantly adopt open source technologies and start applying them without any additional kind of software purchase cost to get a rapid ROI cycle project,” Shectman added. “Over the last year, if you could demonstrate that you knew how to implement it, you had a pattern that generated success in critical dimensions.” Volume: Innovating with code reuse and existing tools Developers can build applications at greater volumes by reusing open source code instead of starting from scratch. Enterprise IT teams can also integrate open source tools into their existing workflows to manage their data with improved precision. This control is increasingly important, given the pandemic’s overall shift toward digitalization, which increases the amount of data itself. “The number of apps that get created right now are at an all-time high. And then the number of those apps that [use] open source are also at an all-time high. It’s probably because of code reuse and ability to just go from zero to 60% or 70% of what you need to create very, very quickly,” Rodriguez said. These current trends in open source software are a continuation of many enterprises’ strategies. “I think we use a lot of open source technologies, largely due to the fact that our scale often prevents us from using a commercial product,” King said. 
“Pinterest is a 10-year-old company, and we have billions of pins … and that’s not always something you can just buy off the shelf. And so whether it’s logging data, understanding images, searching — all these technologies didn’t exist when we started; they’re getting better and better over time. Even the off-the-shelf cloud providers are getting better. But we’ve made Pinterest very unique to what we’re looking for as a result of us using a lot of open source technology,” he added. Other enterprises have tailored their tech stacks to maximize productivity and output with open source solutions more recently. Even maintaining the same products requires additional speed of deployment when competing in a volatile COVID-19 era market. McKinsey reports that North American businesses accelerated their share of digitized product offerings by 20% between March and October 2020. Open source software might play a role in expediting this process if the bulk of digitized products’ code can be built from an existing repository’s code. “For a long time, the digital economy and the computer revolution were just driven by Moore’s law, with your chips getting better and faster at this crazy growth rate,” Shectman said as he traced his client’s technology solutions from the early 2000s to late 2020. “Now, after you go through the early iterations of your product, you are able to do a certain amount of things with commercially available software, [but] you’re gonna have typically an easier time unwrapping and fixing or customizing the open source stuff.” According to Shectman, understanding and acting upon data has become key to enterprises. “I think it’s the scale of data production. I don’t think anybody really had a great sense of how rapidly data would proliferate in the world.” The market’s space for tools, particularly open source technologies that sieve and curate data, has widened as a result. 
In the past 30 years — and mostly in the last 10 — over 200 companies with open source technology at their core have raised over $10 billion in capital. Startups such as the customer data pipeline tool RudderStack , which provides an open source-based alternative to Segment, have recently capitalized on this change. RudderStack was launched in late 2019, and its 2020 growth mirrors the growth patterns of other early-stage startups, like Prisma and Streamlit , which contribute to GitHub community code and base their business strategies in open source software. RudderStack marketing lead Gavin Johnson said in an interview with VentureBeat, “Being open source makes it easy to deploy RudderStack in your own environment. … Anyone can look through the code and figure out what is being done with their customer data in-product, something you can’t do with closed-source products.” According to Johnson, RudderStack’s open source software reduces the number of tools enterprise engineering teams would have to build for sending and collecting data and allows the teams to modify the platform with custom integrations. He also suggested end-user enterprise IT teams tend to save more money with these alternatives to proprietary platforms. Novelty: Energizing IT teams and long-term growth The Linux Foundation reports that in 2020, 93% of hiring managers found it difficult to recruit employees with open source software skills and 37% of them wanted to hire more skilled IT professionals, despite the pandemic’s economic impact. Rodriguez told VentureBeat that in 2020, “We didn’t see companies shrink in developers. We actually saw them expand in developers.” He believes this growth has to do with enterprises’ goals that now, more than ever, revolve around IT capabilities. “You’re trying to create all this software, which means that you need more developers in order to do it right. 
And if you’re trying to actually have more, [you need] to not only have the best tools for your developers but have the best practices as well.” He said that is another reason he thinks open source accelerated significantly in 2020. The open source community organizes conferences, groups, and companies around making code more accessible. Developers contribute to repositories like Open Source for Good and develop personal projects with open source code. Johnson said that data engineers and data scientists work a lot in open source. He said, “RudderStack’s mission is to help make engineering and data teams be heroes of their organizations. So we needed to build our product in a way they like, which means open source.” Some enterprises are turning to open source tools to bring in skilled technical employees. King said open source attracts and retains engineers. “And the people who are the best in the world want to publish, and they want to work on things that are well known, and they want to contribute back. And so it is definitely a good return on investment as well,” he said. In addition to using open source for internal transformations, enterprises have been contributing open source tools to the community. Open source contributions can improve enterprises’ visibility and relevance, and this business strategy is important now that most, if not all, of their public operations are digital. Red Hat has brought open source technologies to the enterprises that need them since it was founded in 1993. But in 2020, end-user companies like Facebook , LinkedIn , Spotify , and Uber have also begun to open-source their own tools for the public. “The open source movement is pretty well established and reasonably mature at this point, so it’s not a surprise for any large corporation,” Shectman said.
“If there’s a bit of awareness, people can afford to shy away from some of the convenience of a particular vendor-specific thing in order to allow themselves fluidity.” Shectman suggested that businesses face similar constraints with their staffing, technology, and ability to create effective products. He added that he’s seen open source software help businesses remove constraints, including proprietary vendor lock-in. Rodriguez suggested that the advancements open source can provide in regard to speed, volume, and overall business strategy aren’t easily matched. “What everyone now is realizing is you cannot out-innovate, or you cannot out-execute open source,” he said. Will open source continue to grow in 2021 like it did in 2020? Rodriguez thinks it could. “For any company out there, from the most powerful company in the world to a startup, you’re not going to be able to hire [enough] people to build the software of that quality — open source gives you that immediately.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. © 2023 VentureBeat. All rights reserved. "
15,323
2,021
"App security platform provider Pathlock raises $20M | VentureBeat"
"https://venturebeat.com/2021/03/10/app-security-platform-provider-pathlock-raises-20m"
"App security platform provider Pathlock raises $20M Unified access orchestration platform provider Pathlock, formerly Greenlight Technologies, today announced it has secured $20 million in a strategic growth round led by Vertica Capital Partners. Pathlock says the funds will be used to bolster R&D for its products, extending the capabilities of its insider threat prevention platform. According to Markets and Markets, the security orchestration, automation, and response (SOAR) segment is expected to reach $1.68 billion this year, driven by a rise in security breaches and incidents and the rapid development and deployment of cloud-based solutions. Risk Based Security found that data breaches exposed 4.1 billion records in the first half of 2019. That may be why 68% of business leaders in a recent Accenture survey said they feel their cybersecurity risks are increasing. 
Flemington, New Jersey-based Pathlock, which was founded in 2004, says its access orchestration solution surfaces violations and takes action to prevent loss. With Pathlock, enterprises can manage different aspects of access governance — including user provisioning and temporary elevation, ongoing user access reviews, internal control testing, transaction monitoring, and audit preparation. Pathlock continuously monitors transactions across enterprise apps to detect intrusions. It runs provisioning scenarios with “what-if” analysis before providing access and delivers real-time alerts for potential data loss. Moreover, Pathlock can automatically revoke privileges or terminate sessions to block malicious behavior, and its risk methodology is based on financial impact. The platform provides a quantitative risk score, rather than the risk rankings found in some rival solutions. Pathlock claims to have monitored billions of events across its more than 100 customers. “As digital transformation is driving change, enterprises are rapidly adopting new applications, moving to the cloud, and increasing their automation efforts. These digital activities are often happening in the shadow of IT, making them harder to manage and control,” founder and CEO Anand Adya said in a press release. “Our increased focus on zero trust will help modern enterprises conquer insider threats through a unified platform. We look forward to saving our customers time and money while helping them pass their audits with flying colors.” Pathlock has a number of competitors in the identity and access management market. There’s Strata Identity, an identity orchestration platform for multicloud environments that recently raised $11 million. Another is JupiterOne, a cybersecurity management automation startup whose customers include Reddit and Databricks. 
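The distinction the article draws between a quantitative risk score and ordinal risk rankings can be illustrated with a small sketch. Everything below is hypothetical: the field names, likelihood values, and loss figures are invented for illustration and do not describe Pathlock's actual methodology, which is not public here.

```python
# Hypothetical sketch of a dollar-denominated access-risk score.
# Field names and numbers are invented; this is not Pathlock's model.

def quantitative_risk(violations):
    """Expected loss (USD) summed across open access violations."""
    return sum(v["likelihood"] * v["estimated_loss_usd"] for v in violations)

open_violations = [
    {"user": "a.smith", "likelihood": 0.10, "estimated_loss_usd": 250_000},
    {"user": "j.doe", "likelihood": 0.02, "estimated_loss_usd": 1_000_000},
]

print(quantitative_risk(open_violations))  # 45000.0
```

A score denominated in money can be compared directly across business units, which is the advantage the article attributes to quantitative scoring over rank-based scales.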
Verified Market Research expects the segment to be worth $29.79 billion by 2027, growing at a compound annual growth rate of 13.21% from 2020, when it was valued at an estimated $11.82 billion. "
15,324
2,021
"Aqua Security protects containerized apps and infrastructure, raises $135M | VentureBeat"
"https://venturebeat.com/2021/03/10/aqua-security-protects-containerized-apps-and-infrastructure-raises-135m"
"Aqua Security protects containerized apps and infrastructure, raises $135M Aqua Security today announced that it raised $135 million in series E funding at a post-money valuation exceeding $1 billion. The Boston- and Tel Aviv-based company plans to put the proceeds toward broadening its product portfolio and its geographic footprint. Containers, the packages of code and dependencies that run across computing environments, are riding a worldwide wave in popularity. According to a recent study from Nemertes, 45% of enterprises used containers by the end of 2018 compared with 21% in 2017. Meanwhile, Grand View Research reports that by 2025, the industry might reach $8.2 billion. CTO Amir Jerbi and CEO Dror Davidoff, who cofounded Aqua in 2015, aim to corner the cybersecurity portion of the container market. 
Aqua’s container-based app management platform runs on-premises or in the cloud and ensures that containers remain immutable, preventing changes versus their originating images. Despite the increased risk ushered in by the pandemic-related uptick in remote work, containerized software security practices remain uneven in the enterprise, surveys show. According to a 2019 Veritis report, 34% of respondents had no container security strategy or were still in the planning stages. Most saw user-originated misconfigurations, exposed dashboards, and potentially compromised metadata as their biggest challenges. Aqua aims to address this with a console from which multi-team, multi-customer environments can be managed either together or in isolation. The platform supports Active Directory and LDAP for permissions configuration and single sign-on and offers prebuilt integrations for third-party productivity and analytics tools. Vulnerability data, alerts, and audit events are logged in real time. And Aqua integrates with third-party secrets vaults, delivering encrypted tokens and private keys to containers at runtime and loading them in memory so they’re visible only to the containers that require them. Above: Aqua’s cloud dashboard. Aqua’s runtime protection service secures apps predeployment through a command-line interface that orchestrates automated validation testing, leveraging a stream of vendor advisories and other sources. It scans up to thousands of images and nodes for vulnerabilities, embedded secrets, configuration and permission issues, and malware. It affords admins the ability to monitor and control container activity based on custom policies and machine-learned behavioral profiles and to block activities and processes or limit container networking based on app contexts auto-detected by Aqua’s firewall configuration utility. 
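To make the predeployment-scanning idea concrete, here is a minimal, hypothetical CI-style gate. The report structure, severity labels, and policy thresholds are assumptions made up for this sketch; they do not reflect Aqua's actual CLI, report format, or policy engine.

```python
# Hypothetical predeployment gate: block an image whose scan findings
# exceed a policy threshold. Not Aqua's real API; structure is invented.

POLICY = {"max_critical": 0, "max_high": 3}

def may_deploy(scan_report):
    """Return True only if findings stay within the policy limits."""
    counts = {"critical": 0, "high": 0}
    for finding in scan_report["findings"]:
        severity = finding["severity"]
        if severity in counts:
            counts[severity] += 1
    return (counts["critical"] <= POLICY["max_critical"]
            and counts["high"] <= POLICY["max_high"])

report = {
    "image": "registry.example.com/api:1.4.2",
    "findings": [
        {"id": "CVE-2021-0001", "severity": "high"},
        {"id": "CVE-2021-0002", "severity": "critical"},
    ],
}

print(may_deploy(report))  # False: one critical finding breaches the policy
```

Gating on scan output at build time, rather than only monitoring at runtime, is the general pattern the article describes for securing apps before they reach production.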
Aqua’s competitors include Sysdig and Tigera, which recently raised $68.5 million and $30 million, respectively. There’s also Docker, the San Francisco startup behind the ubiquitous container toolkit of the same name. Among others are Twistlock, Capsule8, NeuVector, StackRox, Layered Insight, and Tenable. But Aqua’s $265 million war chest sets it apart from many of the rest, as does its impressive customer momentum. In 2020, the company doubled the number of paying clients and claims it now protects several of the world’s largest container production environments. Aqua also says that as of March 2021, it has half a dozen customers with an annual recurring revenue of over $1 million. New product launches are responsible in part for the growth. This past year, Aqua launched Aqua Wave, a software-as-a-service offering that works to secure apps as they’re built and the infrastructure they’re deployed on. The company also launched Aqua Enterprise, a version of Aqua’s enterprise offering that runs on top of its existing self-hosted version, with added capabilities for securing workloads at runtime. ION Crossover Partners led Aqua’s series E funding round announced today with participation from existing investors M12 Ventures (Microsoft’s venture arm), Lightspeed Venture Partners, Insight Partners, TLV Partners, Greenspring Associates, and Acrew Capital. Aqua previously raised $30 million in a series D round closed in May 2020. 
"
15,325
2,021
"Safe Security raises $33M to manage and mitigate cyber risk | VentureBeat"
"https://venturebeat.com/2021/07/21/safe-security-raises-33m-to-manage-and-mitigate-cyber-risk"
"Safe Security raises $33M to manage and mitigate cyber risk Safe Security, which provides a platform to measure cyber risk, today announced that it raised $33 million in a strategic investment led by BT Group, the U.K.-based telecom provider. As a part of the investment, BT will be granted the exclusive rights to use and sell Safe Security products to organizations in the U.K. as it incorporates the platform into its wider portfolio. BT will also work with Safe Security to develop new products and with Safe’s customers to improve their cybersecurity postures, according to Safe Security CEO Saket Modi. With the frequency of large-scale data breaches increasing — from 662 in 2010 to over 1,000 by 2020 — businesses are looking for ways to assess how vulnerable they might be. 
The cost of cyber crime is estimated to have reached just under $1 trillion in 2020 as criminals exploited the pandemic to target enterprises. Safe Security, which is headquartered in Palo Alto, California, seeks to leverage AI to help organizations mitigate cyber risk in real time. It uses a scoring model that was built as a joint research project at MIT — one that runs cybersecurity sensor data, external threat intelligence, and business data through an AI-powered engine to generate scores and the dollar value risk that an organization faces. The scores are calculated both at a macro and micro level and can be measured for particular lines of business as well as departments, Modi says. “Safe Security was incubated from IIT Bombay in 2012 as a cybersecurity services company with my two other cofounders, Rahul Tyagi and Vidit Baxi,” Modi told VentureBeat via email. “We offered various cybersecurity services such as red teaming, vulnerability assessment, penetration testing and boardroom training and more to Fortune 500 companies [and governments] globally … In early 2020, we launched our cybersecurity & digital business risk quantification platform, Safe, and pioneered a new category of products in cybersecurity [that] brings a unique way to proactively manage, measure, and mitigate cyber risks. This enables security and risk management leaders to not only make cybersecurity an informed business decision, but also help them communicate more effectively with all stakeholders.” Calculating risk Modi asserts that while cyberattacks have evolved over the years, cybersecurity remains an opaque concept for most senior business leaders. Organizations often invest in products such as endpoint detection and response, antivirus, firewalls, and more without knowing the “before and after” impact of their breach likelihood, he says. 
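As a toy illustration of the kind of aggregation described above (a single breach-likelihood score on a 0-to-5 scale derived from multiple threat vectors), consider the sketch below. The weights and vector names are invented and bear no relation to Safe Security's proprietary model, which the article says is trained on insurance-claim data in collaboration with MIT and IIT Bombay.

```python
# Toy 0-to-5 breach-likelihood score: a weighted average over threat
# vectors. Weights and vector names are invented for illustration only.

WEIGHTS = {"people": 0.3, "process": 0.2, "technology": 0.3, "third_party": 0.2}

def breach_likelihood(vector_scores):
    """Weighted average of per-vector scores, each on a 0-5 scale."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must sum to 1
    return round(sum(WEIGHTS[k] * vector_scores[k] for k in WEIGHTS), 2)

score = breach_likelihood(
    {"people": 2.0, "process": 3.5, "technology": 4.0, "third_party": 1.5}
)
print(score)  # 2.8
```

A macro score like this could then be recomputed per department or business unit to give the micro-level views the article mentions.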
“Security and risk management leaders continue to evaluate cybersecurity through jargonized subjective measures and keep adding cybersecurity products to reactively respond to cyberattacks, rather than proactively defend them,” Modi said. “By contrast, the Safe platform provides a current and historic assessment of multiple threat vectors, including people, processes, technologies, and third parties — which is then quantified with a ‘breach likelihood score’ between 0 and 5 … The scoring algorithm is trained on data from cyber insurance claims and hack analyses from its research and analytics team, in collaboration with MIT and IIT Bombay.” Safe has rivals in startups including VisibleRisk and Exabeam, as well as Viso Trust, SecurityScorecard, and RiskLens. But Modi says that the company’s revenue grew by 270% in the last year and is expected to grow “sixfold” over the next 12 months, fueled by clients investing in digital business risk quantification, third-party risk management, and insider threat analysis. “The pandemic has significantly accelerated digitization across businesses globally and as organizations transform to a digitally native setup, cybersecurity becomes the number one priority,” Modi continued. “[For example, a] Fortune 50 fast-moving consumer goods company uses our platform to manage its third-party risks across suppliers and distributors where they combine the insights of questionnaire-based assessments with outside-in assessments and inside-out assessments to get a unified, real-time risk posture for all critical third parties in their environment. A Fortune 250 bank uses our platform to get a real-time cyber risk posture of its critical business units that contribute the most to its revenue. [And] one of the top five health care providers in the U.S. 
uses our platform to integrate all signals in regards to its insider threats such as phishing campaigns, device security, cybersecurity awareness campaigns, deep and dark web exposures, email gateway security and more to get a unified, real-time view of the breach likelihood of its 15,000 employees.” Modi says that the proceeds from the funding round will be used to grow Safe’s U.S. revenue and triple the company’s spend on R&D. Beyond this, Safe plans to double its engineering team to over 200 people and grow its total headcount to over 300 by the end of 2021. "
15,326
2,021
"Cyberattacks are getting worse, but most people aren't taking basic security steps | VentureBeat"
"https://venturebeat.com/2021/10/04/report-cyberattacks-are-getting-worse-but-most-people-arent-taking-basic-security-steps"
"Cyberattacks are getting worse, but most people aren’t taking basic security steps Data breaches and ransomware attacks are worsening, but most people aren’t taking simple steps to protect themselves, according to a report released today that polled 2,000 individuals across the U.S. and U.K. on cybersecurity attitudes and behaviors. The report was conducted by National Cybersecurity Alliance, a nonprofit security advocacy group, and CybSafe, a behavioral security and data analytics company. The report documents a clear disconnect between IT professionals in the technology industry and the public when it comes to driving adoption of cybersecurity best practices. Public response and implementation of commonly known best practices — including strong passwords, multi-factor authentication (MFA), and others — are tepid at best, the report found. 
Findings on best practices include: Poor password hygiene: Less than half (46%) of respondents say they use a different password for important online accounts, with 20% saying they “never” or “rarely” do so. Additionally, only 43% said they create a long and unique password either “always” or “very often.” Multi-factor authentication remains a mystery: Nearly half (48%) of respondents say they have “never heard of MFA.” Software update installation lagging: Nearly a third (31%) of respondents say they either “sometimes,” “rarely,” or “never” install software updates. “There is overwhelming proof that simple best practices such as strong passwords, MFA, and regularly installing updates can work wonders for boosting overall cybersecurity,” said CybSafe CEO and founder Oz Alashe. In order to reverse the trend of people failing to take these steps, IT professionals need to take a more human-centric view when devising security solutions, the report concluded. Other findings of the report include the following: 25% of millennials and 24% of Gen Zers said they had their identities stolen once, as opposed to only 14% of baby boomers; 34% of individuals have personally been a victim of a cyber breach; and 64% of respondents have no access to cybersecurity training, while 27% of those who do have access choose not to use it. Cybercrime considered more common among millennials and Gen Z Millennials (44%) and Gen Z (51%) are more likely to say they have experienced a cyber threat than baby boomers (21%), the report found. Additionally, 25% of millennials and 24% of Gen Zers said they’d had their identities stolen once, as opposed to only 14% of baby boomers. In fact, 79% of baby boomers said they had never been a victim of cybercrime, according to the report. 
“Despite the myth that older individuals are more likely to be susceptible to cybercriminals and their tactics, our research has uncovered that younger generations are far more likely to recognize that they have been a victim of cybercrime,” said NCA interim executive director Lisa Plaggemier. “This is a stark reminder for the technology industry that we cannot take cybersecurity awareness for granted among any demographic and need to focus on the nuances of each different group. And clearly we need to rethink perceptions that younger individuals are more tech-savvy and engage more frequently in cybersecurity best practices than older technology users.” Reporting challenges undermine cybersecurity Of those who were a victim of cybercrime, 61% said they did not report the incident. Only 22% of respondents said they “always” reported a phishing attempt — one of the leading threat types deployed by cybercriminals. “The technology industry relies on reporting as one of the key pillars in identifying and stopping bad actors, yet even those impacted directly by cybercrime routinely fail to notify the appropriate parties that an incident has occurred,” said CybSafe’s Alashe. “In day-to-day life, it is second nature for individuals to report a crime if they see one; however, this behavior isn’t being replicated with cybercrime. It’s crucial that cybersecurity professionals get to the bottom of why this is the case, as raising reporting rates among people will be pivotal in freeing up time for cyber professionals, helping them to prioritize threats and adjust their strategies.” 
"
15,327
2,021
"Gartner prescribes a human-centric, hybrid-focus for the future of work | VentureBeat"
"https://venturebeat.com/2021/10/19/gartner-prescribes-a-human-centric-hybrid-focus-for-the-future-of-work"
"Gartner prescribes a human-centric, hybrid-focus for the future of work Early in his presentation on recommendations for the future of work, Gartner distinguished VP and analyst Graham Waller highlighted the need to rethink “archaic” assumptions like the idea of the traditional 9 to 5 workday. Waller noted this traditional structure of work dates back to early factories that depended on utilizing the natural light coming in the windows to do their work. Today, we have plenty of lightbulbs to make that practice irrelevant, just as we have remote collaboration technologies that make the assumption that in-office work is the most productive obsolete, Waller pointed out at the Gartner Symposium/ITxpo. “There has never been a better time than right now to shatter some of these industrial-era assumptions,” he said. 
The future of work is hybrid-focused The conference itself, traditionally a major tech gathering based in Orlando, was virtual once again this year due to resurgent concerns about COVID-19. Changes in where and how we work have been a major theme of this year’s event, with “the future of work is hybrid” as a common refrain. But even more than focusing on the mechanics of how we conduct meetings – in person or by videoconference – organizations ought to be thinking “human-centric, not location-centric,” Waller said. Organizations should be concerned with retaining their best talent, given the “great resignation” currently underway nationwide. Workers who have gotten a taste of greater flexibility and autonomy are likely to leave an organization that forces them to give it up, Waller explained. Nowhere is this truer than in IT organizations, where 66% of workers say flexible work is a factor in whether they will stay with an organization and only 29% express a high “intent to stay,” according to recent Gartner surveys. Driving strategy with human-centric data Planning for how work is done should be driven by data, rather than gut instinct, Waller said. Clients often come to him asking for advice on how to enforce, for example, their latest mandate that everyone ought to be in the office three days a week. But when he asks what data led them to settle on three days as the magic number, they tend not to have a good answer. Rather than setting arbitrary rules, organizations should look at maximizing flexibility for workers and teams within the context of the work to be done, Waller said. For example, an agile software development team working within the framework of two-week sprints has many days when programmers can be working autonomously and asynchronously on heads-down coding. 
It’s at specific milestones like sprint retrospectives – where they assess what went well and what can be improved – that meeting synchronously and ideally in person is most valuable. So an arbitrary requirement that employees be in the office on certain days would be counterproductive if it is out of sync with the rhythm of the work being executed. Done right, a hybrid work program reduces fatigue by 44%, improves intent to stay by 45%, and improves performance by 28%, according to Gartner. For company leaders looking to create a hybrid work policy at their own companies… What to avoid: making decisions based on gut instinct rather than data; assuming innovation can only happen in the office, around the water cooler; and one-size-fits-all rules, regardless of the underlying work. What to do: craft a human-centric design; flex to fit the underlying work; and apply intentional planning to shape collaboration. "
15,328
2,021
"‘Tis the season for cyberattacks: 3 tips for protecting your business | VentureBeat"
"https://venturebeat.com/2021/11/26/tis-the-season-for-cyberattacks-3-tips-for-protecting-your-business"
"‘Tis the season for cyberattacks: 3 tips for protecting your business This article was contributed by Johanna Baum, CEO and founder of S3. Cyberattacks, or the threat of an attack, might be top of mind during the festive season as consumers’ online buying increases, but in truth, every day is a holiday for cybercriminals. The lack of ongoing investment in cyber hygiene and landscape readiness creates weaknesses that bad actors know how to exploit. Yet, busier times, like the hustle and bustle of the holiday season, bring these problems more to the forefront, while they are constantly lingering just below the surface. In fact, in 2021 alone, the U.S. saw a surge of ransomware attacks during notable holidays like Mother’s Day, Memorial Day, and Independence Day. Why is this the case? 
Businesses are closed during many of these holidays, leaving networks unsupervised and data exposed and creating a season for cyberattacks. It's safe to say that when our guards are down, cybercriminals are on duty. Although I'm certainly a holiday shopping procrastinator and pay my fair share in rush shipping fees, being a cyber defense procrastinator carries a far more significant price tag for an organization. While a typical breach, on average, costs around $4 million, Target's infamous holiday season data breach cost the corporation $300 million — a number that is far from jolly. The season is filled with additional spending, both from consumers and from organizations that are victims of cybercrime. With this in mind, companies must commit to true security year-round, which requires ongoing vigilance and continual investments in both time and resources. The discussion around prioritizing preventative or defensive spending is tough when the risk isn't directly felt. But, to be successful, continuous focus on cyberhealth, months before the holiday season, is essential. As organizations anticipate another year filled with increased holiday digital traffic, they must prepare for the known holiday crime influx. This holiday season alone, online fraud is expected to spike 60%. Credit card theft will exponentially increase, continued supply chain issues will create excess noise, and ransomware attacks will certainly be on the rise — and consumers will have less tolerance for all of the above. So the question remains: how can your IT department broker a winning holiday season against an angry mob of shoppers and an army of bad actors? Here are three tips to help protect your business this holiday season and beyond. 
Be prepared One study showed that while 89% of organizations say they experienced a ransomware attack during a holiday in 2021, 36% say they have no contingency plan in place to respond. Preparing the organization for an attack or a period of increased risk must occur long before the risk rises to a heightened level of concern. Implementing tools, modifying policies and procedures, improving response times and monitoring, and developing response plans all require time to lay the groundwork for a successful cyberdefense. Every user needs to understand their role in cyberdefense. Education and awareness measures take time and cannot be prioritized only as the busy holiday season approaches. Without these foundational elements in place, this season of cyberattacks will feel much like playing whack-a-mole with a wet noodle: not very successful. Preparation is the first step to success. Ensuring the organization is educated on initiatives and has a tactical short- and long-term plan creates a visible roadmap for execution. It eliminates our wet-noodle whack-a-mole scenario and provides an organizational playbook for success. Without a plan, a cohesive strategy is difficult to come by, and it becomes exponentially more difficult to launch countermeasures for protection. A solid playbook with prioritized improvements provides clarity around existing risk, mitigating controls, and a schedule for remediation. Be responsive No one yearns for a delayed response, especially when valuable information is at risk. The ability to rapidly address issues, respond to incidents, and actively deploy solutions and procedures to support operations is critical to establishing a strong cyber posture, especially during a season when cyberattacks are on the rise. When a company is silent after a data breach, dragging its feet, it portrays a lack of priority or care for its constituents, causing consumers to feel their information is not as important as their money. 
This can result in massive backlash from consumers, leading to much more than monetary loss: diminished loyalty among customers and a damaged organizational reputation. By responding quickly and with intent, you reduce exposure to risk, contain damage, and instill confidence in the program. Be transparent Transparency can cure a lot of woes. As Brené Brown says, “clear is kind.” On average, it takes a business 279 days to identify and contain a breach – more than three-quarters of a year before stakeholders learn the details of the cyberattack. Ensuring that your constituents, both internal and external, have the information they need to understand their own risk is critical. A well-informed community of stakeholders builds confidence in your organization, while a lack of transparency fosters discomfort and a sense of dishonesty. In the social media age, it greatly increases the likelihood of a social attack, potential hacktivism, or a good old-fashioned transition of buying power. The gift that keeps on giving The gift of a solid IT team and a cyber-aware organization should be greatly appreciated, as it can be utilized year-round, and it's crucial to show your appreciation to the team that defends your house. The McCallisters' home was safely secured by the crafty eight-year-old played by Macaulay Culkin in Home Alone. Without him, the holiday season and beyond would have been assuredly far less enjoyable, and Marv and Harry would have made out like bandits. As an organization or a consumer, you should be constantly cyber-vigilant. During the holidays, when pressure and transactional volumes increase, that focus should be your utmost priority. However, it's even more important to be prepared in the first half of the year, when the risk is less prevalent, giving you quiet time to develop a thoughtful plan that truly protects the data of the company and the consumer. 
This holiday shopping season will either highlight your preparedness or your deficiencies. As we dive into the most active season for cyberthreats, be ready to respond, do so with authority, and protect your organization and the high-value assets – your consumers. Honesty and transparency, both inside and outside the organization, create a culture of support and loyalty, so don't forget to appreciate the team that holds the keys to protecting your kingdom and ensure they know their value."
15,329
2,021
"Fugue: 36% of organizations have suffered a serious cloud leak or breach in the last year | VentureBeat"
"https://venturebeat.com/2021/07/27/fugue-36-percent-orgs-suffered-serious-cloud-breach-in-last-year"
"Fugue: 36% of organizations have suffered a serious cloud leak or breach in the last year A survey of 300 cloud engineering professionals found that 36% of organizations suffered a serious cloud security data leak or a breach in the past 12 months. Eight out of ten are concerned they're vulnerable to a major cloud data breach, and 64% say the problem will get worse or remain the same over the next year. The findings are part of The State of Cloud Security 2021 report, produced by Fugue and Sonatype. Above: The Fugue and Sonatype State of Cloud Security 2021 report found that 36% of orgs suffered a breach in the last year As the scale of cloud environments grows, cloud teams say that risks — and the challenges and costs of addressing them — are increasing. 
The primary causes of cloud misconfiguration cited were too many APIs and interfaces to govern (32%), a lack of controls and oversight (31%), a lack of policy awareness (27%), and team negligence (32%). 21% are not checking Infrastructure as Code (IaC) prior to deployment, and 20% are not adequately monitoring their cloud environment. Cloud security teams tasked with preventing and eliminating cloud misconfiguration vulnerabilities are struggling with many familiar security issues, including false positives (cited by 27%), alert fatigue (21%), and human error (38%). 36% are finding it difficult to hire and retain cloud security professionals, and 35% cite challenges with training. Half say their teams are investing 50+ engineering hours per week in IaC security, and a similar investment is going to cloud runtime security. When asked what they need to more effectively manage cloud security, 96% say having one set of policies that works for both IaC and the cloud runtime would be valuable. 47% say they need better visibility into their cloud environment, and 43% say that better compliance auditing and reporting automation would help. Fugue partnered with Sonatype to survey 300 DevOps, cloud, and security engineers on cloud security risks, challenges, and organizational impact. The online survey was conducted by Propeller Insights. Read the full report by Fugue and Sonatype here. 
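The "one set of policies that works for both IaC and the cloud runtime" that 96% of respondents asked for can be made concrete with a minimal sketch: a single policy function applied to resource definitions, whether those come from a parsed IaC template before deployment or from a live-environment inventory afterward. This is not a Fugue or Sonatype API; the resource schema and the "no public buckets" rule below are invented for illustration.

```python
# Minimal sketch of a unified cloud-security policy check.
# The resource dict schema and the single rule here are illustrative assumptions.

def public_bucket_violations(resources):
    """Return names of storage buckets whose ACL allows public reads."""
    return [
        r["name"]
        for r in resources
        if r.get("type") == "storage_bucket" and r.get("acl") == "public-read"
    ]

# The same check can run pre-deployment (against parsed IaC) and
# post-deployment (against an inventory pulled from cloud APIs).
iac_resources = [
    {"type": "storage_bucket", "name": "logs", "acl": "private"},
    {"type": "storage_bucket", "name": "assets", "acl": "public-read"},
]

print(public_bucket_violations(iac_resources))  # expect: ['assets']
```

Running one policy set at both stages is what closes the gap the survey highlights, where 21% never check IaC before deployment and 20% never monitor the runtime.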
"
15,330
2,021
"Amazon launches AWS RoboRunner to support robotics apps | VentureBeat"
"https://venturebeat.com/2021/11/28/amazon-launches-aws-roborunner-to-support-robotics-apps"
"Amazon launches AWS RoboRunner to support robotics apps At a keynote during its Amazon Web Services (AWS) re:Invent 2021 conference today, Amazon launched AWS IoT RoboRunner, a new robotics service designed to make it easier for enterprises to build and deploy apps that enable fleets of robots to work together. Alongside IoT RoboRunner, Amazon announced the AWS Robotics Startup Accelerator, an incubator program in collaboration with nonprofit MassRobotics to tackle challenges in automation, robotics, and industrial internet of things (IoT) technologies. The adoption of robotics — and automation more broadly — in enterprises has accelerated as the pandemic prompts digital transformations. 
A recent report from Automation World found that the bulk of companies that embraced robotics in the past year did so to decrease labor costs, increase capacity, and navigate a lack of available workers. The same survey found that 44.9% of companies now consider the robots in their assembly and manufacturing facilities to be an integral part of daily operations. Amazon — a heavy investor in robotics itself — hasn't been shy about its intent to capture a larger part of a robotics software market that is anticipated to be worth over $7.52 billion by 2022. In 2018, the company unveiled AWS RoboMaker, a product to assist developers with deploying robotics applications with AI and machine learning capabilities. And Amazon earlier this year rolled out SageMaker Reinforcement Learning Kubeflow Components, a toolkit supporting the RoboMaker service for orchestrating robotics workflows. IoT RoboRunner IoT RoboRunner, currently in preview, builds on the technology already in use at Amazon warehouses for robotics management. It allows AWS customers to connect robots and existing automation software to orchestrate work across operations, combining data from each type of robot in a fleet and standardizing data types like facility, location, and robotic task data in a central repository. The goal of IoT RoboRunner is to simplify the process of building management apps for fleets of robots, according to Amazon. As enterprises increasingly rely on robotics to automate their operations, they're choosing different types of robots, making it more difficult to organize their robots efficiently. Each robot vendor and work management system has its own, often incompatible control software, data format, and data repository. 
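The standardization step described here can be sketched in a few lines: a thin translation layer that maps each vendor's incompatible payload into one common schema, roughly the role RoboRunner's central repository plays. The vendor names ("vendor_a", "vendor_b") and payload fields below are invented for illustration; this is not the RoboRunner API.

```python
# Sketch: normalize heterogeneous robot-vendor payloads into one fleet schema.
# Vendor names and field names are illustrative assumptions, not real products.

def normalize(vendor, payload):
    """Map a vendor-specific status payload to a common fleet schema."""
    if vendor == "vendor_a":
        return {"robot_id": payload["id"],
                "location": payload["pos"],
                "task": payload["job"]}
    if vendor == "vendor_b":
        return {"robot_id": payload["serial"],
                "location": (payload["x"], payload["y"]),
                "task": payload["current_task"]}
    raise ValueError(f"unknown vendor: {vendor}")

# Two robots from different vendors end up in one queryable repository.
fleet = [
    normalize("vendor_a", {"id": "A1", "pos": (3, 4), "job": "pick"}),
    normalize("vendor_b", {"serial": "R7", "x": 0, "y": 9, "current_task": "charge"}),
]
print([r["robot_id"] for r in fleet])  # expect: ['A1', 'R7']
```

Once every robot reports through one schema, apps like work allocation or a facility dashboard can be written once instead of once per vendor, which is the pitch behind a centralized fleet service.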
And when a new robot is added to a fleet, programming is required to connect the control software to work management systems and program the logic for management apps. Developers can use IoT RoboRunner to access the data required to build robotics management apps and leverage prebuilt software libraries to create apps for tasks like work allocation. Beyond this, IoT RoboRunner can be used to deliver metrics and KPIs via APIs to administrative dashboards. IoT RoboRunner competes with robotics management systems from Freedom Robotics, Exotec, and others. But Amazon makes the case that IoT RoboRunner's integration with AWS — including services like SageMaker, Greengrass, and SiteWise — gives it an advantage over rivals on the market. “Using AWS IoT RoboRunner, robotics developers no longer need to manage robots in silos and can more effectively automate tasks across a facility with centralized control,” Amazon wrote in a blog post. “As we look to the future, we see more companies adding more robots of more types. Harnessing the power of all those robots is complex, but we are dedicated to helping enterprises get the full value of their automation by making it easier to optimize robots through a single system view.” AWS Robotics Startup Accelerator Amazon also announced the Robotics Startup Accelerator, which the company says will foster robotics companies by providing them with resources to develop, prototype, test, and commercialize their products and services. “Combined with the technical resources and network that AWS provides, the strategic collaboration will help robotics startups and the industry overall to experiment and innovate, while connecting startups and their technologies with the AWS customer base,” Amazon wrote in a blog post. Startups accepted into the Robotics Startup Accelerator program will consult with AWS and MassRobotics experts on business models and with AWS robotics engineers for technical assistance. 
Benefits include hands-on training on AWS robotics solutions and up to $10,000 in promotional credits to use AWS IoT, robotics, and machine learning services. Startups will also receive business development and investment guidance from MassRobotics and co-marketing opportunities with AWS via blogs and case studies. Robotics startups — particularly in industrial robotics — have attracted the eye of venture capitalists as the trend toward automation continues. From March 2020 to March 2021, venture firms poured $6.3 billion into robotics companies, up nearly 50% from March 2019 to March 2020, according to data from PitchBook. Over the longer term, robotics investments have climbed more than fivefold over the past five years, to $5.4 billion in 2020 from $1 billion in 2015. “Looking ahead, the expectations of robotics suppliers are bullish, with many believing that with the elections over and increased availability of COVID-19 vaccines on the horizon, much demand will return in industries where market skittishness has slowed robotic adoption,” Automation World wrote in its report. “Meanwhile, those industries already seeing an uptick are expected to plough ahead at an even faster pace.”"
15,331
2,021
"Amazon brings automated secrets detection to CodeGuru | VentureBeat"
"https://venturebeat.com/2021/11/29/amazon-brings-automated-secrets-detection-to-codeguru"
"Amazon brings automated secrets detection to CodeGuru Amazon is rolling out a new machine learning-powered “secrets detection” feature that automatically finds confidential system credentials that might be hidden in source code. Secrets Detector, as the new feature is called, is part of Amazon's AI-powered code review service called CodeGuru, which the internet giant launched for developers last year. CodeGuru is all about helping developers improve the quality of their code by checking logic, syntax, and style before fresh code is committed to an existing codebase. 
There are two parts to the tool — CodeGuru Profiler, which focuses on fixing inefficient code that might cause an app to lag or drive up compute costs; and CodeGuru Reviewer, which uses machine learning techniques to find bugs, security vulnerabilities, and other critical issues — and then suggests remedies. The term “secrets” refers to digital credentials — such as passwords, API tokens, certificates, and encryption keys — that companies use for managing access to their critical applications, systems, and infrastructure. Such credentials can inadvertently find their way into the public domain due to developer complacency. Uber, for example, revealed a major breach back in 2017 that exposed millions of its users' personal data — the root cause, apparently, was an AWS access key hackers discovered in a personal GitHub repository belonging to an Uber developer. Recent data from GitGuardian, a cybersecurity platform that helps companies find sensitive data hidden in their code, revealed a 20% increase in secrets found in public GitHub repositories. Secret sauce Amazon's new Secrets Detector is included as part of CodeGuru Reviewer at no additional cost, and it supports most of the APIs from providers such as Amazon's AWS, Twilio, GitHub, Salesforce, Slack, Stripe, Tableau, Atlassian, Databricks, and more. As well as working with all Java and Python code, Secrets Detector can also be used to scan documentation and configuration files, with CodeGuru Reviewer suggesting measures for developers to secure their secrets using Amazon's own AWS Secrets Manager service. 
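Conceptually, a secrets detector scans text for credential-shaped strings. The toy sketch below uses two regular expressions to make the idea concrete; CodeGuru's detector is machine learning-based and far broader, so treat these patterns as invented examples rather than Amazon's rules.

```python
import re

# Toy secrets scanner: regex patterns for common credential shapes.
# Both patterns are illustrative assumptions, not CodeGuru's actual rules.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*=\s*['\"][A-Za-z0-9]{16,}['\"]"),
}

def find_secrets(source: str):
    """Return (kind, matched_text) pairs for credential-like strings."""
    hits = []
    for kind, pattern in PATTERNS.items():
        hits += [(kind, m.group(0)) for m in pattern.finditer(source)]
    return hits

# A fake AWS-style key and a fake generic API key, both invented for the demo.
code = 'session = connect(key="AKIAABCDEFGHIJKLMNOP")\napi_key = "0123456789abcdef0123"'
print([kind for kind, _ in find_secrets(code)])  # expect: ['aws_access_key_id', 'generic_api_key']
```

Real detectors add entropy checks and context (is this a test fixture? a docs page?) to cut false positives, which is where the machine learning in CodeGuru earns its keep over plain regexes.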
Above: AWS Secrets Manager recommendation: Create a new secret Secrets management has emerged as a crucial facet of companies' broader security hygiene, opening the door for dedicated third-party services to flourish — earlier this year, password-management giant 1Password revealed that it was expanding into secrets management when it acquired Dutch company SecretHub. A slew of younger companies has emerged on the scene too, such as Spectral, which exited stealth this year with $6.2 million to find costly security mistakes buried in code; Doppler, which expanded its secrets manager to the enterprise with $6.5 million in funding; and Akeyless, which raised a $14 million series A round. While secrets management can involve different tools and processes, the goal is ultimately the same across the board — to protect companies' internal systems from being infiltrated by bad actors. And that means automating the process of spotting secrets in public codebases."
15,332
2,019
"5 data privacy startups cashing in on GDPR | VentureBeat"
"https://venturebeat.com/2019/07/23/5-data-privacy-startups-cashing-in-on-gdpr"
"5 data privacy startups cashing in on GDPR In a month with two gargantuan fines levied at tech companies for data breaches, the full effects of Europe's General Data Protection Regulation (GDPR) are coming into focus. GDPR, which took effect last May, requires companies to report data breaches to the appropriate European authorities within 72 hours of discovery and stipulates that local data protection agencies across the EU bloc can fine a company up to 4% of its total annual revenue if authorities determine it took insufficient measures to protect data. Until this month, the vast majority of GDPR fines amounted to tens or hundreds of thousands of euros — with the one notable exception of Google, which was hit with a €50 million ($57 million) fine by French data privacy body CNIL back in January. 
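The 4%-of-revenue ceiling is easy to put in concrete terms. The sketch below applies it to a hypothetical company's revenue; note that GDPR also defines fixed euro maximums and a lower fine tier, which this simplified calculation ignores.

```python
# Back-of-the-envelope ceiling for a GDPR fine under the 4%-of-annual-revenue
# rule described above. Simplified: the regulation's fixed euro maximums and
# lower tiers are ignored, and the revenue figure is hypothetical.

def max_gdpr_fine(annual_revenue_eur: float, rate: float = 0.04) -> float:
    return annual_revenue_eur * rate

# Hypothetical company with EUR 2.5 billion in annual revenue:
print(max_gdpr_fine(2_500_000_000))  # expect: 100000000.0
```

At that scale the exposure runs to nine figures, which is why even tens-of-millions fines like Google's sit well below the theoretical maximum regulators could impose.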
Then, a few weeks ago, British Airways (BA) was slapped with a provisional £183.39 million ($230 million) fine over a 2018 security lapse that compromised the personal data of around 500,000 customers, and a day later hotel giant Marriott was hit with a £99 million ($123 million) fine for similar breaches. By contrast, Facebook received a paltry £500,000 ($644,000) fine for the Cambridge Analytica episode — arguably one of the biggest data-harvesting debacles in recent times — because it fell under pre-GDPR regulations. If nothing else, GDPR means companies that work with large pools of customer data now have to treat it with kid gloves. Aside from GDPR, Europe is also weighing a new ePrivacy Regulation, which covers individuals' privacy in relation to electronic communications. Elsewhere, countries and jurisdictions around the world are increasingly adopting their own privacy-focused regulations, with the likes of China and Russia already instilling local data residency requirements for citizens. And the California Consumer Privacy Act (CCPA), designed to enhance the privacy rights of consumers living in the state, will take effect on Jan 1, 2020. Amid all this turmoil, companies are emerging to capitalize on the growing demand for data privacy tools, both for regulatory compliance and consumer peace of mind. In the past month alone, at least five such companies have raised sizable sums of cash for various data privacy, protection, and compliance products. Here's a quick look at the companies and what they do. 1. InCountry Above: InCountry dashboard InCountry touts itself as a “data residency-as-a-service” platform that helps international companies store customer data locally. 
It offers the global infrastructure to store and retrieve data in its country of origin, serving up an API that funnels data between InCountry's local datacenters, which are provided by Amazon's AWS, Microsoft Azure, Google Cloud Platform, and Alibaba Cloud. The InCountry platform is not so much about replacing an application's own data store as it is adding an extra local repository for specific regulated data. For now, InCountry offers a single product called Profile, which enables compliance around user profile and registration data, but plans are in place to expand this to cover payments, transactions, and health data. Above: InCountry: How it works San Francisco-based InCountry officially launched back in May with $7 million in seed funding, and this month the startup followed that up with a $15 million series A round. “We're witnessing more countries signing in data laws each week, and we're only going to see those numbers increase,” noted Sundeep Peechu, managing director at InCountry seed investor Felicis Ventures. 2. OneTrust Atlanta-based OneTrust is a data privacy management compliance platform that, similar to InCountry, was established to help businesses adhere to the growing array of regulations around the world, including GDPR and CCPA. The OneTrust platform includes a template-based self-assessment tool that allows companies to see how close they are to complying with GDPR, Privacy Shield, and other legal frameworks, while “data mapping” helps companies understand how data is flowing through the organization and across borders. Above: Cross-border data-mapping with OneTrust OneTrust also offers various tools for marketers, including cookie compliance, mobile app compliance, and consent management, in addition to risk-management and breach response tools. Earlier this month, OneTrust raised its first round of funding — $200 million at a $1.3 billion valuation, a clear indicator of the growing value being placed on data privacy compliance services. 
“New privacy regulations, like the CCPA and GDPR, are a direct market reaction to consumer demand for improved data privacy protection,” noted Richard Wells, managing director at Insight Partners, which invested in OneTrust's series A round. 3. TrustArc TrustArc, which raised a $70 million round of funding a few weeks back, develops data protection, certification, and compliance products for enterprises — its platform is about helping companies monitor risk around regulations and identify gaps across various regulatory frameworks. Similar to OneTrust, TrustArc can also handle cookie consent preferences for GDPR and facilitate processes for marketing campaigns, including user consent for outbound emails. The San Francisco-based company has actually been around since 1997, when it was founded as a nonprofit called TRUSTe, but it evolved into a VC-backed for-profit company in 2008. It changed its name to TrustArc in 2017 to “reflect its evolution from a privacy certification company into a global provider of technology-powered privacy compliance and risk management solutions,” the company said at the time. The emergence of GDPR played a part in TrustArc's evolution — at the time of its rebrand, the company carried out a survey of “privacy professionals” and found that 83% intended to invest a six-figure sum in GDPR compliance, while one-quarter anticipated spending more than $1 million to meet privacy standards. “The survey data validates the growing market demand TrustArc has experienced for new technology solutions and consulting services to help businesses address global privacy compliance and risk management challenges,” the company said at the time. 4. Privitar London-based Privitar, which raised $40 million last month, helps enterprises engineer privacy protection into their data projects, allowing them to leverage large, sensitive data sets while complying with regulations and ethical data principles. 
“The world is increasingly aware of the importance of protecting private information, and privacy engineering is becoming intrinsic to the way organizations manage and share data,” said Privitar CEO Jason du Preez.

Among the company’s products is Privitar Lens, designed to help companies build “privacy-preserving access to sensitive data sets.” Then there is Privitar Publisher, which offers “privacy engineering” smarts such as data-masking and k-anonymity — helping non-technical users create and manage data protection policies. It can also embed invisible watermarks in protected data to help trace unauthorized data distribution back to the responsible party.

Above: Privitar: Watermarks

Privitar also offers SecureLink, a data-linking system designed to circumvent data silos among organizations and encourage companies to share information securely with each other.

5. BigID

BigID was founded in 2016, just as Europe was finalizing GDPR. The New York-based startup helps enterprises protect customer and employee data, using machine learning to automatically find sensitive data held on internal servers and databases, analyze it, de-risk it, and ensure that organizations are complying with data protection regulations. The platform makes it easier for large companies, which may hold petabytes of customer information, to uncover “dark” or uncatalogued data and correlate it to a specific user identity. The BigID platform also helps companies track cross-border data flows and can generate customized data access reports.

Above: BigID: product collage

BigID raised a $30 million series B round of funding last year. Last week, news emerged that it has now raised an additional $50 million — though the round isn’t closed yet, so the final tally could end up being more.

The bigger picture

Meanwhile, all of the major cloud companies have been investing heavily in local infrastructure for a while.
While this promises lower latency and faster data transfers to attract new customers, another core reason the likes of Amazon, Google, and Microsoft are expanding their datacenter coverage comes down to data sovereignty. The basic idea is that digital data should be subject to the laws of the country in which it is located — that’s why Google is shifting control of European data from the U.S. to Ireland, and it’s why Amazon has opened datacenters in India.

Barely a day goes by without data breaches, regulations, or general privacy issues grabbing the headlines. Yesterday alone, the Federal Trade Commission (FTC) announced that Equifax would pay at least $575 million in a settlement for a 2017 data breach, while over in Asia news emerged that China’s Bytedance, which owns the popular short video app TikTok, would open its first Indian datacenters ahead of new data protection legislation.

The broader data protection market, which covers everything from cybersecurity to disaster recovery and compliance, will reportedly be worth $120 billion by 2023. But the recent flurry of investment in the data privacy and compliance realm, specifically, highlights the shifting regulatory landscape globally. With that comes big opportunities for companies such as InCountry, OneTrust, TrustArc, Privitar, BigID, and countless others that take a product-based approach to managing data privacy.

“Privacy has become a defining 21st-century social and corporate issue, but it’s hard to ensure — even for the most sophisticated companies,” said BigID CEO Dimitri Sirota during the startup’s raise last year. “Managing privacy to date has been based on policies and processes, not product.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.
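The residency pattern these services share can be sketched in a few lines: regulated fields live in a store located in the user's country, while the application's main database keeps only non-regulated fields plus a residency pointer. This is a generic illustration rather than InCountry's actual API; the store names and the field list are hypothetical.

```python
# Hypothetical in-memory stand-ins for country-local stores and the app's own DB.
LOCAL_STORES = {"DE": {}, "RU": {}, "IN": {}}
MAIN_DB = {}

REGULATED_FIELDS = {"name", "email", "phone"}  # illustrative, not a legal list

def save_profile(user_id, country, profile):
    """Regulated fields go to the store in the user's country;
    everything else (plus a residency pointer) stays in the main DB."""
    regulated = {k: v for k, v in profile.items() if k in REGULATED_FIELDS}
    rest = {k: v for k, v in profile.items() if k not in REGULATED_FIELDS}
    LOCAL_STORES[country][user_id] = regulated
    MAIN_DB[user_id] = {**rest, "residency": country}

def load_profile(user_id):
    """Reassemble the full profile by following the residency pointer."""
    record = dict(MAIN_DB[user_id])
    country = record.pop("residency")
    return {**record, **LOCAL_STORES[country][user_id]}

save_profile("u1", "DE", {"name": "Anna", "email": "a@example.com", "plan": "pro"})
print("name" in MAIN_DB["u1"])     # False: the regulated field never enters the main DB
print(load_profile("u1")["name"])  # Anna
```

In a real deployment the dictionaries would be replaced by clients for datacenter-backed stores in each jurisdiction, which is essentially the infrastructure these vendors sell.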
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
Strata Identity raises $11 million to unify identity management in a multi-cloud world | VentureBeat
https://venturebeat.com/2021/02/16/strata-identity-raises-11-million-to-unify-identity-management-in-a-multi-cloud-world
Strata Identity, a self-proclaimed “identity orchestration” platform for multi-cloud environments, has raised $11 million in a series A round of funding led by Menlo Ventures. Founded out of Boulder, Colorado in 2019 by the co-authors of the Security Assertion Markup Language (SAML), Strata is setting out to unify identity and access management (IAM) systems to enable consistent identity management across clouds and on-premises infrastructure.

Multi-cloud

Cloud infrastructure spending is continuing to grow along a hockey stick trajectory, and businesses seem eager to diversify.
Flexera’s 2020 State of the Cloud report found that 93% of enterprises are now using multiple cloud service providers. And over the past year, both Microsoft and Google have shown their willingness to embrace multi-cloud enterprises (AWS still has some way to go). A multi-cloud approach, spanning an array of private and public cloud service providers, holds a number of benefits, such as giving companies flexibility, helping them avoid vendor lock-in, and facilitating adherence to strict governance or regulatory compliance. And some cloud providers are simply better than others at tackling specific tasks. Politics can even come into the equation, with Walmart known to discourage potential partners from using AWS, the cloud division of its fierce retail rival Amazon. Moreover, mergers and acquisitions can lead companies down a multi-cloud path as they grow, and moving a new subsidiary onto a new cloud platform may not always be a priority.

But adopting distributed cloud environments also comes with friction, as there is no easy way to manage everything centrally, and each cloud platform uses a different IAM system — what businesses rely on to manage user (e.g. employees or customers) access privileges to networks, databases, and other applications. Recent Canalys figures show that cloud spending grew by a third to $142 billion last year. And given that the global IAM market was pegged at $12.3 billion, it’s easy to see how creating a unified identity management system across clouds would be an appealing proposition. That is essentially what Strata is setting out to achieve.

Identity orchestration

Strata’s software finds all of a company’s identity systems across clouds and on-premises infrastructure and then migrates all the identities and policies from legacy software to modern identity systems.
It basically “decouples” apps from identity and offers an easy way to get older apps working with a new ID system. Strata’s identity orchestration smarts, marketed as Maverics, can then determine how all the identities and data connect across systems and applications, allowing companies to run their apps on any cloud or identity system and even change their minds at a later point without jumping through hoops. This is what Strata refers to as a distributed identity fabric, one that bridges cloud identity systems and older datacenter systems so they can be managed as a single entity.

“The only way for a company to use cloud identity management with its legacy applications is to rewrite each of them,” Strata cofounder and CEO Eric Olden told VentureBeat. “Meanwhile, identity management systems provided by AWS, Azure, and Google Cloud cannot talk to each other, so they must be managed as separate silos, each with their own user repository. Strata makes it possible to centrally control all identity and access management security policies and enforce them on any cloud or legacy identity management system without making any changes to applications.”

The identity management market is huge, currently pegged at $12 billion and projected to grow to $30 billion by 2027. Notable players in the space include Okta and Ping Identity, both of which are now multi-billion dollar publicly traded companies, and Auth0, which last year raised $120 million at a $1.92 billion valuation. Microsoft also offers an identity management platform called Azure Active Directory. But Strata considers all of these identity management systems partners, rather than the competition. “We help customers keep using these new identity products by making them work together, and with identity systems inside cloud platforms, and be ‘backward’ compatible with existing on-premises apps,” Olden said, adding that Strata doesn’t have any direct competition yet.
“What exists today is a lack of awareness and skepticism that our new distributed approach is even possible,” Olden said. “Customers think they have to rewrite apps to make them work with cloud identity systems and spend millions of dollars and lots of time, a situation that disappears with Strata.”

Above: Strata Identity cofounder and CEO Eric Olden

Track record

Olden has a track record in the identity and access management sphere, having sold web access management company Securant to RSA Security for a reported $137 million back in 2001. He later founded an identity-as-a-service company called Symplified, which was bought by RSA Security in an apparent fire sale in 2014 and subsequently shuttered. Sandwiched between all of this activity, Olden and his Strata cofounders were instrumental in developing SAML, an open standard that enables identity providers and service providers to play ball. In effect, this makes it possible to access multiple websites using a single set of credentials. This is particularly important in enterprise setups, as it means employees don’t have to keep track of different logins to access all their applications.

Strata Identity emerged from stealth last June, with Olden noting that the company has completed “several large, successful proofs-of-concept,” signed a Fortune 50 client, and drawn interest from at least 15 Fortune 500 companies from across the finance, technology, and retail spheres. Last October, Strata was also invited to join the Microsoft Intelligent Security Association (MISA). With $11 million in the bank, Strata said it’s now well-financed to scale its R&D, sales, and marketing efforts. As part of this investment, Menlo Ventures’ partner Venky Ganesan has joined Strata’s board of directors.
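Strata's "decoupling" of apps from identity is, at heart, a broker pattern: applications call one stable interface, and the choice of backing identity system becomes configuration. The toy sketch below illustrates the idea with hypothetical classes; a real deployment would front SAML or OIDC providers, not plaintext password dictionaries.

```python
class IdentityProvider:
    """Minimal interface every backing identity system must satisfy."""
    def authenticate(self, username, password):
        raise NotImplementedError

class LegacyDirectory(IdentityProvider):
    """Stand-in for an on-premises directory (passwords in clear only for brevity)."""
    def __init__(self, users):
        self.users = users
    def authenticate(self, username, password):
        return self.users.get(username) == password

class CloudIdP(IdentityProvider):
    """Stand-in for a cloud identity provider."""
    def __init__(self, users):
        self.users = users
    def authenticate(self, username, password):
        return self.users.get(username) == password

class IdentityBroker:
    """Apps call the broker; which provider answers each realm is configuration."""
    def __init__(self):
        self.providers = {}
    def register(self, realm, provider):
        self.providers[realm] = provider
    def authenticate(self, realm, username, password):
        return self.providers[realm].authenticate(username, password)

broker = IdentityBroker()
broker.register("on-prem", LegacyDirectory({"alice": "s3cret"}))
broker.register("cloud", CloudIdP({"bob": "hunter2"}))

# The app's call site never changes when a realm is migrated to a new provider.
print(broker.authenticate("on-prem", "alice", "s3cret"))  # True
```

Migrating a realm from the legacy directory to the cloud provider then amounts to a single `register` call, which is the "no app rewrites" claim in miniature.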
AI-powered identity access management platform Authomize raises $16M | VentureBeat
https://venturebeat.com/2021/05/13/ai-powered-identity-access-management-platform-authomize-raises-22m
Cloud-based authorization startup Authomize today announced that it raised $16 million in series A funding led by Innovation Endeavors, bringing the startup’s total raised to $22 million to date. CEO and cofounder Dotan Bar Noy says that the capital will be used to support Authomize’s R&D and hiring efforts this year, as expansion ramps up.

One study found that companies consider implementing adequate identity governance and administration (IGA) practices to be among the least urgent tasks when it comes to securing the cloud. That’s despite the fact that, according to LastPass, 82% of IT professionals at small and mid-size businesses say identity challenges and poor practices pose risks to their employers.
Authomize, which emerged from stealth in June 2020, aims to address IGA challenges by delivering a complete view of apps across cloud environments. The company’s platform is designed to reduce the burden on IT teams by providing prescriptive, corrective suggestions and securing identities, revealing the right level of permissions and managing risk to ensure compliance.

“As security has evolved from endpoints and networks, attention has increasingly moved to identity and access management, and specifically the authorization space. Many of the CISOs and CIOs we spoke with expressed the need for a system that would secure and manage permissions from a single platform. They took access decisions based on hunches, not data, and when they tried to take data-driven decisions, they found out that the data was outdated. Additionally, most, if not all, of the process has been manually managed, making the IT and security teams the bottleneck for growth,” Noy told VentureBeat in an interview via email.

Authomize’s secret sauce is a technology called Smart Groups that aggregates data from enterprise systems in real time and infers the right-sized permissions. Using this data in tandem with graph neural networks, unsupervised learning methods, evolutionary systems, and quantum-inspired algorithms, the platform offers action and process automation recommendations.

AI-powered recommendations

Using AI, Authomize detects relationships between identities and company assets throughout an organization’s clouds. The platform offers an inventory of access policies, blocking unintended access with guardrails and alerting on anomalies and risks. In practice, Authomize constructs a set of policies for each identity-asset relationship.
It performs continuous access modeling, self-correcting as it incorporates new inputs like actual usage, activities, and decisions. Of course, Authomize isn’t the only company in the market claiming to automate away IGA. ForgeRock, for instance, recently raised $93.5 million to further develop its products that tap AI and machine learning to streamline activities like approving access requests, performing certifications, and predicting what access should be provisioned to users. But Authomize has the backing of notable investors M12 (Microsoft’s venture fund), Entrée Capital, and Blumberg Capital, along with acting and former CIOs, CISOs, and advisers from Okta, Splunk, ServiceNow, Fidelity, and Rubrik. Several undisclosed partners use the company’s product in production, Authomize claims — including an organization with 5,000 employees that tapped Smart Groups to cut its roughly 50,000 Microsoft Office 365 entitlements by 95%. And annual recurring revenue growth is expected to hit 600% during 2021.

Authomize recently launched an integration with the Microsoft Graph API to provide explainable, prescriptive recommendations for Microsoft services permissions. Via the API, Authomize can evaluate customers’ organization structure and authorization details, including role assignments, group security settings, SharePoint sites, OneDrive files access details, calendar sharing information, applications, and service principal access scopes and settings.

“Our technology is allowing teams to make authorization decisions based on accurate and updated data, and we also automate day-to-day processes to reduce IT burden … Authomize currently secures more than 7 million identities and hundreds of millions of assets, and our solution is deployed across dozens of customers,” Noy said. “Using our proprietary [platform], organizations can now strike a balance between security and IT, ensuring human and machine identity have only the permission they need.
Our technology is built to connect easily to the entire organization stack and help solve the increasing complexity security and IT teams face while reducing the overall operational burden.”

Authomize, which is based in Tel Aviv, Israel, has 22 full-time employees. It expects to have more than 55 by the end of the year as it expands its R&D teams to develop a new entitlement eligibility engine and automation capabilities and increases its sales and marketing operations in North America.
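The core of permission right-sizing, as described above, is to compare the entitlements an identity holds against those it actually uses, then flag the unused remainder for revocation. The sketch below is a generic least-privilege heuristic, not Authomize's Smart Groups algorithm; the entitlement names and the observation window are invented.

```python
def revocation_candidates(granted, used, observation_days, min_days=30):
    """Recommend revoking entitlements that went unused over a long enough window.

    granted / used are sets of entitlement strings; a short observation window
    yields no recommendations, since absence of use is not yet meaningful.
    """
    if observation_days < min_days:
        return set()  # not enough signal to make a safe recommendation
    return granted - used

granted = {"sharepoint:read", "sharepoint:write", "billing:admin", "repo:push"}
used = {"sharepoint:read", "repo:push"}  # from audit logs over the window

print(sorted(revocation_candidates(granted, used, observation_days=90)))
# ['billing:admin', 'sharepoint:write']
```

Real systems add risk weighting and human review on top, but "granted minus used over a sufficient window" is the basic signal behind entitlement reduction figures like the 95% cut cited above.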
Cloud directory service JumpCloud raises $159M as remote work booms | VentureBeat
https://venturebeat.com/2021/09/13/cloud-directory-service-jumpcloud-raises-159m-as-remote-work-booms
Above: JumpCloud: Device security

JumpCloud, a cloud-based directory service used by enterprises such as Splunk, Square, and Matterport, has raised $159 million in a series F round of funding at a $2.56 billion valuation. In the IT sphere, directory services are a core component of network operating systems and are chiefly concerned with mapping, storing, locating, retrieving, and managing items across a network — including devices, user identities, accounts, files and folders, and more. They are part of the broader identity and access management (IAM) sphere, a market pegged at nearly $12 billion in 2019 and projected to more than double within six years. The directory service, specifically, enables companies to connect their workforce with the IT resources needed to execute their jobs.
The market is largely dominated by Microsoft’s Active Directory, given that most endpoints have historically been Windows-based PCs, which remains the case when it comes to on-premises installations.

Remote control

However, several trends have sparked a need for a more flexible directory service, such as the transition to web apps, cloud infrastructure, alternative operating systems in the workplace (e.g. Linux and MacOS), and distributed workforce arrangements that may include employees accessing company systems from any number of locations and device-software configurations. JumpCloud is as much about supporting personal technological preferences as it is about managing access and identities.

“JumpCloud is an open platform that leverages industry standards,” chief product officer Jagadeesh Kunda told VentureBeat. “This matters because as workers demand to use whatever technology they want to be productive, IT has had a choice to say ‘no’ and become a blocker or buy a bunch of different tools to support secure access for the different technologies. JumpCloud provides a better option — a central directory that securely unifies identity and device management.”

Depending on a person’s role or industry, fully remote or hybrid work arrangements will likely be the norm for millions more in a post-COVID world — one JumpCloud is particularly well-positioned to capitalize on. “The pandemic made remote work the norm for companies of all sizes — they all scrambled to keep their staff productive over time wherever they had to work,” Kunda added. “This approach is not going away.
That will mean that more companies will have to manage how their staff access their assets remotely — identity is the key element for this, as it is the only point of consistency, and small businesses will need help here as much as large enterprises with thousands of employees.”

In the cloud

With cloud infrastructure spending going through the roof, a trend accelerated somewhat by the global pandemic, JumpCloud is in a favorable position as a cloud-native solution that sits at the intersection between devices, user identities, and systems access. For enterprises, this means no longer having to manage hardware, patching, or upgrades, with admins able to easily manage Windows, Mac, and Linux devices from a single platform.

Above: JumpCloud: Interactive directory insights

It is worth stressing, though, that JumpCloud is an entirely cloud-focused product — there is no scope for on-premises deployment. This might be a major drawback for some enterprises, but JumpCloud argues that the cloud helps businesses steer clear of “weak onboarding and offboarding procedures” and avoid many of the potential security risks associated with “shadow IT” scenarios in which workers end up storing sensitive data in places that don’t have rigorous security in place.

“The perception is that physical hardware on-prem gives a sense of control,” Kunda said. “Though potentially more customizable, this approach often leads to critical errors through improper management and maintenance of identity and access management infrastructure. Regardless of established routine, cloud-based IAM is more secure, flexible, and often more cost-effective for organizations of any size.”

JumpCloud’s raise also comes amid a boom in activity in the IAM space, with Okta recently completing its $6.5 billion Auth0 acquisition and a slew of newer companies attracting venture capital investment. JumpCloud itself announced a $100 million series E round back in January.
With today’s announcement, the Louisville, Colorado-based company has raised a total of $350 million since its inception in 2012. Its series F backers include a mix of new and old investors, such as Sapphire Ventures, BlackRock, General Atlantic, Owl Rock, H.I.G. Growth Partners, Whale Rock Capital, Sands Capital, and Endeavor Catalyst.
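At its simplest, the directory service described above is a queryable map from user identities, via group membership, to the resources and devices they may use. The following vendor-neutral toy shows the lookup; real systems speak LDAP or a cloud API, and these entries are made up.

```python
# A tiny directory: users belong to groups; groups grant resources.
DIRECTORY = {
    "users": {
        "carol": {"groups": ["engineering"], "devices": ["mac-042"]},
        "dave":  {"groups": ["finance"], "devices": ["win-117"]},
    },
    "groups": {
        "engineering": {"resources": ["vpn", "git"]},
        "finance":     {"resources": ["vpn", "erp"]},
    },
}

def resources_for(username):
    """Resolve a user's effective resources through group membership."""
    user = DIRECTORY["users"][username]
    allowed = set()
    for group in user["groups"]:
        allowed.update(DIRECTORY["groups"][group]["resources"])
    return allowed

def can_access(username, resource):
    return resource in resources_for(username)

print(can_access("carol", "git"))  # True
print(can_access("dave", "git"))   # False
```

Onboarding and offboarding then reduce to adding or removing a user entry, which is why centralizing the directory is pitched as a security win over ad hoc per-app accounts.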
One Identity acquires OneLogin to unify identity security for enterprises | VentureBeat
https://venturebeat.com/2021/10/04/one-identity-acquires-onelogin-to-unify-identity-security-for-enterprises
Above: The reception area at OneLogin's San Francisco office.

One Identity, an identity and access management (IAM) company owned by Quest Software, has acquired rival IAM platform OneLogin. Terms of the deal were not disclosed. Founded out of San Francisco in 2009, OneLogin markets a slew of identity products spanning single sign-on, multi-factor authentication (MFA), user provisioning, and more. The company had raised some $175 million since its inception and claimed a number of big-name customers, including Airbus.

One Identity’s path has been somewhat circuitous. Parent Quest Software launched in 1987, and Dell acquired the Aliso Viejo, California-based company for $2.4 billion in 2012, with a view to extending Dell’s enterprise software focus into systems management, security, data protection, and more.
One Identity was among these products. Dell sold its software business four years later to Francisco Partners and Elliott Management, and One Identity was spun out as its own independent brand in 2017. One Identity offers identity-focused security products that range from privileged access management (PAM) to endpoint privilege management (EPM), identity governance and administration (IGA), log management, active directory management and security (ADMS), and more. Although One Identity already touts its IAM capabilities, bolstering its identity smarts with OneLogin’s established cloud-native IAM offering will help One Identity sell itself as a holistic solution to identity security, spanning PAM, IGA, ADMS, and IAM.

Identity parade

The acquisition follows a flurry of activity in the IAM space, with Okta recently completing its $6.5 billion Auth0 acquisition and Ping Identity snapping up no-code identity and security orchestration platform Singular Key last week. Throw into the mix a slew of VC investments in the burgeoning identity management space, and the appetite for technology that can safeguard employees’ and customers’ digital data is evident. The global IAM market was worth $12.3 billion in 2020, according to some reports, and it’s predicted to roughly double in size within four years. With businesses across the spectrum increasing their cloud IT spend as part of ongoing digital transformation efforts and remote work emerging as the new norm, the need to invest in security is growing. For enterprises, managing the online identities of their workforce is a key part of this transition.
“With the proliferation of human and machine identities, the race to the cloud, and the rise of remote working, identity is quickly becoming the new edge — and protecting identity in an end-to-end manner has never been more important,” One Identity president and general manager Bhagwat Swaroop said in a press release. “By adding OneLogin to our portfolio and incorporating it into our cloud-first unified identity security platform, we can help customers holistically correlate all identities, verify everything before granting access to critical assets, and provide real-time visibility into suspicious login activity.”

While terms of the deal were not disclosed, a Moody’s report from last month indicated Seahawk Holdings Limited, a business entity representing Francisco Partners and an Elliott Management affiliate, was planning to “raise a $330 million fungible add-on” to support the OneLogin acquisition. However, it’s not clear if those funds were for the entirety of the transaction or were part of a larger deal constituting funds from elsewhere.
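Multi-factor authentication, one of the OneLogin products mentioned above, commonly relies on the standardized TOTP algorithm (RFC 6238, built on RFC 4226's HOTP): an HMAC over a time-step counter, truncated to a short numeric code. A minimal standard-library sketch:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 over an 8-byte counter, dynamically truncated."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """TOTP (RFC 6238): HOTP with the counter derived from wall-clock time."""
    return hotp(secret, int(time.time()) // period, digits)

# RFC 4226 Appendix D test vector: this secret at counter 0 yields "755224".
print(hotp(b"12345678901234567890", 0))  # 755224
```

The `totp` value is what an authenticator app displays every 30 seconds; the server computes the same function over the shared secret and compares, usually accepting a step or two of clock skew.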
15,337
2,021
"AI-powered personalization provider Qubit gets acquired by Coveo | VentureBeat"
"https://venturebeat.com/2021/10/15/ai-powered-personalization-provider-qubit-gets-acquired-by-coveo"
"AI-powered personalization provider Qubit gets acquired by Coveo Consumers shop online for their needs AI-powered relevance platform Coveo today announced that it acquired Qubit, a leader in AI-powered personalization technology for merchandising teams. The acquisition expands Coveo’s ability to use AI to power engaging experiences across commerce, service, support, and digital workplace solutions, and also bolsters the company’s geographic expansion into the U.K. and European markets. “Qubit’s IP and expertise comes at a perfect time as enterprises turn to AI-driven solutions to provide highly relevant responses, offers, and recommendations to their customers at scale,” said Louis Têtu, CEO and chairperson of Coveo.
“Adding Qubit’s capabilities to the Coveo Relevance Cloud platform helps us accelerate the delivery of new incremental innovations to further personalize digital commerce experiences.” New options for AI-powered retail solutions The acquisition enables Coveo to provide retailers with innovative, AI-powered ecommerce solutions, allowing them to keep pace with the accelerated digital shift and meet consumer expectations around relevance. Merchandisers will have more tools at their disposal to deploy promotions, test different strategies, and iterate more quickly, thereby driving customer lifetime value from acquisition through to retention. “Our two businesses complement each other perfectly across both technology and expertise and are aligned to deliver a total solution for the benefit of our customers’ success,” said Graham Cooke, Qubit CEO. “We look forward to delivering more business value to our collective customers as we roll out our integrated offering that drives forward the next level of personalization.” The enhanced Coveo Relevance Cloud enables retailers to make use of AI-powered commerce search, which detects shopper intent and determines individual context. The relevance platform supports personalized search, product and content recommendations, guided browsing, and product listing pages, as well as optimization through A/B and multivariate testing and analytics. Retailers will be able to engage their customers across multiple digital touchpoints (e.g., chatbots, mobile applications, and digital marketplaces) through plug-in UI components, headless API, and open commerce support.
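The A/B testing the platform supports comes down to comparing conversion rates between two variants and checking whether the difference is statistically meaningful. A minimal sketch of that comparison (hypothetical traffic numbers; this is a standard two-proportion z-test, not Coveo's actual API):

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-statistic for comparing two conversion rates, using a pooled variance."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: variant B lifts conversions from 5.0% to 6.0%
# on 10,000 sessions per arm.
z = two_proportion_z(conv_a=500, n_a=10_000, conv_b=600, n_b=10_000)
print(f"z = {z:.2f}")  # |z| > 1.96 means significant at the 5% level
```

With these illustrative numbers the lift is comfortably significant (z ≈ 3.1), which is the kind of signal a merchandiser would need before rolling a variant out.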
AI-powered relevance across industries The AI-powered relevance platform has applications across different industries, including beauty and cosmetics, fashion, luxury, home and garden, grocery, travel and tourism, direct-to-consumer, business-to-business, media, and financial services. It supports shared personalization across customer self-service, assisted support, and customer communities throughout the customer lifecycle. Coveo’s SaaS-native, multi-tenant platform provides search, recommendations, and personalization solutions to create digital experiences for ecommerce, service, website, and workplace applications. The company says its solutions drive revenue growth, reduce customer support costs, increase customer satisfaction and digital engagement, and improve employee proficiency and satisfaction. Coveo says its AI powers relevant interactions for hundreds of global brands and is supported by a large network of global system integrators and implementation partners, including Salesforce, ServiceNow, and Zendesk. The company has been named a leader in the Gartner Magic Quadrant for Insight Engines and a leader in The Forrester Wave: Cognitive Search. "
15,338
2,016
"Atlassian acquires hosted status page company StatusPage | VentureBeat"
"https://venturebeat.com/2016/07/14/atlassian-acquires-hosted-status-page-company-statuspage"
"Atlassian acquires hosted status page company StatusPage Coinbase's StatusPage page. Enterprise software company Atlassian today is announcing that it has acquired StatusPage, a company that offers webpages that companies can customize to show the status of their online services. StatusPage has thousands of customers, including Alphabet’s Nest, Coinbase, Disqus, Engine Yard, Linode, Loggly, New Relic, Tesla, UserVoice, and Atlassian itself. The service will remain available for the time being. These days, services are dependent on other services, and figuring out the service that’s causing downtime is challenging. StatusPage can help in that department.
“In a world where a single popular technology service going down could affect thousands of companies and millions of end customers, trust, transparency and up-to-the minute information on service status is an essential part of the cloud fabric,” Atlassian cofounder and co-chief executive Scott Farquhar wrote in a blog post. Adding the sort of functionality that StatusPage offers will help Atlassian further separate itself from other companies that provide source code hosting, like GitHub and GitLab, as well as team collaboration tools like Slack and issue tracking like Trello. StatusPage started in 2013 and is based in Denver, with 15 employees, who will be joining Australia-based Atlassian’s San Francisco office. StatusPage has been profitable since its third month of operations, an Atlassian spokesperson told VentureBeat in an email. Investors include Y Combinator, WeFunder, and FundersClub. Atlassian went public last year. Previous Atlassian acquisitions include HipChat and SourceTree. "
15,339
2,020
"Pandemic drove VC funding for health care to record $80.6 billion in 2020 | VentureBeat"
"https://venturebeat.com/2021/01/20/pandemic-drove-vc-funding-for-health-care-to-record-80-6-billion-in-2020"
"Pandemic drove VC funding for health care to record $80.6 billion in 2020 CB Insights annual State of Healthcare report pointed to record fundraising for healthcare. The global pandemic has proved a catalyst for reshaping health care as venture capitalists reached deep into their pockets to deliver a record level of funding to medical-related startups in 2020. They sprinkled the money across sectors ranging from telemedicine to AI to medical devices in an effort to seize the moment and shake up an industry that has been notoriously resistant to reinvention. According to the latest State of Healthcare report from research firm CB Insights, health care startups raised $80.6 billion in 2020, up from $53.7 billion in 2019.
The report notes that COVID-19 has exposed inefficiencies in current health care systems, in some cases highlighting the necessity of turning to tools like telemedicine in the face of resource shortfalls and the obligation to find alternative means to deliver care when in-person doctor visits became more problematic. The result is a sudden acceleration toward digitizing systems and a desire to find new ways to leverage a surge of patient data. The growing momentum can be seen across the year as funding climbed steadily each quarter. While an aging population has already been putting pressure on health care systems, the pandemic seems to have finally created fertile terrain to allow investors to believe that widespread disruption is within startups’ grasp. CB Insights counted a record 187 health care venture capital rounds that topped $100 million in 2020. This faster pace also helped startups that are applying AI to health care raise $2.3 billion in Q4 2020. This included notable deals such as Human API raising $20 million to standardize health records with AI. "
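For scale, the year-over-year jump implied by the CB Insights figures cited above works out to roughly 50 percent, as a quick check shows:

```python
# Growth implied by the article's figures: $53.7B (2019) -> $80.6B (2020).
funding_2019 = 53.7  # billions of dollars
funding_2020 = 80.6  # billions of dollars

growth = (funding_2020 - funding_2019) / funding_2019
print(f"Year-over-year growth: {growth:.1%}")  # -> 50.1%
```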
15,340
2,020
"Roblox raises $150 million round led by Andreessen Horowitz | VentureBeat"
"https://venturebeat.com/2020/02/26/roblox-raises-150-million-round-led-by-andreessen-horowitz"
"Roblox raises $150 million round led by Andreessen Horowitz Work at a Pizza Place has been played 1.9 billion times on Roblox. Online gaming platform Roblox has raised $150 million in funding in a round led by Andreessen Horowitz’s late stage venture fund. It seems the folks at Andreessen Horowitz have been playing a lot of user-generated games on Roblox, like Work at a Pizza Place or Meep City, each of which has gotten billions of plays on the platform. In the funding round, Roblox is also offering employees and stakeholders an opportunity to participate in a secondary round. Roblox said the funding comes at a time of significant growth as it hits new milestones, including reaching over 115 million monthly active users and more than 1.5 billion hours of monthly engagement.
Roblox will use the funding to further grow its engineering team, and support creators around the world with the technology and infrastructure to build more immersive, collaborative experiences. Additionally, Roblox said it is holding its seventh annual Bloxy awards on March 21, but this year the award show is going to be streamed in a brand new way — live within the platform. Roblox is available on a variety of platforms, including the PC, Mac, consoles, virtual reality platforms like Oculus and HTC, and mobile devices. It’s not clear why Roblox needs to raise the money, as it should be making a lot of money on all those users. But it is a very ambitious company, competing with rivals such as Microsoft’s Minecraft. Roblox has now raised more than $185 million. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. "
15,341
2,020
"Mainframe Industries raises $8.3 million for cloud-native games | VentureBeat"
"https://venturebeat.com/2020/03/19/mainframe-industries-raises-8-3-million-for-cloud-native-games"
"Mainframe Industries raises $8.3 million for cloud-native games This image is from the first cloud-native game being made by Mainframe Industries. Mainframe Industries has raised $8.3 million to develop cloud-native games in a financing led by Andreessen Horowitz, with additional investment from Riot Games. The Finnish-Icelandic gaming studio founded by former CCP, Remedy Entertainment, and Next Games veterans is developing an unannounced cloud-native massively multiplayer online game that is designed and built at Mainframe’s Helsinki and Reykjavik studios. “We are starting from the ground up with no legacy to constrain us, across mobile, PC or console,” said Thor Gunnarsson, the CEO of Mainframe Industries, in an interview with GamesBeat.
“All platforms are equally strong.” Existing investors Maki.vc, Play Ventures, Sisu Game Ventures, and Crowberry Capital all joined the new round of funding. Above: Part of Mainframe’s Helsinki team. In cloud gaming, a game is computed in internet-connected datacenters, and then the results are sent via video to a user’s screen. That enables high-end games to tap the vast computing and memory systems in the cloud and then display the results on a user’s machine, whether that’s a low-end laptop, a desktop gaming PC, a mobile device, or a console display such as a TV. “There are so many different aspects of being cloud-native that we can benefit from,” Gunnarsson said. “The player population and simultaneous player counts is one benefit. But we see other aspects of being able to develop for a single runtime targeted in the cloud, giving us the potential to access faster, greater storage for the game, allowing us to basically create a much more vibrant and detailed world.” Above: Part of Mainframe’s Icelandic team. Cloud gaming allows for seamless crossplay between mobile, PC and console, allowing players to access their game from any device and play against anyone on any machine. Streaming games via the cloud is a lot like what Netflix or Spotify did for film and music. More details about the game will be revealed later, Gunnarsson said. It is an open world, sandbox MMO, and the aim is to build out a persistent online world accessible from any screen. Regarding Andreessen Horowitz, Gunnarsson said, “We were deeply impressed by their take on the games industry and how the industry is developing. We were aligned on the trends that are important.
The thing that clinched it is they have strong operating partners.” Gunnarsson said that his team also spent a lot of time with Riot Games, maker of League of Legends. “The value of that relationship is the ability to gain advice from the right leadership team with unparalleled expertise in live operations and monetization,” he said. Above: A scene from Mainframe Industries’ upcoming MMO. With further details of their MMO to be announced at a future date, Mainframe is now focusing on growing their teams in both Helsinki and Reykjavik. “The Mainframe team is a collection of superstars and the game maxes out ambition on technical and artistic fronts. Having worked with them since the get-go, I know these pioneers will change the way we play games,” said newly named Chairman of the Board Ilkka Kivimäki from Maki.vc. The Mainframe Industries founders are Börkur Einarsson, Kjartan Pierre Emilsson, Thor Gunnarsson, Fridrik Haraldsson, Reynir Hardarson, Sulka Haro, Kristján Valur Jónsson, Jyrki Korpi-Anttila, Saku Lehtinen, Ansu Lönnberg, Eetu Martola, Vigfús Ómarsson, and Jón Helgi Thórarinsson. The company has 20 people today in both Helsinki and Reykjavik. The team is using Epic Games’ Unreal Engine. The game is in pre-production, and work started about 18 months ago. “We have a team that is really complementary, with a number of people from Remedy with story backgrounds and console development, as well as key people from Next Games who have mobile game experience, and MMO people from CCP,” Gunnarsson said. “We felt that the opportunity to create a next-generation MMO that is designed to be more in the cloud-first really requires a team that has the skillset and background for each of these different corners of the industry.” "
15,342
2,021
"Intel promises industry will meet or exceed Moore's law for a decade | VentureBeat"
"https://venturebeat.com/2021/10/27/intel-promises-industry-will-meet-or-exceed-moores-law-for-a-decade"
"Intel promises industry will meet or exceed Moore’s law for a decade Intel CEO Pat Gelsinger (left) speaks with Intel founder Gordon Moore, presented at Intel Innovation on October 27, 2021. Intel CEO Pat Gelsinger promised on Wednesday that the semiconductor industry would meet or beat Moore’s law (the rule of thumb saying that processing power doubles every two years) for the next decade. His remarks are significant because many in the industry have assumed Moore’s law is no longer valid, and that software is destined to make more advances in efficiency than hardware. After it took 12 years to transition from petascale to exascale computing, Gelsinger said he is challenging his team to get to the next order of magnitude, zettascale, within five years.
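To put Gelsinger's claims in numbers: doubling every two years compounds to about a 32x gain over a decade, and each named computing scale (peta to exa to zetta) is a factor of 1,000. A quick sketch of the arithmetic (illustrative only, not an Intel roadmap):

```python
def moores_law_factor(years: float, doubling_period_years: float = 2.0) -> float:
    """Compound growth factor if capability doubles every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

# Meeting Moore's law for the next decade implies roughly a 32x gain.
print(moores_law_factor(10))  # -> 32.0

# Peta -> exa -> zetta: each step is three orders of magnitude.
petaflops, exaflops, zettaflops = 1e15, 1e18, 1e21
print(exaflops / petaflops)   # -> 1000.0
```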
He made the remarks in a brief video interview with Intel cofounder Gordon Moore included in Intel Innovation, an event designed by Intel to rekindle excitement about Intel products in the developer community. Innovating for developers Gelsinger and other company leaders made their pitch to serve the most extreme requirements of hyperscale cloud vendors, while also promising to make advanced capabilities available for all. Even the introductory customer video that played before the keynote included references to Intel “finally” paying attention to developers again. “Clearly, we’ve got some work to do,” Gelsinger said. Meanwhile, the seesaw movement of networks from being centralized around mainframes, then distributed to PCs, then becoming more centralized in the cloud (a trend likely to continue for the next few years) will swing back toward decentralization with edge computing bringing intelligence as close as possible to the user, Gelsinger said. Performance for enterprise More specific announcements included Intel’s partnership with Google on the development of its Mount Evans intelligent processor unit (IPU) and an associated infrastructure programmer development kit for making networking and datacenter infrastructure programmable. And while that might seem a capability designed by a cloud vendor for use by cloud vendors, sophisticated enterprises like financial services firms are already taking advantage of programmable infrastructure technologies like the Intel Tofino 3 fabric processor for network switches and the P4 language for network programming to achieve high performance for applications like high-frequency trading, according to Nick McKeown, senior VP, general manager, and Senior Fellow with Intel’s Network and Edge Group.
These technologies will allow organizations to modernize how they manage networks and datacenters, McKeown said. For example, suppose your network is dropping packets. “We’ve been using ping and traceroute to diagnose network problems since I was a student,” he said, but often such techniques don’t capture transitory problems that might happen in the space of milliseconds. Now it becomes possible to program every element of the network and server infrastructure, McKeown said. “You just write a small program, running in the IPU or the switches, and you can decide whether you need it running all the time or just when needed — because it’s just a program,” he said. And these programs can run at “line speed,” meaning no degradation in performance, he said. “This would have been unimaginable just a few years ago because it would have come at such an expense in terms of loss of throughput,” McKeown added. Such work does require a pretty sophisticated programmer, but organizations with high-performance requirements are willing to do it, he said. AI for the masses At the same time, Intel is working to make sophisticated computing capabilities like machine learning accessible to more people, with initiatives like oneAPI toolkits and the OpenVINO AI inference engine for Intel processors. “We’re making this technology more accessible with low-code to no-code development tools,” said Sandra Rivera, executive VP and general manager of the Datacenter and AI group at Intel. “I like to say it’s the AI you need on the processor you have.” Intel aims to further ramp up the broad availability of AI with Sapphire Rapids, the code name for its next-generation Xeon processor, which it is promising will deliver a 30X performance boost. That means AI developers will be able to work with a general-purpose processor “and not the expensive and power-consuming accelerators we have in the market today,” Rivera said.
"
15,343
2,019
"Complexity Gaming rebrands with an esports manifesto | VentureBeat"
"https://venturebeat.com/2019/05/03/complexity-gaming-rebrands-with-an-esports-manifesto"
"Complexity Gaming rebrands with an esports manifesto Complexity Esports club Complexity Gaming is rebranding itself for the Esports 3.0 era, where it says esports athletes will be treated just like professional sports stars. The Frisco, Texas-based company said it has a new manifesto for esports, and that will drive an organization-wide shift. Market researcher Newzoo forecasts that esports revenue will grow from $906 million in 2018 to $1.7 billion by 2021, and this is a sign that investors are hot on this space. Above: Complexity This overhaul includes an expanded company vision as well as a new logo, color scheme, and website uniquely aligned with their sister team, the Dallas Cowboys. “We’re not just changing a logo. We are building upon Complexity’s legacy as an innovator in the space by paving the way yet again,” said Jason Lake, CEO of Complexity Gaming.
“This expanded vision takes everything we have learned over the course of the last 16 years as a premier esports organization and levels it up — setting a new standard and revolutionizing how esports organizations should operate.” Above: Complexity Gaming’s manifesto. Complexity’s manifesto “This is Our Game” represents a focus on inclusion, access, and support for the next generation of esports players and fans. The organization’s brand narrative and player-centric mentality will deepen with the anticipated opening of Complexity’s new headquarters, the GameStop Performance Center, which will include state-of-the-art performance training and recovery facilities for players as well as a public space for fans and sponsors among other amenities. Above: Complexity As represented by Complexity’s new logo design, the five points of the star articulate the company’s five C’s brand pillars: Competition, Community, Culture, Cause, and Convergence. Each pillar represents a specific area of focus:
• Competition: Continuing to build competitive teams.
• Community: Engaging esports fans both locally and globally.
• Culture: Addressing the dynamic needs of a diverse and growing audience.
• Cause: Contributing to meaningful and relevant causes.
• Convergence: Incorporating advances in technology from traditional sports to continue to evolve and reshape the broader esports ecosystem as a whole.
I’m not sure if this will make them win more games. But they’re quite serious about it. Complexity Gaming has won more than 140 championships in nearly 30 game titles over its more than 15-year history. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games.
Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. Join the GamesBeat community! Enjoy access to special events, private newsletters and more. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,344
2,020
"Wonder raises $11 million to make large virtual events more sociable | VentureBeat"
"https://venturebeat.com/2020/12/07/wonder-raises-11-million-to-make-large-virtual-events-more-sociable"
"Wonder raises $11 million to make large virtual events more sociable
Wonder cofounders Leonard Witteler, Pascal Steck, and Stephane Roux
Virtual event platforms have gained significant traction in a year marked by social distancing. Berlin-based Wonder is part of a new crop of contenders and promises to make large online gatherings more sociable. The startup, which was founded as Yotribe in April, today announced that it has raised a substantial $11 million seed round of funding led by EQT Ventures, with participation from BlueYard Capital. Much as large in-person meetups can segue into more intimate bubbles as people begin to network, Wonder aims to enable smaller, more organic video-based interactions within a larger virtual conference setup. This is reminiscent of features in numerous other events platforms, many of which have breakout spaces.
But Wonder’s maplike interface allows guests to see at a glance who is speaking to whom and move their avatars around to join conversations in a virtual room. Above: Wonder The idea is that online meetings are typically “top-down,” with organizers dictating the structure and layout. This leaves little room for guests to maneuver and reduces the chance of spontaneous encounters and chats. Whether it’s possible to truly recreate physical meetups in this way isn’t clear, as real-world socializing and networking rely on social cues and nonverbal communication. But Wonder strives to mimic real-world social cues through two core actions. If someone wants to initiate a conversation, they can direct their “nose” (a little dot in front of the avatar) toward a person or group, and the recipient can respond by directing their “nose” back toward the initiator. Another option is to “slowly approach each other,” similar to the way you might in a live networking situation. “If someone is not interested, they would just walk away and go somewhere else,” Wonder cofounder Pascal Steck told VentureBeat. “Besides that, people can invite others to form a circle — or to join an existing one — without walking up to them.” This kind of technology may be particularly well-suited to specific corporate encounters, such as company away days, departmental meetings, or other scenarios where people are already familiar with each other. But Wonder can be used for any type of video-based networking event, much as Zoom and others are currently being used for everything from social gatherings to dance classes. And there is already at least one similar tool being developed by New York-based Gatherly. So “spatial video chats” could become a new format in the virtual meeting space.
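Wonder has not published how its rooms decide who belongs to which conversation circle, but the maplike mechanic described above can be pictured with a purely hypothetical sketch: treat any two avatars standing within a small radius of each other as linked, and take the transitive groups as the circles (all names and the radius below are invented for illustration):

```python
from math import dist  # Euclidean distance, Python 3.8+


def conversation_groups(positions, radius=1.5):
    """Group avatars into conversation circles.

    positions: dict mapping avatar name -> (x, y) room coordinates.
    Any two avatars within `radius` of each other end up in the same
    group (union-find over the proximity links). Returns a list of sets.
    """
    names = list(positions)
    parent = {n: n for n in names}

    def find(n):
        # Follow parent pointers to the group's representative,
        # compressing the path as we go.
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n

    # Union every pair of avatars standing close enough to "talk."
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if dist(positions[a], positions[b]) <= radius:
                parent[find(a)] = find(b)

    groups = {}
    for n in names:
        groups.setdefault(find(n), set()).add(n)
    return list(groups.values())


# Two avatars chatting near the origin, one standing across the room:
room = {"ana": (0.0, 0.0), "ben": (1.0, 0.0), "caro": (8.0, 8.0)}
circles = conversation_groups(room)
```

A real implementation would also need the directional “nose” gesture and continuous position updates; this only shows the proximity-grouping idea.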
More broadly, a number of events platforms have raised significant investments over the past few months, including Hopin, which last month secured $125 million at a $2.1 billion valuation, and Bizzabo, which attracted a mammoth $138 million after being forced to add virtual events to its existing offline events platform. Similar companies that have secured sizable investments this year include Hubilo , Welcome , Run The World , and Airmeet. Hybrid Wonder was founded in April by Stephane Roux, Leonard Witteler, and Pascal Steck, and the product was essentially an extension of Witteler’s first coding project, which he developed in his spare time a few years ago and tested with family and friends. With global lockdowns on track to extend throughout 2020 (and likely well into 2021), now is as good a time as any to bring the product to market. In its short tenure, Wonder said it has secured customers spanning a broad gamut, from individuals to corporate entities, including NASA, SAP, and Deloitte, as well as educational bodies like Harvard. With 200,000 users, the company said it has seen 30% week-on-week growth since its April beta launch in terms of active users, based entirely on word of mouth. One big question mark hangs over its business model, however, as the platform is currently free, with little indication as to its eventual pricing structure. “We are thinking about introducing pricing in mid-2021,” Steck said. With $11 million in the bank, the company is well-financed to continue building out its product, but unlike some of the incumbents in the space, it said it won’t be focused on providing customers with interaction or engagement data anytime soon. This is on its longer-term roadmap, however. “Our main focus is the guest experience,” Steck said. “We believe that delivering the best possible guest experience beats everything else. 
One great side effect of the virtual space is that anyone — including the host — has a good overview of the dynamics of the event, and therefore a good understanding of interactions and engagement.” But there is a growing sense that even if businesses do embrace physical gatherings with the same enthusiasm as they did before the pandemic, virtual events will remain part of a new hybrid approach. One of the reasons for this is that companies have now realized how much potential virtual events hold in terms of opening things up to a wider audience. Virtual events also give event organizers valuable data that is difficult to gather in the brick-and-mortar world, spanning registrations, attendance, sessions watched, questions submitted, messages sent, connections made, and more. Leading European technology conference Web Summit recently confirmed that it will be going with a hybrid model next year — using an events platform it developed entirely in-house. Reuters also said it would be adopting a hybrid events model next year, combining local networking meetups with online incarnations after it saw some success with this approach during the pandemic. As the world slowly transitions back to some semblance of normality in 2021 and beyond, Wonder is betting on a hybrid future. “We believe that in a hybrid event setting, having a lively virtual space where virtual guests actually meet, talk, and mingle trumps everything else,” Steck said. “We will let hosts link the offline elements into the virtual ones through feeds and other interactions. This way, real social interactions take place offline, as well as online at the same time.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
"
15,345
2,012
"Big leap in bio-engineering: scientists simulate an entire organism in software for the first time ever | VentureBeat"
"https://venturebeat.com/2012/07/21/big-leap-in-bio-engineering-scientists-simulate-an-entire-organism-in-software-for-the-first-time-ever"
"Big leap in bio-engineering: scientists simulate an entire organism in software for the first time ever
Scientists at Stanford University and the J. Craig Venter Institute — remember the Human Genome project — have simulated an entire organism in software for the first time ever. Using a 128-node computing cluster, a team of scientists led by Stanford professor Markus Covert incorporated data from more than 900 scientific papers and 1,900 experiments to simulate every molecular interaction and the effects of all 525 genes of the smallest known free-living bacterium: the parasite Mycoplasma genitalium. And, yes, that bacterium lives right where its name suggests. Simulating a single cell division takes about 10 hours, according to Covert, and generates half a gigabyte of data. Not exactly big data, but then it is a very small organism.
Adding more computing power should shorten the simulation time. But the reason for building a bacterium in emulation? “You don’t really understand how something works until you can reproduce it yourself,” says graduate student and team member Jayodita Sanghvi. Now that an entire organism has been simulated, researchers believe that Bio-CAD (computer-aided-design) will take a big leap forward. From understanding genes in isolation the scientists look forward to being able to study their interactions, which is an essential step to understanding key issues in health, disease, and growth. As Max McClure wrote in a story published by Stanford’s news service: CAD – computer-aided design – has revolutionized fields from aeronautics to civil engineering by drastically reducing the trial-and-error involved in design. But our incomplete understanding of even the simplest biological systems has meant that CAD hasn’t yet found a place in bioengineering. The new simulation will help scientists understand biology better, and understand cells better, the researchers say. And it will help both speed up research and enable testing that just isn’t possible otherwise. “If you use a model to guide your experiments, you’re going to discover things faster. We’ve shown that time and time again,” said team leader Covert. But don’t get your hopes up too high for personalized medicine or helpful simulations of how medication will affect you before you actually have to take it. A fully simulated human being is still a long way off. Image credit: Image Wizard/ShutterStock , Victor Habbick/ShutterStock
"
15,346
2,021
"Understanding the differences between biological and computer vision | VentureBeat"
"https://venturebeat.com/2021/05/15/understanding-the-differences-between-biological-and-computer-vision"
"Understanding the differences between biological and computer vision
Since the early years of artificial intelligence, scientists have dreamed of creating computers that can “see” the world. As vision plays a key role in many things we do every day, cracking the code of computer vision seemed to be one of the major steps toward developing artificial general intelligence. But like many other goals in AI, computer vision has proven to be easier said than done. In 1966, scientists at MIT launched “The Summer Vision Project,” a two-month effort to create a computer system that could identify objects and background areas in images. But it took much more than a summer break to achieve those goals. In fact, it wasn’t until the early 2010s that image classifiers and object detectors were flexible and reliable enough to be used in mainstream applications.
In the past decades, advances in machine learning and neuroscience have helped make great strides in computer vision. But we still have a long way to go before we can build AI systems that see the world as we do. Biological and Computer Vision, a book by Harvard Medical School professor Gabriel Kreiman, provides an accessible account of how humans and animals process visual data and how far we’ve come toward replicating these functions in computers. Kreiman’s book helps readers understand the differences between biological and computer vision. The book details how billions of years of evolution have equipped us with a complicated visual processing system, and how studying it has helped inspire better computer vision algorithms. Kreiman also discusses what separates contemporary computer vision systems from their biological counterpart. While I would recommend a full read of Biological and Computer Vision to anyone who is interested in the field, I’ve tried here (with some help from Gabriel himself) to lay out some of my key takeaways from the book. Hardware differences In the introduction to Biological and Computer Vision, Kreiman writes, “I am particularly excited about connecting biological and computational circuits. Biological vision is the product of millions of years of evolution. There is no reason to reinvent the wheel when developing computational models. We can learn from how biology solves vision problems and use the solutions as inspiration to build better algorithms.” And indeed, the study of the visual cortex has been a great source of inspiration for computer vision and AI. But before being able to digitize vision, scientists had to overcome the huge hardware gap between biological and computer vision. Biological vision runs on an interconnected network of cortical cells and organic neurons.
Computer vision, on the other hand, runs on electronic chips composed of transistors. Therefore, a theory of vision must be defined at a level that can be implemented in computers in a way that is comparable to living beings. Kreiman calls this the “Goldilocks resolution,” a level of abstraction that is neither too detailed nor too simplified. For instance, early efforts in computer vision tried to tackle computer vision at a very abstract level, in a way that ignored how human and animal brains recognize visual patterns. Those approaches have proven to be very brittle and inefficient. On the other hand, studying and simulating brains at the molecular level would prove to be computationally inefficient. “I am not a big fan of what I call ‘copying biology,’” Kreiman told TechTalks. “There are many aspects of biology that can and should be abstracted away. We probably do not need units with 20,000 proteins and a cytoplasm and complex dendritic geometries. That would be too much biological detail. On the other hand, we cannot merely study behavior—that is not enough detail.” In Biological and Computer Vision, Kreiman defines the Goldilocks scale of neocortical circuits as neuronal activities per millisecond. Advances in neuroscience and medical technology have made it possible to study the activities of individual neurons at millisecond time granularity. And the results of those studies have helped develop different types of artificial neural networks , AI algorithms that loosely simulate the workings of cortical areas of the mammal brain. In recent years, neural networks have proven to be the most efficient algorithm for pattern recognition in visual data and have become the key component of many computer vision applications. Architecture differences Above: Biological and Computer Vision, by Gabriel Kreiman. 
The recent decades have seen a slew of innovative work in the field of deep learning , which has helped computers mimic some of the functions of biological vision. Convolutional layers , inspired by studies made on the animal visual cortex, are very efficient at finding patterns in visual data. Pooling layers help generalize the output of a convolutional layer and make it less sensitive to the displacement of visual patterns. Stacked on top of each other, blocks of convolutional and pooling layers can go from finding small patterns (corners, edges, etc.) to complex objects (faces, chairs, cars, etc.). But there’s still a mismatch between the high-level architecture of artificial neural networks and what we know about the mammal visual cortex. “The word ‘layers’ is, unfortunately, a bit ambiguous,” Kreiman said. “In computer science, people use layers to connote the different processing stages (and a layer is mostly analogous to a brain area). In biology, each brain region contains six cortical layers (and subdivisions). My hunch is that six-layer structure (the connectivity of which is sometimes referred to as a canonical microcircuit) is quite crucial. It remains unclear what aspects of this circuitry should we include in neural networks. Some may argue that aspects of the six-layer motif are already incorporated (e.g. normalization operations). But there is probably enormous richness missing.” Also, as Kreiman highlights in Biological and Computer Vision , information in the brain moves in several directions. Light signals move from the retina to the inferior temporal cortex to the V1, V2, and other layers of the visual cortex. But each layer also provides feedback to its predecessors. And within each layer, neurons interact and pass information between each other. All these interactions and interconnections help the brain fill in the gaps in visual input and make inferences when it has incomplete information. 
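The convolution-then-pooling stacking described above can be sketched in a few lines of plain Python (this toy example is mine, not from the book; the 4x4 images and the edge filter are made up for illustration). A convolutional filter responds where a small pattern appears, and max-pooling keeps that response stable when the pattern shifts slightly:

```python
def conv2d(image, kernel):
    """'Valid' 2D cross-correlation of a 2D list `image` with a small kernel."""
    kh, kw = len(kernel), len(kernel[0])
    rows = len(image) - kh + 1
    cols = len(image[0]) - kw + 1
    return [
        [
            sum(image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw))
            for j in range(cols)
        ]
        for i in range(rows)
    ]


def max_pool(fmap, size=2):
    """Non-overlapping max-pooling over size x size windows."""
    return [
        [
            max(fmap[i + di][j + dj]
                for di in range(size) for dj in range(size))
            for j in range(0, len(fmap[0]) - size + 1, size)
        ]
        for i in range(0, len(fmap) - size + 1, size)
    ]


edge = [[-1, 1], [-1, 1]]      # responds to a dark-to-bright vertical edge
img = [[0, 0, 1, 1]] * 4       # edge between columns 1 and 2
shifted = [[0, 1, 1, 1]] * 4   # the same edge, shifted one pixel left

# The raw convolution outputs differ, but the pooled responses match:
pooled = max_pool(conv2d(img, edge))
pooled_shifted = max_pool(conv2d(shifted, edge))
```

Stacking more such conv/pool blocks on top of each other is what lets real networks progress from edges and corners to faces and cars, as the text describes.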
In contrast, in artificial neural networks, data usually moves in a single direction. Convolutional neural networks are “feedforward networks,” which means information only goes from the input layer to the higher and output layers. There’s a feedback mechanism called “backpropagation,” which helps correct mistakes and tune the parameters of neural networks. But backpropagation is computationally expensive and only used during the training of neural networks. And it’s not clear if backpropagation directly corresponds to the feedback mechanisms of cortical layers. On the other hand, recurrent neural networks, which combine the output of higher layers into the input of their previous layers, still have limited use in computer vision. Above: In the visual cortex (right), information moves in several directions. In neural networks (left), information moves in one direction. In our conversation, Kreiman suggested that lateral and top-down flow of information can be crucial to bringing artificial neural networks closer to their biological counterparts. “Horizontal connections (i.e., connections for units within a layer) may be critical for certain computations such as pattern completion,” he said. “Top-down connections (i.e., connections from units in a layer to units in a layer below) are probably essential to make predictions, for attention, to incorporate contextual information, etc.” He also pointed out that neurons have “complex temporal integrative properties that are missing in current networks.” Goal differences Evolution has managed to develop a neural architecture that can accomplish many tasks. Several studies have shown that our visual system can dynamically tune its sensitivities to the task at hand. Creating computer vision systems that have this kind of flexibility remains a major challenge, however. Current computer vision systems are designed to accomplish a single task.
We have neural networks that can classify objects, localize objects, segment images into different objects, describe images, generate images, and more. But each neural network can accomplish a single task alone. Above: Harvard Medical School professor Gabriel Kreiman, author of “Biological and Computer Vision.” “A central issue is to understand ‘visual routines,’ a term coined by Shimon Ullman; how can we flexibly route visual information in a task-dependent manner?” Kreiman said. “You can essentially answer an infinite number of questions on an image. You don’t just label objects, you can count objects, you can describe their colors, their interactions, their sizes, etc. We can build networks to do each of these things, but we do not have networks that can do all of these things simultaneously. There are interesting approaches to this via question/answering systems, but these algorithms, exciting as they are, remain rather primitive, especially in comparison with human performance.” Integration differences In humans and animals, vision is closely related to smell, touch, and hearing senses. The visual, auditory, somatosensory, and olfactory cortices interact and pick up cues from each other to adjust their inferences of the world. In AI systems, on the other hand, each of these things exists separately. Do we need this kind of integration to make better computer vision systems? “As scientists, we often like to divide problems to conquer them,” Kreiman said. “I personally think that this is a reasonable way to start. We can see very well without smell or hearing. Consider a Chaplin movie (and remove all the minimal music and text). You can understand a lot. If a person is born deaf, they can still see very well.
Sure, there are lots of examples of interesting interactions across modalities, but mostly I think that we will make lots of progress with this simplification.” However, a more complicated matter is the integration of vision with more complex areas of the brain. In humans, vision is deeply integrated with other brain functions such as logic, reasoning, language, and common sense knowledge. “Some (most?) visual problems may ‘cost’ more time and require integrating visual inputs with existing knowledge about the world,” Kreiman said. He pointed to the following picture of former U.S. president Barack Obama as an example. Above: Understanding what is going on in this picture requires world knowledge, social knowledge, and common sense. To understand what is going on in this picture, an AI agent would need to know what the person on the scale is doing, what Obama is doing, who is laughing and why they are laughing, etc. Answering these questions requires a wealth of information, including world knowledge (scales measure weight), physics knowledge (a foot on a scale exerts a force), psychological knowledge (many people are self-conscious about their weight and would be surprised if their weight is well above the usual), social understanding (some people are in on the joke, some are not). “No current architecture can do this. All of this will require dynamics (we do not appreciate all of this immediately and usually use many fixations to understand the image) and integration of top-down signals,” Kreiman said. Areas such as language and common sense are themselves great challenges for the AI community. But it remains to be seen whether they can be solved separately and integrated together along with vision, or whether integration itself is the key to solving all of them. “At some point we need to get into all of these other aspects of cognition, and it is hard to imagine how to integrate cognition without any reference to language and logic,” Kreiman said.
“I expect that there will be major exciting efforts in the years to come incorporating more of language and logic in vision models (and conversely incorporating vision into language models as well).” Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics. "
15,347
2,021
"Resolve, Zeiss partner on spatial biology apps that let scientists see inside cells | VentureBeat"
"https://venturebeat.com/2021/08/03/resolve-zeiss-partner-on-spatial-biology-apps-that-let-scientists-see-inside-cells"
"Resolve, Zeiss partner on spatial biology apps that let scientists see inside cells
Resolve Biosciences and Zeiss today announced a collaboration to improve spatial biology applications. The partnership will help enable digital twins of cells to combine information about the shape, size, gene activations, and protein interactions of cells. This promises to accelerate the market for spatial biology tools that help scientists and doctors peer into the inner workings of cells. The new collaboration will focus on improving the integration between Zeiss’ advanced microscopy and 3D imaging solutions for subcellular spatial analysis and Resolve’s genetic analysis technology. The companies hope to streamline the workflows by developing novel optical systems and computational approaches.
“Spatial biology is the next frontier in life science and diagnostic applications,” Resolve CEO Jason Gammack told VentureBeat. “We refer to this transition as Genomics 3.0.” Gammack believes it will usher in a new era of molecular biology that mirrors previous genomic, proteomic, and metabolomic developments. Zeiss is one of the world’s largest producers of high-powered microscopes. Resolve is a startup focused on developing tools for watching genes and proteins interact in a cell. The company received $24 million in funding last December. Resolve’s Molecular Cartography platform has been in development since 2016 and is currently available through a commercial service offering to researchers in Europe and North America. The company’s existing platform offers users the ability to watch transcription activity — how genes are activated — and correlate this with a 3D microscopic view of what is happening in the elements of a cell called organelles. Future upgrades to the technology will improve the ability to track DNA sequences and add more comprehensive information about protein and metabolomic interactions. The partnership could help Resolve keep pace with a bevy of new competitors in the spatial biology market. Recent entrants include 10x Genomics, Spatial Genomics, BGI Research, and nanoString. Acquisitions and mergers in the space include 10x Genomics’ recent acquisitions of Spatial Transcriptomics, ReadCoor, and Cartana. Dawn of the spatial biology market The field of spatial biology began to take hold in 2016 with the discovery of new techniques to watch gene expression at the level of individual cells in a process called spatial transcriptomics. Nature Methods declared spatially resolved transcriptomics the method of the year. Resolve calls their specific flavor of technology molecular cartography.
JP Morgan predicts spatial biology could grow into a $2 billion market. More significantly, it could lead to much more sophisticated and comprehensive maps of humans and microbes. Scientists will have a much better window into how cells trigger proteins, how they respond to microbes, and exactly how drugs work inside the cell. Spatial biology could also help scientists improve microbial engineering for industrial applications such as breaking down waste, growing drugs, and mass-producing vaccines. Current Molecular Cartography applications include studying the progression of COVID-19 between cells, mapping neural development, and understanding how eye retinas develop and age.

Spatial biology provides new context for genomic info

The real power of digital twins lies in finding better ways to co-register information from different sensors. In larger digital twins, this might include lidar, enterprise applications, and building information models. At the subcellular scale, the innovation lies in finding better ways to align high-resolution microscopic images with data about gene transcription and protein activation. Spatial biology provides an entirely new context for genomic information: it allows scientists to understand the temporal-spatial relationship of biological activity within cells and tissues for the first time.

The future of spatial biology will also require new data analytics pipelines that find novel ways of combining different kinds of data for further research and diagnostics. Future advances will benefit from improvements in the workflows enabled by this new partnership and from integration with other tools and services. “We are also actively engaging with the rich community of open source developers to ensure our datasets can be analyzed with their next-generation tools,” Gammack said.

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,348
2,021
"Colossal wants to resurrect the woolly mammoth | VentureBeat"
"https://venturebeat.com/2021/09/13/colossal-wants-to-bring-the-woolly-mammoth-back-to-life"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Colossal wants to resurrect the woolly mammoth Share on Facebook Share on X Share on LinkedIn Colossal cofounders Ben Lamm (left) and George Church (right) Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As the climate crisis worsens, technology companies are increasingly sparking conversations about how tech can make an impact. There’s talk about intelligent grids and using data and machine learning to combat climate change in a variety of ways. But one company, Colossal, which emerged from stealth today, is taking an especially unique approach: wrangling massive amounts of data — specifically biological data — in an attempt to bring the currently extinct woolly mammoth back to life. The idea is that returning the woolly mammoth to its native arctic grasslands can help repair the currently degraded ecosystem and act as a step toward repairing our planet. 
And this work is already well underway — the well-known Harvard biologist George Church and his team at the university have been making progress on the initiative for years. Now, Church and cofounder Ben Lamm, a serial entrepreneur in the emerging-tech space, are spinning the academic research into a startup to accelerate the work, better fund it, and take it to “the next level.” “Being able to truly understand genomics and synthetic biology and dig deeper into being able to preserve and recreate life, I think [those] are pretty fundamental steps towards that future,” Lamm told VentureBeat. “Decades from now, I think there are many cascading scientific revolutions that could come from this.”

The woolly mammoth project

While the concept may sound like science fiction (which indeed has always been an inspiration for Lamm’s work), the team says there are no big science hurdles standing in the way. “As George says, the science is solved,” said Eriona Hysolli, head of biological sciences at Colossal. She added that “the tools are there” and “it’s just about scaling it up.” At Harvard, Hysolli spent six years working in Church’s lab, developing and optimizing novel genetic tools to make genome editing much more flexible. Specifically, Hysolli, Church, and team have been tackling the woolly mammoth project using CRISPR, the popular and sometimes controversial gene-editing technology. They’re also building software that will be used to assemble genomes.

Lamm says the company believes it will have its first woolly mammoth calves in the next four to six years. And while the company is laser-focused on delivering on this and repopulating the Arctic with the mammoth, the technique could be replicated to “de-extinct” other species and even conserve ones that are currently endangered.
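At the level of sequence data, a targeted edit of the kind CRISPR enables can be pictured as a find-and-replace on a genome string. The toy below is purely illustrative (the sequences are invented, and real editing acts on living cells, not strings):

```python
# Toy illustration of a targeted edit: locate a short guide sequence in a
# genome and swap in a variant allele at that site.
genome = "ATTGCCGGA" + "TTGAGCC" + "CGGATAAC"   # hypothetical sequence
guide = "TTGAGCC"                               # site to target
edit = "TTGTGCC"                                # desired variant

site = genome.find(guide)
assert site != -1, "guide sequence not found in genome"
edited = genome[:site] + edit + genome[site + len(guide):]
print(edited)
# -> ATTGCCGGATTGTGCCCGGATAAC
```

The hard parts in practice are delivery, off-target effects, and scale, which is exactly why the team pairs the editing tools with genome-assembly software.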
Almost like a hard drive, Lamm says, the technology Colossal is building can be used to “back up” and preserve critically endangered species and bring them back to life.

All about the data

“In biology, it’s all about big data. And at Colossal, it’s about a colossal amount of data,” Hysolli said. But it’s not just that the genome sequence itself is huge; parsing all that data is also a tremendous challenge. And while working on de-extinction, the team also intends to improve the tools currently available to biologists so they can more easily make comparative analyses between genomes. Often, researchers have to go to multiple data stores to get the data they need, Lamm says, adding that “they’re not normalized and not connected.” “Different universities and scientists in different labs are leveraging data from a different perspective. They have their own databases, so I think there’s a huge opportunity for normalization of that data, and to speed up the processing of that data and to build tools on top of that data,” Lamm added. "
15,349
2,020
"CCPA compliance lags as enforcement begins in earnest | VentureBeat"
"https://venturebeat.com/2020/07/04/ccpa-compliance-lags-as-enforcement-begins-in-earnest"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest CCPA compliance lags as enforcement begins in earnest Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. [Updated July 6, 2020 to correct the company name of TrustArc.] Enforcement of the California Consumer Privacy Act (CCPA) began on Wednesday July 1, despite the final proposed regulations having just been published on June 1 and pending review by the California Office of Administrative Law (OAL). The July 1 date has left companies, many of which were hoping for leniency during the pandemic , scrambling to prepare. COVID-19 appears to be shifting the privacy compliance landscape in other parts of the world — both Brazil’s LGDP and India’s PDPB have seen delays that will impact when the laws will go into effect. Nonetheless, the California Attorney General (CAG) has not capitulated on the CCPA’s timeline, with the attorney general’s office stating: “CCPA has been in effect since January 1, 2020. 
We’re committed to enforcing the law starting July 1 … We encourage businesses to be particularly mindful of data security in this time of emergency.”

With the CCPA being one of the most demanding pieces of privacy legislation some companies have ever faced, compliance has understandably lagged. In 2019, various estimates placed the percentage of organizations that would be ready for the CCPA by January 2020 somewhere between 12% and 34%. A recent poll by TrustArc revealed that as of June 2020 just 14% of companies were completely done with CCPA compliance, while another 15% had a plan but hadn’t started implementation. That leaves 71% of companies whose plans for CCPA compliance are unaccounted for. These numbers, while large, might not be all that surprising, as only 28% of firms were compliant with GDPR over a year after it went into effect, with companies greatly underestimating what it would take to be compliant.

What should companies expect next?

Although the CAG’s ability to take enforcement actions is now in effect, companies can be held liable for breaches of the law that occurred earlier in the year. Additionally, consumers have been able to take legal action against non-compliant companies since the beginning of the year, with at least 19 lawsuits having been filed since January 1, 2020. These lawsuits illustrate the circumstances under which enforcement can take place as well as the potential compliance blind spots companies might face.

Companies also face the prospect of new California privacy legislation in the form of the California Privacy Rights Act of 2020 (CalPRA or CPRA), colloquially referred to as CCPA 2.0. The initiative has collected over 900,000 signatures and is expected to be on the November 2020 ballot, with 88% of Californians supporting its passage.
Although this bill is not expected to take effect until January 1, 2023, organizations lagging behind on CCPA compliance will likely struggle to meet their obligations under the CPRA as well.

What should companies behind on CCPA compliance be doing?

Companies that are just now starting to implement their compliance programs should do their best to align themselves with the final regulations that have been sent to the OAL. While there’s no silver bullet, below are some considerations worth taking into account:

Operationalizing the CCPA at scale requires a serious commitment to security. The CCPA has formally made clear that the era of security as an afterthought is over. Although the legislation is fairly agnostic about the types of security frameworks and controls organizations will have to deploy to ensure CCPA compliance, it’s apparent that satisfying the functional requirements of the CCPA will require developing comprehensive data discovery and data security programs organization-wide. For example, the ability to provide accurate disclosure notices at collection or within privacy policies, as well as the ability to process consumer requests and reduce breach risk, all implicitly require companies to understand the categories of data they ingest. Companies will also need to know how this data is used, where it’s stored, and who has access to it. This will often require building consistent security processes with the help of tools like privileged access management, securely configured firewalls, and application security controls like data loss prevention. While it’s true that strong security practices alone aren’t enough to operationalize CCPA compliance, companies that are already complying with one or more privacy regimes or that otherwise have mature information security programs will likely find compliance easier.

Continuous compliance requires clear ownership within your compliance program.
While IT and security will form the bedrock of an organization’s ability to comply with the CCPA, it may not be the case that IT or security should own the entirety of your organization’s compliance initiative. Your organization’s structure and the business purpose served by consumer data collection should inform who the relevant stakeholders will be. Clearly delineating who’s responsible for which aspects of your organization’s compliance program will be critical to making sure your program makes sense and will scale well as the privacy landscape continues to evolve.

Make your compliance program future-proof. While no one in your organization likely has a crystal ball, you don’t exactly need one to see that privacy is the future and that investing in consumer privacy today is a smart decision. Despite stalled privacy legislation stateside and abroad, the GDPR, CCPA, and potentially the CPRA will continue to serve as bulwarks that future legislation will aspire to. This means that should your organization limit itself to simply satisfying CCPA requirements, you’ll likely find yourself playing catch-up as the privacy landscape matures. Aiming to have your security and compliance programs ensure the same rights and protections across your entire customer base will ensure you stay ahead of the game.

Michael Osakwe is a tech writer and Content Marketing Manager at Nightfall AI.
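The data-mapping requirement running through these considerations, knowing what categories of personal data you hold, where each lives, and who can access it, lends itself to a simple inventory structure. A minimal sketch with hypothetical records (the store names and categories are invented):

```python
# A toy data inventory: one row per category of personal data, recording
# where it is stored and which teams can access it -- the information
# needed to answer disclosure notices and consumer requests.
inventory = [
    {"category": "email_address", "store": "crm_db", "access": ["marketing", "support"]},
    {"category": "purchase_history", "store": "orders_db", "access": ["analytics"]},
    {"category": "ip_address", "store": "web_logs", "access": ["security"]},
]

def stores_holding(category):
    """Answer 'where does this category of data live?' for a consumer request."""
    return [row["store"] for row in inventory if row["category"] == category]

print(stores_holding("email_address"))
# -> ['crm_db']
```

In practice this record would live in data-discovery tooling rather than a hand-maintained list, but the shape of the answer is the same.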
"
15,350
2,020
"How data is making game creative perform more powerfully than ever (VB Live) | VentureBeat"
"https://venturebeat.com/2020/11/19/how-data-is-making-game-creative-perform-more-powerfully-than-ever-vb-live"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Live How data is making game creative perform more powerfully than ever (VB Live) Share on Facebook Share on X Share on LinkedIn Presented by yellowHEAD With data, you can optimize game creative in more ways than ever before. In this VB Live event, GamesBeat, yellowHEAD, Zynga, and Snapchat discuss how to gain a competitive edge through new groundbreaking creative technologies and more. Register here for free. “Data lets you get creative with your creatives,” says Maria Waters, head of conversion optimization at Zynga. “Before, there were a lot of hunches involved in figuring out what was going to work, but now you can test everything and try everything. Don’t think twice about changing things — being open-minded goes a long way.” Developing creative has become a more dynamic process than ever before, l etting game marketers dive deeper into what might be driving your players , or potential players, as well as keeping an eye on what your competitors are doing, Waters says. 
Rather than just looking at creative as a piece of design, it’s about considering the emotions people might have when looking at an ad. Do you want them to feel like they’re getting help? Do you want them to feel happy? If it’s a slots game, do you want them to feel like they’ve won a lot of money?

“Be open to testing things,” says Waters. “Using yellowHEAD’s creative analysis platform, Alison, we’re able to optimize fully, and understand not only conversion metrics, but also ROAS metrics and retention metrics. We get to see the full picture of what’s working on the micro level.” For example, the team can understand what’s working in Zynga’s Harry Potter game: how much the brand plays a role in conversion, how long creatives need to be, and which hero images to feature.

The first order of business is always trying to understand who’s playing their games, explains Waters, and what the game data reveals about those players. Gender and age are very important factors for their ads. For games that appeal to older women, they steer clear of the popular TikTok-style ads — they’ve learned these won’t resonate well with that audience. With a match-three game, they might target users with a purist approach, using the game features to sell it. “We don’t rely heavily on just one style — we try to use many components,” Waters says. “Where that helps us, for example, is spending more on creatives that would perform with our core audience, while testing other styles that are more trendy to help us attract a broader audience.”

As creative rises to a new dimension, it’s important to stay on top of its evolution, understand what other studios are doing, and examine what elements might work for your own games. But Zynga has seen big wins with Alison by daring to do new things that haven’t been done elsewhere.
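The multi-metric view Waters describes, conversion plus ROAS plus retention, can be sketched with a few lines of arithmetic. The creative names and numbers below are hypothetical:

```python
# Judge creatives on more than one metric: cost per install (a conversion
# metric) and ROAS (return on ad spend), not conversions alone.
creatives = {
    "video_a": {"spend": 1000.0, "installs": 200, "revenue": 1800.0},
    "static_b": {"spend": 1000.0, "installs": 260, "revenue": 1200.0},
}

def score(c):
    """Return (cost per install, return on ad spend) for one creative."""
    return c["spend"] / c["installs"], c["revenue"] / c["spend"]

for name, c in creatives.items():
    cpi, roas = score(c)
    print(f"{name}: CPI=${cpi:.2f} ROAS={roas:.2f}")
# static_b looks better on install cost alone, but video_a returns more
# per dollar spent -- the fuller picture described above.
```

Ranking by install cost alone would pick the wrong creative here, which is the point of looking past conversion metrics.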
For example, for the longest time they couldn’t get any headway on Words with Friends, Waters explains. But when they embraced video ads, trying drastically different things that didn’t even look like the game but could still appeal to a Words lover, they brought a lot of efficiency into their final lineup of ads. For an upcoming unbranded game, they’ve been able to really push the envelope, with creatives that don’t even include gameplay.

“By trying drastically different things, and continuing to test, we’re seeing a lot of success, not only in conversion metrics, but also in yield,” she says. “We do very quick ads to try to understand and learn quickly. You can identify the things that you might have been overlooking, and you can go really deep to understand what really performs. Seeing performance, learning quickly, understanding the things in the game that are performing, and iterating on that — that’s made all the difference for us.”

Don’t miss out! Register here for free. You’ll learn about:

How data-based creatives boost user acquisition
Developing your own set of best practices
Optimizing creatives for every social media platform
How to gain a competitive edge in your UA efforts with smart creative ideation
And more!

Speakers:

Maria Waters, Head of Conversion Optimization, Zynga
Oliver Wapshott, Creative Strategist, Snap
Noa Miller, Marketing Creative Strategist, yellowHEAD
Dean Takahashi, Lead Writer, GamesBeat (moderator)
"
15,351
2,021
"Google: No tracking tools to replace third-party cookies in Chrome | VentureBeat"
"https://venturebeat.com/2021/03/04/google-no-tracking-tools-to-replace-third-party-cookies-in-chrome"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google: No tracking tools to replace third-party cookies in Chrome Share on Facebook Share on X Share on LinkedIn Google Chrome Logo Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. ( Reuters ) — Alphabet’s Google will not build or use alternate tools to track web browsing traffic once it begins phasing out existing technology from its Chrome browser next year, the company said in a blog post on Wednesday, in a move that will reshape how online advertising works. Google first announced it would get rid of third-party cookies, which for decades have enabled online ads, early last year to meet growing data privacy standards in Europe and the United States. Privacy activists have for years criticized tech companies, including Google, for using cookies to gather web browsing records across websites they don’t own, enabling them to develop profiles on users’ interests to serve personalized ads. 
Now Google is pledging that it will not use other technology to replace the cookies or build features inside Chrome to give itself access to that data, though it continues to test ways for businesses to target ads to large groups of anonymous users with common interests.

“Keeping the internet open and accessible for everyone requires all of us to do more to protect privacy — and that means an end to not only third-party cookies, but also any technology used for tracking individual people as they browse the web,” Google said in the blog post.

Rival advertising tech companies, including Criteo SA and The Trade Desk, are building tools to identify users across the web anonymously. Both companies saw their shares drop in January 2020 immediately after Google first announced it would eliminate cookies, but those shares have risen consistently over the past year. Shares of The Trade Desk were down 2.2% in trading before the bell on Wednesday.
"
15,352
2,021
"Apple's newest privacy changes mean more rework for the ad industry | VentureBeat"
"https://venturebeat.com/2021/07/10/apples-newest-privacy-changes-mean-more-rework-for-the-ad-industry"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Apple’s newest privacy changes mean more rework for the ad industry Share on Facebook Share on X Share on LinkedIn Apple is enforcing tougher privacy rules. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As Apple continues to erode ad targeting capabilities with the privacy upgrades in its upcoming iOS 15 release , ad tech is crying “foul.” Because Apple’s primary source of revenue is not from advertising, it can change the rules within its ecosystem with minimal financial risk to itself while imposing a severe impact on other ad tech companies. By turning its rivals’ (read Google and Facebook’s) key revenue source into a liability while wooing consumers with a privacy-first message, Apple has become the leader of the very market it publicly decries. 
Apple’s ad campaigns have made public its stance that advertisements are intrusive and violate our privacy, and that Apple is stepping up to safeguard us from this “invisible stalker.” CEO Tim Cook himself has been quoted saying ad tracking is “creepy.” By strategically aligning its marketing messaging with public sentiment, Apple comes out looking like the good guy.

Specifically, the company has said that its iOS 15 release will premiere a new privacy solution, Private Relay, which can mask IP addresses, preventing companies from acquiring a user’s location without consent. Private Relay also lets users log into websites with anonymized email addresses, putting a wrinkle in an industry where a majority of data providers are latching onto email as a replacement for the third-party cookie. Configurability varies across Apple’s different privacy features; some are customizable on an app-by-app basis, while others are simply on/off, limiting users’ ability to personalize the value exchange they wish to have with different companies and apps. For example, people often want to share location data with companies that provide value in return, such as a fitness app or a chain store that offers local promotions. Apple’s proposed features won’t always make that possible.

By putting a spotlight on the various ways data is harvested and used across the digital ecosystem, Apple portrays itself as the privacy protector siding with the people. This ethos is similar to Reddit’s WallStreetBets movement, strengthening people’s ability to take down systemic institutions. In Apple’s case, these institutions are the billion-dollar companies that make up a majority of the ad tech landscape — specifically Google and Facebook.
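The anonymized-email feature rests on a simple principle: sites only ever see a random, per-site alias that a relay forwards to the real inbox. A sketch of the idea (not Apple's implementation; the addresses and relay domain are invented):

```python
# A toy email relay: hand each site a random alias so no site learns the
# user's stable address, and resolve aliases only at delivery time.
import secrets

REAL_ADDRESS = "user@example.com"   # hypothetical real inbox
forwarding_table = {}               # alias -> real address, known only to the relay

def new_alias(domain="relay.example"):
    alias = f"{secrets.token_hex(6)}@{domain}"
    forwarding_table[alias] = REAL_ADDRESS
    return alias

alias = new_alias()
print(alias)  # a fresh random alias, e.g. '3f9c1a2b4d5e@relay.example'
```

Because every site gets a different alias, email-based identity graphs cannot use the address to join one user's accounts across sites.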
By building up popular support for privacy and restricted ad tracking to complete a successful end-around, Apple is winning over the consumer while hurting its competitors, and the digital advertising industry suffers collateral damage from the fallout.

Ad tech scrambles while Apple plays the long game

While the iOS 15 announcement still represents a “wait and see” moment for the industry, the masterful moves Apple has made over nearly a decade as a privacy advocate have led us to this point, and many of us have been like frogs in a pot of water, waiting and watching as we are boiled. When the Device ID was translated to the IDFA in 2012, Apple took its first major swing at the ad industry by obscuring user data. The second blow in Apple’s privacy battle was aimed at browser-based traffic with the introduction of ITP in 2017, which put restrictions on the longevity of the browser’s third-party cookie. Many criticized Apple as hypocritical for placing more stringent privacy safeguards on its browser platform than on its app ecosystem, so earlier this year Apple finally reached privacy parity across both the browser and app ecosystems with the launch of iOS 14.5 and the introduction of SKAdNetwork.

The ad tech industry, having no choice but to act, responded by building Unified ID 2.0 and other ID solutions that depend on an email address. Some companies even use the IP address in their calculus for higher ID resolution. Both of these methods pull against the goal of ensuring privacy. Apple is now countering these moves with iOS 15, offering obfuscation of both the email and the IP on its platform. Checkmate.

A lesson in strategy

We’re all dedicating significant resources toward building our “post-cookie” and “post-IDFA” strategies right now, but we’re not thinking big enough. We need a fully independent, privacy-safe industry that isn’t tied to any one giant tech company. But in order to accomplish this, a few things must change.
First, brands must work like Apple, appealing to their customers with a clear value exchange to gather data under full consent. Come out in favor of privacy, be vocal about how it works and what consumers can expect, then create a media plan that lives up to this. Sephora and Thrive Market are examples of companies that live their brand truth across product, company positioning, and advertising approach. Then, publishers need to cooperate with brands to activate these relationships in a clean content environment to preserve and enhance that value exchange. The goal here is to improve the overall ad experience while maintaining publisher margins.

Sharing data safely is key to making this work at scale. The IAB is proposing a standard data taxonomy, and we should all be involved in refining this and other initiatives like it so that we’re able to work together easily. We can also look at the value-exchange drivers that work best across brands and publishers to help our industry define itself apart from Google, Facebook, and Apple. Perhaps consumers will one day have unique data profiles with each brand they interact with — a better scenario to find ourselves in than an oligarchy defined by the giants.

The silver linings prove privacy-first thinking is key

The global forces moving our industry toward a consent-based model have their positives. In addition to the obvious improvement in company-customer value exchange already discussed, coming moves by Apple may actually help publishers who rely on ad revenue. We know that a sizable portion of ad spend goes to third-party data brokers. If Apple continues to erode the capabilities of these brokers, more money may start to flow back into publishers’ coffers.
So while nearly every external force — from Apple, to Google Chrome, to governments at nearly every level — is pushing us all to reimagine advertising in a privacy-first way, we’re still just focused on incremental changes that can easily falter with a single wave of Tim Cook’s hand. It’s time to move on to better things. It’s time advertisers and publishers advance their own stance on privacy in order to reduce Apple’s outsized voice and beat it at its own game.

Chris Keune is VP Data Product and Insights at Kargo.

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.

© 2023 VentureBeat. All rights reserved.
"The big performance marketing shakeup of 2021, and the trends for 2022 (VB Live) | VentureBeat"
"https://venturebeat.com/2021/11/03/the-big-performance-marketing-shakeup-of-2021-and-the-trends-for-2022-vb-live"
Presented by yellowHEAD

Look ahead to the most important trends in ecommerce performance marketing, from optimizing first-party data to implementing the newest techniques for SEO, paid acquisition, ad bidding, voice search, and more, in this VB Live event. Register here for free.

There was an earthquake this year in the performance marketing industry, says Asaf Yanai, VP of growth at yellowHEAD. As most now know, the seismic activity came courtesy of Apple when the company introduced new privacy regulations and changes to third-party cookies. “It caught most advertisers by surprise,” Yanai says. “Within almost a day, their entire campaigns became obscured. They could not get the same level of data as they had before.
They could not optimize their campaigns.” First came the shock, but the big challenge has been trying to figure out what comes next — how do you optimize in this brand-new landscape where data has become tremendously limited, and how do you fill in those data gaps? Advertisers have been forced to come up with innovative and creative ways of completing this data, and many have shifted to alternative platforms, like Google and Apple Search Ads. The change has shaken up all industry predictions for 2022, the first year that can truly be called post-COVID, Yanai says. Here’s a look at some of the top trends that are emerging. What marketers need to know for 2022 The need for SEO brilliance. With performance campaigns being undercut by a 20-to-40% reduction in data, SEO will be taking a much more central role with marketers and advertisers. “Now they understand that they can’t solely rely on performance campaigns, but they also understand the value of connectivity and search, the value of organic traffic and brand awareness, brand recognition, and brand positioning,” Yanai says. Marketers need to figure out how to become more discoverable and more searchable, and appear more in organic search results. Even with the world opening up, people are still staying close to home and that means they’re looking for more ways to engage with brands. They’re not going to stores as frequently, so search engines, news portals, and social media have become the new billboards. Plus, consumers have more than just one device at their disposal at home. They’re not just on their phones, but using their smart TV, iPad, laptop, and more — sometimes all at once. Adapting to ad bidding changes. The most important thing to be mindful of is that ad prices have gone up as traditional or offline channels have been reduced. 
With the increase in cost, companies have to be more mindful than ever of how they track, optimize, and maintain a good level of performance with their ad campaigns, because it’s going to be a bigger challenge than ever. “Every single startup, every small brand, and small app developer can now launch a full-scale campaign globally on Facebook in an hour,” Yanai says. “They can start bidding wars with all of the other big brands. You need to innovate and become more creative to catch the attention of your users, to become a sticky brand.” It will be essential to be innovative in your creatives, he emphasizes, and find new and engaging ways to meet your audience in a unique way that differentiates you from your competitors. The rise of social ecommerce platforms. Nowadays, even TikTok has an ecommerce platform, while Facebook and Google have significantly increased their internal shopping capabilities in the past year. And every social channel is headed that way, from Snapchat to Pinterest and LinkedIn. “Customers are more keen to try alternative commerce platforms like Facebook and TikTok,” Yanai says. “In the past, they used to go to Amazon. Now it’s also fine by them to find all of those products in their social media. Marketers need to take advantage of this.” Competition is fierce on the traditional commerce platforms, he points out. You’re swimming in a sea of your own kind, and getting ahead of the school is a challenge. On Facebook, you compete with a broad array of brand types, giving you the opportunity to make a splash in your own sphere. “If you’re able to innovate, if you’re able to attract the attention of users, if you’re able to come up with amazing creatives, and a different approach to how users will look at your products, then you’ll have a very significant edge on the social platforms,” he says. “The platforms are only growing. The commerce capabilities are only becoming stronger. 
The competition between social platforms and commerce platforms is yet to come. You want to be there when the revolution happens.” Yanai emphasizes that companies should put their brand awareness, brand recognition, and capabilities into clear visual elements. Big media consolidation. A few years back, Facebook, Messenger, Instagram, and WhatsApp were separate companies, but now they’re controlled by a single media conglomerate. The same thing happened with Google enveloping YouTube, AdMob, Google Play and so on, all of which impacted ad campaigns. This kind of consolidation is going to continue to happen, and marketers need to stay ahead of the news, and how these big mergers are going to impact their ad campaigns before they’re caught short-footed. The rise of black box AI. AI can be a powerful marketing tool, but many media channels are embracing what Yanai calls the black-box automated ad campaign. In other words, companies are taking the creative, bids, and targeting and automating everything from there. “I think that can be dangerous, because what’s really happening is that we as marketers are losing control over our campaigns, and a lot of control over the specific media channels that we can use,” he says. “It requires us to either use third-party tools that can give us an additional layer of data to compete with this black box, or to experiment, experiment, experiment — and try to track internally how those black boxes are working, try to predict what’s going to be the next result.” For more performance marketing predictions for 2022, best practices to handle the shakeups of 2021, and more, don’t miss this VB Live event! Register here for free. 
You’ll learn about:
- The latest ad bidding strategies
- Strategies to strengthen SEO in spite of Google’s most recent algorithm changes
- How to amplify sales by leveraging shopping features on multiple platforms
- The latest trends in D2C/DTC ecommerce

Presenters:
- Gretchen Saegh-Fleming, Chief Commercial Officer, Hydrow
- Rob Webb, SVP, Growth, Tonal
- Ren Lacerda, Head of SEO, CarMax
- Gal Bar, co-CEO and Founder, yellowHEAD
- Franco Folini (moderator), Digital Marketing Instructor, UC Berkeley Extension
"Roundup: Oracle's feast, Wired's dig, Simulscribe and more | VentureBeat"
"https://venturebeat.com/2007/03/02/roundup-oracles-feast-wireds-dig-simulscribe-and-more"
Roundup: Oracle’s feast, Wired’s dig, Simulscribe and more

Matt Marshall

Here’s the latest action:

Oracle’s play in business intelligence — Oracle’s acquisition of Santa Clara’s Hyperion Solutions for $3.3 billion, announced yesterday, gives it a leg up in the area of business intelligence. Oracle is hoping to overtake German competitor SAP. This is just the latest course of an impressive feast (30 deals, $20 billion in three years) to get there. Jeff Nolan, a former executive at SAP, suggests Oracle’s latest move is a good one, as Hyperion is the best in the intelligence sector (the Merc tries to explain the sector a bit). Business Objects and Cognos are relatively weak competitors, he says. Mercury News story

Wired nails Digg — Wired has published a piece explaining how a reporter paid Diggers to get a pretty lame, poorly written blog post featured prominently on Digg. They did it using a service called User/Submitter.
The eye-opening piece tells how the phony blog was — surprisingly — voted for even by people who weren’t paid (because those who digg early on stories that turn popular become more “reputable” in the Digg system, and so Diggers blindly digg away without taking time to read them). One commenter wondered: “How the hell did this get to the front page?” Of course, the parent of Wired, you’ll recall, is Conde Nast, which owns a competitor to Digg, called Reddit. This is particularly harmful for Digg, because its management has said gaming can’t happen.

Silicon Valley start-up temperature still feverish — Last year, the new owners of the 150,000-square-foot Plug&Play building in Sunnyvale complained they weren’t filling up their office space quickly enough. This week, however, we stopped by for a Stanford business plan competition cocktail event, and they said they now have a waiting list. They’re housing 90 companies. (Plug&Play doubles as an incubator for new Silicon Valley start-ups. See the latest story about them here.)

Google strikes hundreds of deals for YouTube — Frustrated by its inability to sign deals with big publishers of video and music content, Google is signing deals with hundreds of smaller ones. And not-so-small ones: It just signed a deal to create a news channel and two entertainment channels with the BBC.

Joost, meanwhile, signs deal with Jump Media — Details here, in a Time story about Joost.

Buzzy transcription services — We’re hearing folks chatting about Simulscribe, a New York company, and Spinvox, of the UK, because of the useful service they provide for busy people — transcribing voice mail and emailing it to you along with a wav file so that you can play the original voice mail if you want. The price for Simulscribe is $9.95 per month for 25 messages and then $0.25 per message after that. (Via Fred Wilson, who is the latest convert.)
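The Simulscribe pricing quoted above is easy to sanity-check with a few lines of code; the function name is ours, and only the plan numbers from the item are assumed.

```python
def simulscribe_monthly_cost(messages: int, base: float = 9.95,
                             included: int = 25, overage: float = 0.25) -> float:
    """Bill under the quoted plan: $9.95 covers the first 25 messages,
    then each additional message costs $0.25."""
    extra = max(0, messages - included)
    return round(base + extra * overage, 2)

assert simulscribe_monthly_cost(25) == 9.95
assert simulscribe_monthly_cost(40) == 13.70  # 15 extra messages at $0.25
```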
Omidyar and Jeff Bezos invest in O’Reilly’s new VC fund — We mentioned the new O’Reilly AlphaTech Ventures venture capital fund yesterday. This document reveals that Pierre Omidyar (of eBay fame) and Jeff Bezos (Amazon) are backers. (Via Valleywag)
"Typeform raises $35 million to grow its user-friendly survey platform | VentureBeat"
"https://venturebeat.com/2017/09/11/typeform-raises-35-million"
Popular online survey platform Typeform has raised $35 million in a series B round of funding led by General Atlantic, with participation from existing investors, including Index Ventures, Point Nine, and Connect Ventures. Founded out of Barcelona in 2012, Typeform is setting out to create a “conversational approach to data collection,” as the company puts it. In real terms, Typeform is a cross-platform app designed to let its users create surveys and other kinds of forms that are simple, user-friendly, and easy on the eye. It’s also designed for the mobile-first, touchscreen-oriented user, with surveys presented in a format that’s easy to follow and interact with.
Above: Typeforms In terms of pricing, Typeform follows a freemium model, with a basic free plan that limits the number of responses and fields on a given form. Users can then pay a monthly subscription to unlock unlimited fields and responses, as well as additional features. Prior to now, Typeform had raised around $17.5 million in equity funding, the bulk of which arrived via its series A round two years ago. With its latest tranche of financing, the company said it will double down on efforts to grow its platform and global aspirations. “We are proud to be the first company to transform the online data collection space by creating conversational forms,” noted Typeform co-CEO and cofounder Robert Muñoz. “But now we’d like to take things further. Our partnership with General Atlantic will enable us to continue to bring world-class technology to our customers while further empowering our community of developers by bridging the gap between data collection and customer interaction.” The funding announcement also coincides with the launch of a new developer portal that serves as an expansion to Typeform’s existing developer platform and ushers in greater access to the company’s API. The expansion is designed to help companies use Typeform in conjunction with other products. One example Typeform mentions is a new integration with email marketing service MailChimp that will allow companies to collect emails in MailChimp lists from a Typeform survey. Developers will also be able to create “more intuitive, custom-made survey forms than ever before,” the company said. “Digital forms are transitioning from a simple data-collection tool into an integral part of the customer engagement journey,” added General Atlantic growth investor Chris Caulkin, who will now also join Typeform’s board of directors. 
“With its developer-targeted go-to-market strategy and versatile, highly customizable product, we believe there is a significant opportunity for Typeform to continue to grow.”
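The MailChimp integration described earlier amounts to pulling email answers out of Typeform response data. Below is a sketch of that extraction; the payload mirrors the general nesting of Typeform's Responses API, but the exact field names here are simplified assumptions for illustration.

```python
def extract_emails(payload: dict) -> list:
    """Collect email answers from a Typeform-style responses payload.

    The items -> answers -> type/email nesting below is modeled on the
    shape of Typeform's Responses API, simplified for illustration.
    """
    emails = []
    for item in payload.get("items", []):
        for answer in item.get("answers", []):
            if answer.get("type") == "email":
                emails.append(answer["email"])
    return emails

sample = {
    "items": [
        {"answers": [{"type": "email", "email": "respondent@example.com"},
                     {"type": "text", "text": "Great survey"}]},
        {"answers": [{"type": "email", "email": "subscriber@example.org"}]},
    ]
}
# These addresses could then be pushed into a MailChimp list.
assert extract_emails(sample) == ["respondent@example.com", "subscriber@example.org"]
```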
"Stripe wants to modernize commerce for the internet age | VentureBeat"
"https://venturebeat.com/2017/12/29/stripe-wants-to-modernize-commerce-for-the-internet-age"
Above: Stripe chief business officer Billy Alvarado

Back in 2010, Patrick and John Collison cofounded Stripe, which quickly gained a following thanks to how easily it let developers integrate a payment system with a few lines of code, something that financial tech giants like PayPal lacked at the time. More than seven years later, the Y Combinator alumnus has matured; its mission of helping developers process online payments has moved beyond transactions to tackle an even more important problem for entrepreneurs: setting up an online business.

Modernizing the internet for commerce

The internet has long had an “infrastructural deficiency,” according to Stripe’s chief business officer, Billy Alvarado. While businesses long to establish themselves online, optimizing the experience has been difficult.
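Those "few lines of code" centered on a single charge call. The sketch below mirrors the shape of Stripe's 2017-era Charge API without making a network request; the helper functions and test token are illustrative, and a real integration would go through the official stripe client library.

```python
def to_minor_units(amount: float, currency: str = "usd") -> int:
    """Stripe-style APIs take amounts as integers in the currency's
    smallest unit (cents for USD), avoiding floating-point billing errors."""
    zero_decimal = {"jpy", "krw", "vnd"}  # currencies with no minor unit
    if currency.lower() in zero_decimal:
        return round(amount)
    return round(amount * 100)

def build_charge_params(amount: float, currency: str, token: str) -> dict:
    # Mirrors the arguments of the 2017-era stripe.Charge.create() call;
    # the network request itself is deliberately omitted from this sketch.
    return {
        "amount": to_minor_units(amount, currency),
        "currency": currency,
        "source": token,  # "tok_visa" is Stripe's standard test token
    }

params = build_charge_params(19.99, "usd", "tok_visa")
assert params == {"amount": 1999, "currency": "usd", "source": "tok_visa"}
```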
“The original architects of the internet knew that commerce needed to be woven into its fabric, but, for whichever reason, [they] didn’t quite get around to implementing it. What resulted was that — almost 30 years into the web — people were confronted with dated finance industry tools and legacy infrastructure, an economic system that simply wasn’t native to the internet.” “It took more than 20 years for ecommerce to become a $2 trillion global industry,” Alvarado explained. “In the next three years, that’s expected to double to $4 trillion. At the same time, the cost of starting an online business has dropped three orders of magnitude, while the amount of time it takes to expand internationally has halved… As more people come online and smartphones become widely accessible, the opportunity to create more businesses, increase economic activity, and ultimately create more jobs also grows exponentially.” Accepting online payments may seem simple now, but the process involves many complications, such as numerous payment methods, off-site redirects, checkout flows, international sales requirements, multiple languages, and the myriad rules within each country. All of this can be a headache for any fledgling business owner — it’s tough enough planning a brick-and-mortar operation, but scaling it to the global arena comes with an excessive amount of stress. It’s easy to imagine that a large part of Stripe’s customer base comes from Silicon Valley, but Alvarado said that nearly half of all its U.S. merchants are businesses based in the Midwest and the South, and it has a large number of international users. “Tithely, a global platform for churches based in Nashville, Tennessee, used Stripe Connect to expand into Hong Kong, Singapore, New Zealand, and the United Kingdom, in a matter of months,” he shared. “There’s a misplaced belief that innovation in the U.S. 
is concentrated along the coasts.” While traditional tech companies are quicker to adopt new technologies, the question is how Stripe can reach out to the mainstream business users — those in the Heartland. Above: The Missouri Star Quilt Company is using Stripe to transform its brick-and-mortar business into a multimillion-dollar online business that’s saving the small town it’s in. Alvarado responded by citing the Missouri Star Quilt Company as an example, explaining that this brick-and-mortar store has used online commerce to “revive its local economy.” “What started as a quilting supplies store in… Hamilton, Missouri, has grown into an Amazon for Quilting,” he said. “About 90 percent of its multimillion-dollar business is now online, and it uses Stripe to sell to quilting aficionados all over the world. It’s now the largest employer in Caldwell County, with hundreds of employees, physical stores, and even restaurants.” This echoes what we’ve heard from others in the financial technology space over the years. Square, one of Stripe’s key competitors in certain spaces, has also been touting ways its offerings can boost economies and improve the odds for small businesses. Summing up Stripe’s position, Alvarado said: “Stripe’s mission is to fix this incomplete part of the internet’s tooling, to build the economic infrastructure for the internet. We’re building the tools, APIs, and the platform to help businesses accept payments, and for new kinds of companies to build previously impossible products and services at internet scale — the kinds of things that you couldn’t do 10 years ago.” AWS dreams While he didn’t disagree with those who describe Stripe as a payment processing service, Alvarado stressed that there’s much more to the company, including a suite of software and tools to help start and operate an online business. We’re told that Stripe only dedicates 10 percent of its documentation to payments. 
The majority of the company’s focus is on things businesses will have to deal with when “operating at scale,” including security, managing compliance, identity verification, anti-money laundering tools, and more. In fact, Stripe would rather be considered the Amazon Web Services (AWS) of the financial tech space than a traditional payment processor. Just as AWS has become the technological source for everything companies need to get up and running, Stripe wants to be a major player in the commerce space. At the GeekWire Summit in October, cofounder John Collison offered a bit more commentary on this aspiration, saying Stripe wants to give companies basic but “extremely useful services” to start with, and build from there. The company’s Atlas product is perhaps its most alluring offering for business owners outside of the United States, and it could prove the foundational product that Stripe needs to truly stake its claim. Launched nearly two years ago, the service helps businesses based internationally incorporate in the U.S. Today, “thousands of founders” across 130 countries, including the U.S. , have signed up for the service. Stripe CEO Patrick Collison accompanied Barack Obama on his historic visit to Cuba in 2016, when the president became the first U.S. head of state to visit the country in 88 years. In conjunction with the trip, and perhaps as a symbol of warming relationships between the U.S. and Cuba, the company announced that Stripe Atlas would be available to Cuban entrepreneurs , giving them access to a U.S. bank account and making it possible for them to establish a business entity in the country. Although this was a historic moment for the company, the United States’ changing policy over Cuba could impact Stripe’s support. 
Above: Stripe cofounder and CEO Patrick Collison Perhaps a notable sign of Atlas’ prominence in Stripe’s arsenal is the hiring earlier this year of Sarah Heck , who previously led the global entrepreneurship outreach efforts for the Obama Administration. Besides Atlas, Stripe has other products available around the commerce space, not specifically dealing with payments. Alvarado mentioned Radar (fraud management), Sigma (business analytics), Connect (global payment management), and the most recent addition, Elements (ecommerce checkout solution) as examples of the company’s full-stack offering. Above: Former Uber engineer Susan Fowler has joined Stripe to lead its engineering-focused publication Increment. Stripe has also expanded into adjacent areas that businesses might find necessary as they establish themselves not only online, but also globally. In the past few years, it has acquired several startups, including onboarding, payments, and 1099 compliance service Payable , the developer toolkit formerly known as Tonic (since rebranded to RunKit), and entrepreneur resource site IndieHacker. It has also enlisted the help of Susan Fowler , the former engineer at Uber whose blog post toppled the CEO reign of Travis Kalanick. She’s now overseeing the production of an engineering publication called Increment to share tips and advice on building software systems at scale. While there are obvious integrations the aforementioned startups can make within Stripe’s products, they’re helping the company capitalize on areas that entrepreneurs will likely come across when doing business, such as building mobile apps, managing their employees and independent contractors, and looking for advice and resources on how to be a better business owner. 
The desire to mimic AWS’ success specifically in the commerce space should probably be viewed with some skepticism for now, especially since it’s unclear whether developers and business owners really view the company as the de facto source for all their ecommerce needs. The marketplace is crowded with players, including Square, PayPal, Clover, and Shopify. However, Stripe has many of the building blocks it needs to meet this ambitious goal. “While our product stack has deepened to meet the needs of this new generation of businesses, our mission remains the same: We want to reduce the barriers to starting and operating a business, regardless of location, and accelerate the internet economy,” Alvarado explained.
The business evolution
Above: Stripe’s office
For nearly eight years, Stripe has been pushing to reshape how companies conduct business on the internet. We asked if Alvarado’s team had noticed any trends over that period.
Complex business models
“The internet has given rise to new business models that are highly complex to operate at scale,” he said. “Think of multi-sided marketplaces that connect buyers and sellers, or SaaS companies that have to manage several subscription plans at once.” Alvarado claimed businesses adopt Stripe because they don’t just need help with processing payments, but want someone with expertise in the infrastructure and technology to power these complex systems.
Developer resource allocation
Developers may be in high demand, but their time is limited. So companies have to be deliberate about how that time is allocated: will it be devoted to maintenance work (what Alvarado described as “technical debt”) or the core product? “Developers today expect to use tools built to maximize their leverage. With online documentation and cloud-based technology, the standard is to spin up new servers or tools within minutes,” he said. “Stripe has always put the needs of developers first. 
This philosophy has heavily shaped our approach to developer tooling… We’re always asking ourselves: How can we enable developers to build better products for their users, faster? How can we help our users get the most value out of the limited developer resources they have available?”
Startups want to be enterprises, enterprises want to be startups
Alvarado’s third observation is perhaps not unique: startups eventually look to go up-market, targeting larger companies as potential customers, while those in the enterprise envy the agility of startups and seek to move just as fast. In Stripe’s case, “we’re placing more focus on how we can become a revenue engine for our users’ businesses, creating value not just for developers, but for their finance and accounting teams. This isn’t only pushing us upstream, but up the stack too.”
What’s next?
Above: Stripe
Amid all of the moves Stripe has made, it’s far from satisfied with its progress. “We’ve yet to fully realize the internet’s potential as an economic platform,” Alvarado remarked. “Stripe wants to change that. That entails building the tools, the partnerships, and the infrastructure that will lower barriers to entry for upstarts, and accelerate the next wave of great technology companies.” He didn’t mention an initial public offering, but based on comments Patrick Collison made last spring, we shouldn’t expect one anytime soon. But there’s likely great interest should Stripe look to public money (it’s raised $440 million to date and is valued at $9.2 billion). Still, that AWS dream has yet to be realized. While competitors PayPal and Square have moved into dispensing capital to business customers, Stripe isn’t interested. 
Instead, Alvarado said that Stripe will focus on helping users “scale and be more successful” and suggested that no matter what businesses need, his company will develop the tools and technology needed to “keep them ahead of the curve.” “We want to help startups innovate as fast as they can write code, and help established businesses get the reliability they expect out of a financial partner while still innovating at the speed of software,” he stated. “We’ve been chipping away at this challenge for over seven years, but when we step back and look at the bigger picture, Stripe is really only just getting started.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15358
2019
"European challenger bank N26 launches in the U.S. | VentureBeat"
"https://venturebeat.com/2019/07/11/european-challenger-bank-n26-launches-in-the-u-s"
"European challenger bank N26 launches in the U.S.
N26: Mobile-first bank founded in Germany
N26, the European mobile banking startup with more than $500 million in funding from big-name backers including Peter Thiel’s Valar Ventures and Tencent, is finally launching in the U.S. The German company first revealed plans to expand to the U.S. back in 2017, a rollout that was initially mooted for 2018 before being pushed back to 2019. Today, N26 is kicking off the first phase of its beta launch, starting with the 100,000 customers who signed up for the U.S. waiting list. By way of a quick recap, the Berlin-based startup was founded in 2013 as Number26, before rebranding as N26 in 2016 when it obtained its own banking license. 
The pitch behind N26 is one of speed and efficiency — it’s available only on mobile and the web, and it promises that anyone can open an account within minutes of applying. Through the app, users can see all their payments and activities in real time and receive notifications for all incoming and outgoing transactions. Each transaction is automatically categorized by type, and users can set their own daily spending limits.
Above: N26 Account
N26 has been gradually expanding into new markets following its debut in Germany and Austria in 2015, and it now serves 3.5 million customers across Europe. The U.S. is its first market outside of Europe, and it plans to launch the service in Brazil shortly too.
Show me the money
With brick-and-mortar bank branches closing across the U.S., mobile-native services such as N26 could be well-positioned to capitalize on the shifting banking landscape. It is one of a number of so-called “challenger banks” already in operation, such as Chime, which recently raised $200 million at a $1.5 billion valuation. Other players include Zero, which raised a $20 million tranche of funding a few months back; Varo Money, which raised $45 million last year; and Simple, which was acquired by Spain’s BBVA for $117 million back in 2014. Fresh from a $300 million raise, which bestowed upon it a valuation of $2.7 billion, N26 is well-financed to challenge the challengers, so to speak. It’s also the latest in a line of European startups to ramp up their ambitions stateside — a few weeks back, money transfer service TransferWise introduced its debit card to the U.S. At launch, N26 will offer a standard checking account and debit card, with no account maintenance fees or minimum balances required. “We will eliminate the frustration of visiting branches, waiting on the phone, and paying fees for basic services that should already be included,” said N26’s U.S. CEO Nicolas Kopp. 
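The app behavior described above, auto-categorizing each transaction by type and enforcing a user-set daily spending limit, can be sketched in a few lines. This is an illustrative toy model only; the merchant keyword map and limit rule are assumptions for the example, not N26's actual logic.

```python
# Toy sketch of transaction auto-categorization and a daily spending limit.
# The keyword-to-category map is invented for illustration.

CATEGORIES = {
    "starbucks": "Food & Drink",
    "shell": "Transport & Car",
    "netflix": "Subscriptions",
}

def categorize(merchant: str) -> str:
    """Assign a spending category based on the merchant name."""
    for keyword, category in CATEGORIES.items():
        if keyword in merchant.lower():
            return category
    return "Miscellaneous"

def within_daily_limit(todays_spend: float, amount: float, limit: float) -> bool:
    """Check whether a new card payment stays within the user's daily limit."""
    return todays_spend + amount <= limit

print(categorize("STARBUCKS #1234"))          # Food & Drink
print(within_daily_limit(80.0, 30.0, 100.0))  # False
```

A real implementation would of course run server-side against live card authorizations; the point is only that both features reduce to a lookup plus a running total.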
During the beta phase, the company said that it will waive its ATM withdrawal fees, though the ATM operator may still charge something. And when it launches fully to the public, N26 will make the first two ATM withdrawals each month free, after which it will cost $2 per withdrawal plus however much the ATM operator charges. It’s also worth noting that N26 will make money through the usual interchange fees that banks charge to the merchant. Later this summer, N26 plans to launch its Metal premium tier accounts, which come with additional perks and a stainless steel Mastercard. The company hasn’t revealed pricing for that, but using the European figure as a starting point, we can perhaps expect pricing in the $15-$20 per month range.
Above: N26: Account types
N26 has been setting out to create a modern bank built from various fintech technologies and services. A few years back, N26 partnered with TransferWise, giving its customers access to cheap international money transfer tools directly inside the N26 app. Later, it launched a new investment product in conjunction with Vaamo and a savings account with Raisin. The company hasn’t indicated whether it plans to roll out similar offerings in the U.S., but if it did, such revenue-sharing deals would be another obvious conduit for it to earn money. While N26 already has its own banking license in Europe, for its U.S. foray it’s partnering with Axos Bank — previously known as BofI Federal Bank — to operate legally. The company told VentureBeat that it may pursue its own U.S. banking license in the future. “The U.S. launch is a major milestone for N26 to change banking globally and reach more than 50 million customers in the coming years,” added N26 cofounder and CEO Valentin Stalf. “We know that millions of people around the world and particularly in the U.S. are still paying hidden and exorbitant fees and are frustrated by poor banking experiences. 
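The post-beta fee schedule described above (first two ATM withdrawals per month free, then $2 each, plus whatever the operator adds) is simple enough to express directly. A minimal sketch, using only the figures stated in the article:

```python
# Sketch of N26's stated post-beta ATM fee schedule: first two withdrawals
# each month are free, then $2 per withdrawal, plus any operator surcharge.

FREE_WITHDRAWALS_PER_MONTH = 2
N26_FEE_AFTER_FREE = 2.00

def withdrawal_fee(withdrawals_this_month: int, operator_surcharge: float = 0.0) -> float:
    """Total fee for the next withdrawal, given how many were already made this month."""
    n26_fee = 0.0 if withdrawals_this_month < FREE_WITHDRAWALS_PER_MONTH else N26_FEE_AFTER_FREE
    return n26_fee + operator_surcharge

print(withdrawal_fee(0))                          # 0.0 (first withdrawal of the month)
print(withdrawal_fee(2, operator_surcharge=3.0))  # 5.0 (third withdrawal, $3 operator fee)
```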
N26 will radically change the way Americans bank as it has for so many people throughout Europe.” "
15359
2020
"Pigment raises $25.9 million to 'reboot the spreadsheet' with next-gen business forecasting | VentureBeat"
"https://venturebeat.com/2020/12/03/pigment-raises-25-9-million-to-reboot-the-spreadsheet-with-next-gen-business-forecasting"
"Pigment raises $25.9 million to ‘reboot the spreadsheet’ with next-gen business forecasting
Pigment co-CEOs Eléonore Crespo & Romain Niccoli
Pigment, which is setting out to “reboot the spreadsheet” and give businesses a “multidimensional view” of their data with its platform, has raised $25.9 million in a series A round of funding led by Blossom Capital. Founded out of Paris in 2019, Pigment touts itself as a business forecasting platform that circumvents the limitations of “error-prone” spreadsheets and inflexible software to bring a “new standard” to planning and modeling. The platform is still in closed beta, though the company claims to have secured some undisclosed high-profile clients, including a major European bank and pre-IPO startups. 
Pigment’s founding team includes co-CEOs Eléonore Crespo, a former financial analyst at Google who later joined Index Ventures as an investor, and Romain Niccoli, cofounder and former CTO of advertising tech giant Criteo.
Incumbents
Pigment is entering a space occupied by some long-standing incumbents, including Anaplan, Workday (which acquired Adaptive Insights in 2018), and trusty old Microsoft Excel. But alongside its lead investor, Pigment managed to attract a number of notable angel investors, including Paul Melchiorre, a former Anaplan exec who served as the company’s interim CEO before its 2018 IPO, former Workday CTO David Clarke, and the respective CEOs of esteemed enterprise companies Dataiku and Datadog. Pigment’s founders say they are setting out to address three core problems in the planning and reporting process: an inherent lack of transparency and insight into the data that underpins a business; the vast amount of data that needs to be processed; and the “static” way many current tools present that data, making it hard for anyone in the company to change or manipulate it. Pigment wants to help companies save time and money and sidestep the mistakes that hinder many organizations while adopting a more visual approach that allows users to tinker with various parameters and forecast continuously. This includes bringing potential future scenarios to life through simulations, charts, and models and enabling real-time strategic decisions based on that data.
Above: Pigment cohorts
Above: Pigment promises a “visually driven” experience
But Pigment’s core selling point is arguably that it allows anyone in a company, not just a limited few in finance, to create forecasting applications, modify the parameters, and see where the company is heading. 
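The kind of assumption-driven modeling described above, where tweaking one parameter regenerates every downstream output, can be illustrated with a toy forecast. The growth model and figures here are invented for the example and have nothing to do with Pigment's actual engine:

```python
# Toy scenario model: one growth assumption drives all downstream outputs,
# so comparing scenarios is just a matter of changing that parameter.

def revenue_forecast(start_revenue: float, monthly_growth: float, months: int):
    """Project monthly revenue under a single compound-growth assumption."""
    out = []
    revenue = start_revenue
    for _ in range(months):
        revenue *= 1 + monthly_growth
        out.append(round(revenue, 2))
    return out

# Two scenarios differ only in the growth assumption.
base = revenue_forecast(100_000, 0.05, 3)
bull = revenue_forecast(100_000, 0.10, 3)
print(base)  # [105000.0, 110250.0, 115762.5]
print(bull)  # [110000.0, 121000.0, 133100.0]
```

The pitch of tools in this space is essentially to make this loop collaborative, governed, and visual rather than buried in one analyst's spreadsheet.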
Crespo said Pigment enables users to create an app from scratch in days versus months. “We believe modeling should be easy,” Crespo told VentureBeat. “In competitors’ platforms, only a handful of people have creation rights in the tool to build models. In Pigment, everyone can create an application and build their own model. We bring the right governance so that data can be shared easily while preserving access rights. This makes Pigment the proper solution for all business users within a company rather than just for the central finance team.” With $25.9 million in the bank, the company said it is well-financed to scale up operations and engineering teams ahead of a full public launch next year. Other institutional investors in the series A round include New York-based FirstMark Capital and Frst. "
15360
2021
"Cube Software challenges legacy financial planning and analytics tools | VentureBeat"
"https://venturebeat.com/2021/03/09/cube-software-raises-10-million-to-take-on-legacy-financial-planning-and-analytics-tools"
"Cube Software challenges legacy financial planning and analytics tools
Cube: A FP&A platform "for modern finance teams"
Cube Software, a New York-based startup that’s building a next-gen financial planning and analysis (FP&A) platform for modern finance teams, has raised $10 million in a series A round of funding to disrupt what it calls the “decades-old” enterprise performance management (EPM) industry. Founded in 2018, Cube has entered a space occupied by numerous legacy players, including publicly traded Anaplan; Adaptive Insights, which Workday acquired for north of $1.5 billion in 2018; Hyperion, which Oracle bought for more than $3 billion in 2007; and good old-fashioned spreadsheets. Cube is looking to combine the power of a modern software-as-a-service (SaaS) platform with the familiar flexibility of spreadsheets. 
“Within Finance, FP&A is a mission critical function that relies on spreadsheets alone in 80-90% of companies,” Cube founder and CEO Christina Ross told VentureBeat. “The problem is that spreadsheets are manual and error prone, while legacy FP&A software requires users to learn an entirely new ecosystem outside of spreadsheets to do the same job.”
Above: Cube’s FP&A platform
And so Cube is setting out to make FP&A software more accessible, with the ability to onboard users in days versus months, while connecting directly into existing spreadsheets and other data sources to deliver analytics and insights. In other words, Cube wants to supplement, rather than replace, spreadsheets. “We believe that the spreadsheet is not your enemy,” Ross added. Companies can connect Cube to multiple data sources — using Cube’s API or via prebuilt integrations — to consolidate all their data automatically. They can also model how changes to key assumptions may impact their outputs, create customizable reports based on reusable templates, and integrate with any spreadsheet, including Excel and Google Sheets, with support for bi-directional data sharing. So while Cube is a full-fledged EPM platform in its own right, replete with its own built-in spreadsheet-style tooling, it allows users to work in whatever environment they’re comfortable with, including Excel.
Above: Cube: Sharing data with Excel spreadsheet
There has been a flurry of activity across the EPM and business forecasting sphere of late, with Jedox locking down $100 million in financing and French upstart Pigment securing $26 million. Prior to now, Cube had only raised a small $5 million seed round of funding, but it has amassed a fairly impressive roster of customers, including Japanese ecommerce giant Mercari and freshly minted DevOps unicorn Harness. 
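The consolidation step described above, pulling rows from several sources (an ERP export, a spreadsheet, and so on) into one unified view, is straightforward to sketch. The source data and account names below are invented for illustration; this is not Cube's API, just the shape of the problem:

```python
# Toy consolidation: merge (account, amount) rows from any number of
# sources into one unified ledger keyed by account. Data is invented.
from collections import defaultdict

erp_rows = [("Revenue", 120_000), ("COGS", -45_000)]
sheet_rows = [("Revenue", 15_000), ("Marketing", -8_000)]

def consolidate(*sources):
    """Sum amounts per account across all sources."""
    totals = defaultdict(float)
    for source in sources:
        for account, amount in source:
            totals[account] += amount
    return dict(totals)

print(consolidate(erp_rows, sheet_rows))
# {'Revenue': 135000.0, 'COGS': -45000.0, 'Marketing': -8000.0}
```

The hard parts products like this actually sell, mapping inconsistent account names, refreshing sources automatically, and pushing results back to spreadsheets, sit on top of this basic merge.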
With a fresh $10 million in the bank, the company said it’s tripling the size of its product and engineering teams. “Much of this is to build out our platform for enterprise scale and continue to add integrations for our growing customer base,” Ross said. "
15361
2021
"DataRails, which automates financial reporting for Excel users, nabs $18.5M | VentureBeat"
"https://venturebeat.com/2021/04/20/datarails-nabs-18-5m-to-automate-financial-reporting-for-excel-users"
"DataRails, which automates financial reporting for Excel users, nabs $18.5M
DataRails dashboard
DataRails, an Israeli startup that wants to help businesses understand their financial data better — and more quickly — has raised $18.5 million in funding as it looks to double down on its enterprise integrations and invest in its AI capabilities. The raise comes amid a flurry of activity across the financial planning and analytics sphere, with OneStream this month raising $200 million at a $6 billion valuation, shortly after Jedox locked down more than $100 million. “Businesses typically spend between 10 to 14 days every month on manually gathering data from different sources and bringing it together to understand the current status of the organization and try to predict future performance,” DataRails cofounder and COO Eyal Cohen told VentureBeat. 
“Despite their efforts, the results tend to be difficult to analyze, error-prone, and lacking insights. DataRails shortens the time spent on this to a few hours and allows organizations to get better insights into their business.”
Legacy
Founded in 2015, DataRails has entered a space that includes legacy players such as Anaplan; Hyperion, which Oracle bought for more than $3 billion in 2007; and Adaptive Insights, which Workday acquired for north of $1.5 billion in 2018. But the biggest incumbent DataRails is up against is arguably trusty old Excel. This is particularly true for small to medium-sized businesses, according to Cohen, as they use Microsoft’s omnipresent spreadsheet software for all their month-close and management reports, budgets, forecasts, and more. “Although there are well-established solutions in the FP&A [financial planning and analysis] market, more than 80% of small and medium-sized organizations conduct their routine processes manually, using Microsoft Excel and PowerPoint,” Cohen said. “The reason that this persists as the status quo is due to the fact that existing solutions require redesigning existing models and processes, leaving behind years’ worth of invested time in analyses, models, and reporting templates.” Excel’s persistence is one of the reasons we’ve seen a slew of newer players enter the FP&A market — such as Cube Software and Vena Solutions — taking a more modern approach that works on top of familiar spreadsheets. DataRails takes a similar path insofar as it seeks to supplement — rather than replace — Excel. It has created what it calls an “elastic database technology,” one that can transform spreadsheets into a structured database. Underpinning this are AI and machine learning algorithms that can take both structured and unstructured spreadsheet data (e.g. 
cell values, formulas, formats, and macros) to develop a “logical, centralized database.” “With this technology, DataRails automates existing Excel-based processes by leveraging existing models and templates to create one unified database,” Cohen said. “DataRails combines the flexibility of Excel with the power of a cloud-based database and a web-based dashboard.” DataRails’ customers , which include businesses from across the medical, transport, manufacturing, and cybersecurity spheres, can access the platform as a layer directly on top of Microsoft Excel and PowerPoint, which perhaps is how a financial analyst is most likely to use it. But it can also be accessed through a dedicated web interface, where management, executives, and board members would be more inclined to go to access data and insights. Above: DataRails dashboard It’s worth noting that DataRails can also glean data from sources such as enterprise resource planning (ERP), customer relationship management (CRM), and human resource information systems (HRIS), including Netsuite, Quickbooks, SAP, Salesforce, and Microsoft Dynamics. And given that its interface is based on Excel, DataRails can connect with pretty much any tool capable of exporting data as a CSV file. “DataRails has very strong analysis capabilities, thanks to the fact that cross-organizational data, from all financial and operational systems, is housed under one roof in our unified database,” Cohen said. “With all organizational data centralized in one place, customers can conduct variance analyses [and] drill-downs for full-scale granularity and quickly design ad-hoc reports.” DataRails had previously raised $10 million, and with another $18.5 million from Zeev Ventures Fund, Vertex Ventures Israel, and Innovation Endeavors, the company is well-financed to add more data and analytics tooling to its platform. “We’re looking to add stronger analysis capabilities, as well as newer prediction capabilities and better insights,” Cohen said. 
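The variance analysis described above, comparing actuals against budget per line item, is the canonical FP&A drill-down. A minimal sketch with invented figures (not DataRails' implementation):

```python
# Toy variance analysis: actual minus budget per line item, with the
# delta expressed as a percent of budget. All figures are invented.

budget = {"Revenue": 500_000, "Headcount costs": 200_000, "Cloud spend": 40_000}
actuals = {"Revenue": 470_000, "Headcount costs": 210_000, "Cloud spend": 38_000}

def variance_report(budget: dict, actuals: dict) -> dict:
    """Per-line-item variance: actual minus budget, plus percent of budget."""
    report = {}
    for item, planned in budget.items():
        delta = actuals.get(item, 0) - planned
        report[item] = {"delta": delta, "pct": round(100 * delta / planned, 1)}
    return report

for item, v in variance_report(budget, actuals).items():
    print(f"{item}: {v['delta']:+,} ({v['pct']:+}%)")
```

The value these tools add is keeping such reports continuously in sync with the underlying spreadsheets and source systems, rather than rebuilding them by hand each month.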
"
15362
2021
"How fintech in a box is supercharging new opportunities for entrepreneurs (VB Live) | VentureBeat"
"https://venturebeat.com/2021/11/08/how-fintech-in-a-box-is-supercharging-new-opportunities-for-entrepreneurs-vb-live"
"How fintech in a box is supercharging new opportunities for entrepreneurs (VB Live)
Presented by Envestnet | Yodlee
The fintech ecosystem has everything developers need to build the next generation of innovative, consumer-focused fintech. Join this VB Live event to learn what consumers are looking for, how to uncover what they need before they know it, and more. Register here for free. The past year and a half has dramatically accelerated fintech solutions, both for consumers and for fintech companies. The landscape for financial services delivery has changed completely, says Seb Taveau, vice president of developer experience at Envestnet | Yodlee, from the need to go digital as the default, to increased consumer concern about finances in an unpredictable world. “Fintech companies need to deliver actionable items with a highly customized experience,” Taveau says. 
“And with risk models changing, from the gig economy to rising unemployment and resignations, financial institutions and fintech startups have to revisit how they do business.”
New opportunities in a new fintech ecosystem
The past five years have seen massive VC investment into the fintech ecosystem, supercharging startups like Stripe that dominate the fintech world today. The most successful companies aren’t just creating apps anymore; they’re developing platforms and integrated solutions for a growing ecosystem. Fintech has sprouted insuretech, wealthtech, proptech, and other segments that no longer require a bank partner to do business. Regulation, too, has finally adapted to a world of digital finance that was unknown 20 years ago, providing a lot more flexibility, he explains. “Open finance, open banking, and open APIs mean that as an entrepreneur in fintech now, you don’t have to be an expert in financial regulations in every state, because the fintech ecosystem is providing these foundational elements,” Taveau says. “Now you can focus on something else other than being an expert in the basics of employment tax or mortgage loan regulations, and just build a solution.” Fintech companies also need to think about finance 360: moving beyond payments, or solutions built around bank accounts, checking accounts, and credit cards. Consumers want a big-picture view of their financial situation, from their 401k and IRA to the value of their real estate, what kind of equity they have, and the best-value insurance, whether that’s health, automotive, or home. “All these questions involve some kind of fiduciary decision, and that’s where you have to look,” Taveau says. “All these elements that may not seem connected to fintech or finance, at the end of the day, there’s a movement of value behind it. 
Consumers want a place that will give them that information, and they want it at speed.” With newly open access to a broad array of data, from industry information to consumer trends and private, personal information, the tricky part for a fintech entrepreneur is figuring out what’s relevant to their users — and using it correctly. The big challenge here is navigating compliance issues in every region where you operate. It’s important to find the right partner not just for the immediate need but also for the long run and at scale, if you want to continue to grow. Fintech in a box is giving developers an edge Along with financial standards and regulations finally catching up with the real world, cloud-based architecture is creating a drastic improvement in the accessibility and control of data. Open finance, open banking, and open APIs are also creating the opportunity for companies to create a totally integrated platform — in other words, offering fintech in a box. Having the technology you need under the hood lets entrepreneurs refocus on the customer experience, or what they’re experts at, versus the finance side, Taveau explains. And because it’s powered by APIs, the technology will always be up to date. “Today it’s not really about innovation — it’s about integration,” he says. “For the fintech industry, it’s not going to be the greatest innovation or the greatest invention. It’s going to be the greatest integration that wins it all.” So the challenge for new entrants in the field is manifold, from integrating information and APIs effectively on one side, to creating new business models and new ways to analyze financial information to create the extremely custom experience consumers expect — and in the middle, the full digital bonanza. “That’s why you see all these neobanks doing very well during the pandemic, because they can understand each side,” Taveau says. “They understand the digital aspect of delivering a customized experience.
They understand how to use APIs from third-party partners. They’re not dragged down by old business models. They’re creating new business models.” To learn more about the opportunities that the new digital landscape is creating for entrepreneurs, how to navigate the challenges, and where to look for the right partners and solutions, don’t miss this VB Live event! Register for free here. You’ll learn: How the fintech ecosystem is evolving — and how fintech in a box is changing the landscape for developers The most important APIs to avoid reinventing the wheel How to ensure your data is accurate, reliable and diversified — and why that’s important What the fastest growing new fintech segments are Presenters: Ran Harpaz, Chief Technology Officer, Hippo Insurance Joe Mocerino, Vice President of Engineering, HomeLight Seb Taveau, Vice President, Developer Experience, Envestnet | Yodlee Seth Colaner, Moderator, VentureBeat VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,363
2,021
"Why enterprises should build AI infrastructure first | VentureBeat"
"https://venturebeat.com/2021/11/22/why-enterprises-should-build-ai-infrastructure-first"
"Why enterprises should build AI infrastructure first Artificial intelligence is bringing new levels of automation to everything from cars and kiosks to utility grids and financial networks. But it’s easy to forget that before the enterprise can automate the world, it has to automate itself first. As with most complicated systems, IT infrastructure management is ripe for intelligent automation. As data loads become larger and more complex and the infrastructure itself extends beyond the datacenter into the cloud and the edge, the speed at which new environments are provisioned, optimized, and decommissioned will soon exceed the capabilities of even an army of human operators. That means AI will be needed on the ground level to handle the demands of AI initiatives higher up the IT stack.
AI begins with infrastructure In a classic Catch-22, however, most enterprises are running into trouble deploying AI on their infrastructure, in large part because they lack the tools to leverage the technology in a meaningful way. A recent survey by Run:AI shows that few AI models are getting into production – less than 10% at some organizations – with many data scientists still resorting to manual access to GPUs and other elements of data infrastructure to get projects to the finish line. Another study by Global Surveys showed that just 17% of AI and IT practitioners report seeing high utilization of hardware resources, with 28% reporting that much of their infrastructure remains idle for long periods of time. And this is after their organizations have poured millions of dollars into new hardware, software, and cloud resources, in large part to leverage AI. If the enterprise is to successfully carry out the transformation from traditional modes of operation to fully digitized ones, AI will have to play a prominent role. IT consultancy Aarav Solutions points out that AI is invaluable when it comes to automating infrastructure support, security, resource provisioning, and a host of other activities. Its secret sauce is the capability to analyze massive data sets at high speed and with far greater accuracy than manual processes, giving decision-makers granular insight into the otherwise hidden forces affecting their operations. A deeper look into all the interrelated functions that go into infrastructure management on a daily basis sparks wonder at how the enterprise has gotten this far without AI. XenonStack COO and CDS Jagreet Kaur Gill recently highlighted the myriad functions that can be kicked into hyperspeed with AI: everything from capacity planning and resource utilization to anomaly detection and real-time root cause analysis.
With the ability to track and manage literally millions of events at a time, AI will provide the foundation that allows the enterprise to maintain the scale, reliability, and dynamism of the digital economy. Intelligence to the edge With this kind of management stack in place, says Sandeep Singh, vice president of storage marketing at HPE, it’s not too early to start talking about AIOps-driven frameworks and fully autonomous IT operations, particularly in greenfield deployments between the edge and the cloud. The edge, after all, is where much of the storage and processing of IoT data will take place. But it is also characterized by a highly dispersed physical footprint, with small, interconnected nodes pushed as close to user devices as possible. By its very nature, then, the edge must be autonomous. Using AIOps, organizations will be able to build self-sufficient, real-time analytics and decision-making capabilities, while at the same time ensuring maximum uptime and failover should anything happen to disrupt operations at a given endpoint. Looking forward, it’s clear that AI-empowered infrastructure will be not just a competitive advantage but an operational necessity. With the amount of data generated by an increasingly connected world, plus the quickly changing nature of all the digital processes and services this entails, there is simply no way to manage these environments without AI. Intelligence will be the driving force in enterprise operations as the decade unfolds, but just like any other technology initiative, it must be implemented from the ground up – and that process starts with infrastructure. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.
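Anomaly detection of the kind described above usually begins with simple statistical baselining over metric streams, long before any deep models are involved. The sketch below is a generic illustration of that idea, not code from any vendor mentioned in this article; the window size, warm-up length, and threshold are arbitrary choices for the example:

```python
from collections import deque
from math import sqrt

class RollingAnomalyDetector:
    """Flag metric samples that deviate sharply from a rolling baseline.

    A toy illustration of the statistical baselining behind many
    AIOps anomaly detectors; parameters are illustrative only.
    """

    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Return True if `value` is anomalous relative to the current window."""
        is_anomaly = False
        if len(self.samples) >= 10:  # require a minimal baseline first
            mean = sum(self.samples) / len(self.samples)
            var = sum((x - mean) ** 2 for x in self.samples) / len(self.samples)
            std = sqrt(var)
            if std > 0 and abs(value - mean) / std > self.threshold:
                is_anomaly = True
        self.samples.append(value)
        return is_anomaly

detector = RollingAnomalyDetector()
# Steady CPU utilization around 40%, then a sudden spike to 95%.
flags = [detector.observe(v) for v in [40.0, 41.0, 39.5, 40.2, 40.8,
                                       39.9, 40.1, 40.6, 39.7, 40.3, 95.0]]
```

Here the first ten samples build the baseline and the spike to 95% utilization is flagged; production AIOps systems typically layer seasonality handling and cross-signal correlation on top of this basic pattern.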
"
15,364
2,021
"Amazon Web Services unveils enhanced cloud vulnerability management | VentureBeat"
"https://venturebeat.com/2021/11/29/amazon-web-services-unveils-enhanced-cloud-vulnerability-management"
"Amazon Web Services unveils enhanced cloud vulnerability management Amazon Web Services (AWS) today announced several new features for improving and automating the management of vulnerabilities on its platform, in response to evolving security requirements in the cloud. Newly added capabilities for the Amazon Inspector service will meet the “critical need to detect and remediate at speed” in order to secure cloud workloads, according to a post on the AWS blog, authored by developer advocate Steve Roberts. The announcement came in connection with the AWS re:Invent conference, which began today. In a second security announcement, AWS unveiled a new secrets detector feature for its Amazon CodeGuru Reviewer tool, aimed at automatically detecting secrets such as passwords and API keys that were inadvertently committed in source code.
The security updates from AWS come as enterprises continue their accelerated shift to the cloud, even as security teams have struggled to keep up. Gartner estimates 70% of workloads will be running in public cloud within three years, up from 40% today. But a recent survey of cloud engineering professionals found that 36% of organizations suffered a serious cloud security data leak or a breach in the past 12 months. Changing cloud security needs In the post about the Amazon Inspector updates, Roberts acknowledged that “vulnerability management for cloud customers has changed considerably” since the service first launched in 2015. Among the new requirements are “enabling frictionless deployment at scale, support for an expanded set of resource types needing assessment, and a critical need to detect and remediate at speed,” he said in the post. Key updates for Amazon Inspector announced today include assessment scans that are continual and automated — taking the place of manual scans that occur only periodically — along with automated resource discovery. “Tens of thousands of vulnerabilities exist, with new ones being discovered and made public on a regular basis. With this continually growing threat, manual assessment can lead to customers being unaware of an exposure and thus potentially vulnerable between assessments,” Roberts wrote in the post. Using the updated Amazon Inspector will enable auto discovery and begin a continual assessment of a customer’s Elastic Compute Cloud (EC2) and Amazon Elastic Container Registry-based container workloads — ultimately evaluating the customer’s security posture “even as the underlying resources change,” he wrote.
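Stripped to its essentials, continual assessment is a rediscover-diff-rescan loop: enumerate resources, compare their fingerprints against the last known inventory, and queue scans for anything new or changed. The sketch below illustrates that pattern generically; the resource IDs and fingerprint fields are invented for the example and are not Amazon Inspector's actual data model:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Resource:
    resource_id: str
    fingerprint: str  # e.g., an AMI ID or a container image digest

def plan_scans(previous: dict, discovered: list) -> list:
    """Return IDs of discovered resources that need a fresh vulnerability scan.

    A resource is rescanned when it is new or its fingerprint changed,
    which is the essence of assessing workloads even as resources change.
    """
    to_scan = []
    for res in discovered:
        if previous.get(res.resource_id) != res.fingerprint:
            to_scan.append(res.resource_id)
    return to_scan

# Last known inventory: resource ID -> fingerprint (hypothetical values).
inventory = {"i-001": "ami-v1", "repo/app:prod": "sha256:aaa"}
found = [
    Resource("i-001", "ami-v2"),              # changed -> rescan
    Resource("repo/app:prod", "sha256:aaa"),  # unchanged -> skip
    Resource("i-002", "ami-v1"),              # new -> rescan
]
```

Running this loop on every discovery event, rather than on a fixed schedule, is what closes the window of exposure between periodic manual assessments that the post describes.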
More feature updates AWS also announced a number of other new features for Amazon Inspector, including additional support for container-based workloads, with the ability to assess workloads on both EC2 and container infrastructure; integration with AWS Organizations, enabling customers to use Amazon Inspector across all of their organization’s accounts; elimination of the standalone Amazon Inspector scanning agent, with assessment scanning now performed by the AWS Systems Manager agent (so that a separate agent doesn’t need to be installed); and enhanced risk scoring and easier identification of the most critical vulnerabilities. A “highly contextualized” risk score can now be generated through correlation of Common Vulnerability and Exposures (CVE) metadata with factors such as network accessibility, Roberts said. Secrets detector Meanwhile, with the new secrets detector feature in Amazon CodeGuru Reviewer, AWS addresses the issue of developers accidentally committing secrets to source code or configuration files, including passwords, API keys, SSH keys, and access tokens. “As many other developers facing a strict deadline, I’ve often taken shortcuts when managing and consuming secrets in my code, using plaintext environment variables or hard-coding static secrets during local development, and then inadvertently commit them,” wrote Alex Casalboni, developer advocate at AWS, in a blog post announcing the updates for CodeGuru Reviewer. “Of course, I’ve always regretted it and wished there was an automated way to detect and secure these secrets across all my repositories.” The new capability leverages machine learning to detect hardcoded secrets during a code review process, “ultimately helping you to ensure that all new code doesn’t contain hardcoded secrets before being merged and deployed,” Casalboni wrote. AWS re:Invent 2021 takes place today through Friday, both in-person in Las Vegas and online. 
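CodeGuru Reviewer's detector uses machine learning, but the underlying idea can be approximated with pattern matching over committed source. The rules below are illustrative only, not AWS's detection logic:

```python
import re

# Illustrative patterns only; real detectors combine many more rules
# with entropy checks and ML to cut false positives.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "hardcoded_password": re.compile(r"""(?i)password\s*=\s*['"][^'"]{6,}['"]"""),
}

def find_secrets(source: str) -> list:
    """Return (rule_name, line_number) pairs for suspected secrets."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings

# Hypothetical snippet a developer might commit by accident.
snippet = '''db_host = "example.internal"
password = "hunter2-prod"
key = "AKIAABCDEFGHIJKLMNOP"
'''
```

A scanner like this would flag lines 2 and 3 of the snippet. Wired into a pre-merge code review step, the rule-name-and-line-number output is the same shape of finding that an ML-based detector surfaces, which is the workflow the CodeGuru Reviewer feature automates.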
"
15,365
2,018
"Facebook introduces more customizable avatars to social VR app Spaces | VentureBeat"
"https://venturebeat.com/2018/04/02/facebook-introduces-more-customizable-avatars-to-social-vr-app-spaces"
"Facebook introduces more customizable avatars to social VR app Spaces Facebook has added more variety to avatars in the Facebook Spaces social VR app. Facebook is announcing that more customizable avatars, or virtual characters, are coming to Facebook Spaces, its social virtual reality application. The fully redesigned avatars in Facebook Spaces are more expressive and customizable than in the past. The update is all in the spirit of having a meaningful social experience in VR. “You need an engaging avatar that represents you and helps you relate to other people in the virtual space,” the company said in a blog post ahead of the Tribeca Film Festival, which has a VR section. “It’s a huge part of feeling like you’re ‘really there’ together.
That’s why we’ve been continuously working to learn what helps people represent and express themselves while spending time with friends in VR.” In Facebook Spaces, people can bring their virtual selves to shared experiences, from dance parties to live VR broadcasts. In the past, people only had a few options to choose from when customizing features, like hair color or eye shape. And some of the avatars’ movements didn’t seem as lifelike as they could be. Above: The avatar on the right represents richer virtual characters in Facebook Spaces, the VR app. “Our goal is that everyone can represent themselves in VR in a way that feels natural, so we knew we could do better,” Facebook said. To improve the avatars, Facebook’s engineers and artists drew on techniques from film animation, graphics, game character design, and mathematics to create a revamped version of avatars for Spaces. Now there are hundreds of new options to customize your appearance, including new head shapes, hairstyles, and facial features to choose from. You can also customize your body type for the first time. It’s easier to fine-tune your look with new controls to adjust the size, position, and angle of your features in the avatar creator. “We’ve worked on making avatars feel more present in the VR space with richer materials, better lighting and shadows. We’ve also fine-tuned the tech under the hood to make avatar body movements look more fluid and natural,” Facebook said. Above: Facebook Spaces can suggest an avatar for you. The system also provides people with avatar suggestions using machine learning. The avatar updates will roll out in Facebook Spaces this week.
"
15,366
2,018
"Deputy raises $81 million to help companies manage hourly paid workers | VentureBeat"
"https://venturebeat.com/2018/11/28/deputy-raises-81-million-to-help-companies-manage-hourly-paid-workers"
"Deputy raises $81 million to help companies manage hourly paid workers Australia-based employee management platform Deputy has raised $81 million in a series B round of funding led by IVP, with participation from OpenView, Square Peg Capital, and EVP. Founded in a Sydney suburb in 2008, Deputy is an all-in-one platform that covers pretty much all bases in the workforce management sphere, from rota-setting to timesheets, performance management, task assignment, and company news broadcasts. The company claims more than 1 million users across 90,000 businesses — including such big names as Amazon, Uber, Hubspot, and McDonald’s — who use Deputy’s tools to help manage their hourly paid contractors.
Deputy had raised $25 million via a series A round last year, and with another $81 million in the bank, the startup said it plans to double down on its product development and global expansion. “There are 2.3 billion hourly paid workers in the world today, but too many of these are still managing their jobs in archaic ways,” said Deputy cofounder and CEO Ashik Ahmed. “The opportunity here is huge: We envisage a day when every single shift being worked is powered by Deputy — and this funding allows us to build the best product and engineering team in the world that helps us get there.” Above: Deputy: Scheduling Nabbing Silicon Valley’s IVP as a lead backer is a notable coup for Deputy, given the stature of IVP’s recent exits, including Dropbox, which went public this year; GitHub, which sold to Microsoft for north of $7 billion; and Shazam, which was bought by Apple. Other big names in IVP’s portfolio include Netflix, Snap, Slack, and Twitter. “IVP has a long history backing some of the most forward-thinking technology companies, many of which are solving some of the world’s biggest problems, and Deputy certainly fits within that criteria,” added IVP general partner Eric Liaw, who now joins Deputy’s board of directors. Shift The broader workforce has seen a shift away from permanence to the “gig economy” and similar short-term positions. With the evolving employment landscape in mind, venture capitalists have been betting big on myriad recruitment platforms — including marketplaces specializing in short-term workers, such as Shiftgig, which connects employers with jobseekers, and Job Today, which offers a 24-hour recruitment platform. “The future of work is transforming dramatically,” Liaw continued. “In a trend that started decades ago, businesses and workers around the world are moving toward flexible, hourly-based work.
While this offers both businesses and employees greater freedom, the coordination of shifts and schedules presents an increasingly complicated set of pain points for workers and businesses. Existing solutions are antiquated; paper timeclocks, whiteboards in the break room, and spreadsheets are woefully inadequate for an increasingly mobile workforce. Deputy brings an incredibly simple and effective technology solution to make work easier for both workers and their employers.” In terms of pricing, Deputy has two core tiers — priced from $2 to $4 per user each month, depending on what services are needed. There is an additional enterprise plan available upon request that gives companies more customizations. Deputy counts more than 200 employees in Australia, the U.S., and the U.K., and with its new investment it will set about growing its engineering team in Sydney. "