| id (int64, 0–17.2k) | year (int64, 2k–2.02k) | title (string, 7–208 chars) | url (string, 20–263 chars) | text (string, 852–324k chars) |
|---|---|---|---|---|
4,013 | 2,020 |
"Ubisoft releases its own list of PS4 games that won't work on PS5 | VentureBeat"
|
"https://venturebeat.com/business/ubisoft-releases-its-own-list-of-ps4-games-that-wont-work-on-playstation-5"
|
"Ubisoft releases its own list of PS4 games that won’t work on PlayStation 5

Above: Ubisoft says Star Trek: Bridge Crew won't work on PlayStation 5.
The list of PlayStation 4 games that may not work on PlayStation 5 is growing. On its own website, Ubisoft provided a roster of games that won’t work with the backward compatibility on the next-generation console. And none of these games are on the list Sony has on its support page.
“Most of our back catalog of games will have backward compatibility between the next and current generation of consoles, with a few exceptions,” reads a Ubisoft blog post from today.
“On Xbox Series X/S, all our Xbox One games will be backward compatible. On PlayStation 5, all games will be backward compatible except the following ones.”

Here’s the list of PS4 Ubisoft games that the publisher claims won’t work on PS5: Assassin’s Creed: Syndicate, Assassin’s Creed: Chronicles Trilogy Pack, Assassin’s Creed: Chronicles India, Assassin’s Creed: Chronicles China, Assassin’s Creed: Chronicles Russia, Risk, Star Trek: Bridge Crew, Werewolves Within, and Space Junkies.

Ubisoft didn’t say what this incompatibility looks like. Will the games not even boot up, or do they just have a couple of bugs that won’t affect most players? I’ve reached out to Ubisoft and Sony for clarification. I’ve also asked why these Ubisoft games aren’t on Sony’s list. I’ll update this story when either company gets back to me.
The weirdest standout on Ubisoft’s list is Star Trek: Bridge Crew. It supports PSVR, and it’s odd to think you’ll have to keep a PS4 around if you ever want to play it with your friends. But again, maybe it’s still mostly functional. We’ll all have to see for ourselves when the PS5 launches November 12.
GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it.
Discover our Briefings.
Join the GamesBeat community! Enjoy access to special events, private newsletters and more.
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
4,014 | 2,020 |
"Twitter Q3 2020 revenue smashes estimates with $936 million as user growth slows | VentureBeat"
|
"https://venturebeat.com/business/twitter-revenue-grows-14-to-936-million-in-q3-2020-monetizable-users-up-29"
|
"Twitter Q3 2020 revenue smashes estimates with $936 million as user growth slows

Above: Twitter's profile page on Twitter.com.
Twitter today reported strong revenue growth for Q3 2020, while monetizable users were up year-on-year (YoY) but fell short of estimates. The social networking giant announced its most recent financial and user metrics this afternoon, breaking a long-standing tradition of announcing earnings before the market opens.
The day after CEO Jack Dorsey faced a grilling from the U.S. Senate over how social media companies moderate content, Twitter revealed revenue of $936 million — a YoY increase of 14% from the $824 million reported a year earlier and a quarter-on-quarter (QoQ) increase of 37% over the $686 million for Q2 2020.
The company added that its net income for the quarter fell around 22% from last year to just under $29 million.
In the earnings press release, Twitter CFO Ned Segal noted that the revenue hike was largely due to advertisers increasing their spend around live sports and other events after holding back in previous quarters because of the pandemic.
In terms of users, Twitter reported 187 million monetizable daily users (mDAUs) for a 29% YoY increase on the 145 million it reported for Q3 2019. However, that figure was only a fraction higher than the previous quarter’s 186 million. Twitter stopped reporting its overall monthly active users last year, choosing instead to focus on the mDAU metric, which it defines as individuals who log in through Twitter.com or any of the mobile apps that are able to show advertisements. This excludes users who don’t log in or who use TweetDeck or other third-party clients.
As with the previous quarter, Twitter hadn’t provided any revenue guidance ahead of its Q3 2020 financials, due to the impact of COVID-19, but analysts had pegged Twitter’s revenue for the quarter at roughly $775 million, while mDAUs had been estimated to reach more than 196 million.
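The growth figures above are plain percentage changes. A quick sketch (using only the dollar and user numbers reported in this article) confirms how the reported percentages round out:

```python
def pct_change(current: float, previous: float) -> float:
    """Percentage change from `previous` to `current`."""
    return (current - previous) / previous * 100

# Revenue in millions of USD, as reported in the article.
q3_2020, q3_2019, q2_2020 = 936, 824, 686

print(f"YoY revenue growth: {pct_change(q3_2020, q3_2019):.1f}%")  # reported as 14%
print(f"QoQ revenue growth: {pct_change(q3_2020, q2_2020):.1f}%")  # reported as 37%

# mDAUs in millions: 187 this quarter vs. 145 a year earlier.
print(f"YoY mDAU growth: {pct_change(187, 145):.1f}%")  # reported as 29%
```

The exact values (13.6%, 36.4%, 29.0%) show Twitter's reported 14% and 37% are rounded up slightly.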
In short, Twitter smashed it on revenue but disappointed on user growth.
Twitter’s shares are sitting at roughly double their March value, having hit a five-year high of more than $52 this week. However, on the back of lower-than-expected user growth, the stock fell as much as 12% in after-hours trading.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
"
|
4,015 | 2,020 |
"The DeanBeat: Why politics and games go together | VentureBeat"
|
"https://venturebeat.com/business/the-deanbeat-why-everybody-even-gamers-and-game-developers-needs-to-vote"
|
"Opinion

The DeanBeat: Why politics and games go together

Above: Jam City CEO Chris DeWolfe wants you to vote.
Many gamers say that games and politics don’t mix well because people don’t like serious (or boring) politics woven into their entertainment. But I think that games and politics can elevate each other as they get into an inspiration cycle, where games can inspire political change and politics can inspire meaningful games.
The most benign and beneficial message that game companies can impart is to educate the public so they’re more likely to vote, and to vote intelligently. That’s what Jam City, the maker of mobile games like Cookie Jam and Disney Frozen Adventures, had in mind when it decided to make its Culver City, California headquarters into an official voting center where voters can drop off their ballots.
Jam City CEO Chris DeWolfe said the team is passionate about getting people to vote and recognize the power of democracy. They started with voter registration campaigns in 2018 and got thousands of people to vote. This year, they wanted to do more to exercise their civic duty, considering the pandemic could deter people from voting, DeWolfe said. So they partnered with Los Angeles County so people can drop off their ballots and enjoy some food, drinks, and fun — all while wearing masks.
In an interview with GamesBeat, DeWolfe said this election is more important than ever, with so many issues at stake in both local and national politics. By mixing games and the election, Jam City can raise awareness in a way that other companies cannot. And it can take away some of the stress and anxiety that people feel during election season, DeWolfe said.
Above: Jam City’s headquarters in Culver City, California, is an official voting center.
“People may be a little bit stressed out about the whole election itself,” DeWolfe said. “We wanted to create a safe yet fun and whimsical environment where they could look forward to going out and exercising their constitutional right to vote.” DeWolfe said the company isn’t partisan and that the effort is all about making resources and education available to people so they can make their own decisions.
“We think about how lucky we are to be able to vote for our elected officials. It’s a privilege, and it’s an important year to go out and vote,” DeWolfe said. “There’s a lot at stake for a lot of people. It feels personal from the standpoint that I think that there are a lot of emotions out there. And people need to deal with those emotions through voting. And it’s really important that you make it easy for everyone to vote.” Jam City also uses its games to encourage people to go vote, as it has a big platform to communicate with a lot of people. Games are cool, and they can make voting seem cool too. So yes, politics and games can work together well. As a medium, games are like newspapers. And nobody ever said that politics and newspapers don’t go together.
Specific warnings for voters

DeWolfe could be more political with his message, but a whole spectrum of views has sprung up about how involved gamers and game developers should be in politics. Everybody’s got a cause, and sometimes that cause is to walk right down the middle. Companies can pursue various gradations of political involvement in gaming.
Outside of gaming, the same is true in tech circles. Steve Grobman, the chief technology officer at McAfee, goes one step further than DeWolfe, saying that voters should be aware that in the last week before the election, people will try to mislead them. The Hunter Biden situation is a good example.
Grobman said we should be wary of the “hack and leak” disinformation campaign. Some information about candidate Joe Biden’s son is legitimate. But he warns that “fabricated information can be intertwined with legitimate information that has been stolen.” He added, “Because the legitimate information can be independently validated, it gives a false sense of authenticity to the fabricated information.” We live in an age where it has become necessary not only to encourage people to go out and vote. It has also become necessary to warn them that they shouldn’t be victims of people who are trying to manipulate them into voting for the wrong candidate. We all know how big a problem this has become on social media. The answer is to go to reliable sources of information.
What games can teach us

Above: Wolfenstein 2: The New Colossus showed a Nazified America.

Games aren’t the best source of breaking news. It takes perhaps five years for games like the seemingly political Wolfenstein 2: The New Colossus to hit the market. That game depicted an alternate universe where the Nazis took over the United States and allied themselves with the Ku Klux Klan.
Who would have thought that some of this would have seemed eerily truthful? But while they’re often wrong or late when it comes to predicting the future, games can be as instructive as literature. They can speak through the ages, with ideas, visions, and messages that can inspire us to action.
As the election draws so close, it’s good to remember that games can help you form your political beliefs or understand politics better. I played The Political Machine 2020 back in March, and it taught me how hard it would be for a Democrat to unseat an incumbent Republican like Donald Trump. Given Trump’s superior war chest at the time, a Biden win was a long shot. This game told me we can’t be complacent.
Above: The Political Machine 2020 shows you which states you need to win to be president.
But that was before COVID-19 struck and changed the presidential race, like a massive comet hitting the world. All of the events that unfolded during that time led to a turning of the tables, where Biden is now the favorite with the bigger war chest.
I remember how SimCity taught me how taxes worked. If you raised your tax rates high, you could afford to create a lot more services for your residents. But you might also find that they would leave your city for nearby cities that didn’t have such high tax rates.
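That tax tradeoff can be captured in a toy model. This is my own illustration, not SimCity's actual simulation; the assumption that resident flight grows linearly with the tax rate is invented for the sketch:

```python
# Toy illustration of the SimCity-style tax tradeoff (not the game's real model).
# Assumption: the fraction of residents who leave grows linearly with the tax rate.
def city_revenue(population: int, tax_rate: float, flight_sensitivity: float = 2.0) -> float:
    """Tax revenue collected after some residents move to lower-tax neighbors."""
    remaining = population * max(0.0, 1.0 - flight_sensitivity * tax_rate)
    return remaining * tax_rate

for rate in (0.05, 0.10, 0.25, 0.40):
    print(f"tax rate {rate:.0%}: revenue {city_revenue(10_000, rate):,.0f}")
```

Revenue climbs to a peak (here at a 25% rate) and then collapses as residents leave, which is the same lesson the game teaches.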
And games like Civilization can teach you what happens when you mix different political ideologies, religions, or economic policies. It’s like a petri dish for figuring out which political systems work best.
Is this adding too much politics into a game? I don’t think so, especially if the game developers have something meaningful to say. It doesn’t necessarily ruin the art, or the fun of the game. It can, in fact, turn the game into art. When Arthur Miller wrote The Crucible , the play about the Salem witch hunts, he was expressing his outrage about McCarthyism.
A powerful medium

Above: Tell Me Why features the twins Tyler and Alyson. Tyler is transgender.

Games are the most powerful medium now in terms of their reach and the impact they have on people. They can be very powerful when it comes to normalizing attitudes and behaviors. If we see transgender people represented as normal in games and media such as Dontnod’s Tell Me Why, then we can also envision a world where we treat them as normal.
“That’s exactly what we wanted to achieve,” Dontnod CEO Oskar Guilbert said in an interview with GamesBeat.
Dontnod and other developers, such as Naughty Dog with The Last of Us Part II, can turn inaccurate stereotypes on their heads through this normalization of characters in video games. And it can make a real difference to people, just as SimCity made an impression on me so many years ago.
I felt like the presence of Trump’s Wall in Life is Strange 2 was a profound statement by Dontnod, a French video game company. The point isn’t to be political, but to tell good stories, said Guilbert. But sometimes stories are good because they are political. The presence of the Wall in Life is Strange 2 represented a barrier to the freedom of two young boys, who were improbably seeking asylum in Mexico.
“We have our values and that’s something very important to convey,” Guilbert said.
Retreating from politics

Above: Orwell is back with a dystopian story.

Ubisoft came close to instilling political messages in games such as Far Cry 5, which depicted an armed religious militia run by white males in the state of Montana, and The Division, where a virus causes the apocalypse in the U.S. and brings down great cities such as New York and Washington, D.C.
But when the developers discussed those games, they ultimately retreated from taking political stances that would have hurt the market for the games. Ubisoft and Machine Games, both based overseas, didn’t want to get mixed up in U.S. politics, even though their games felt like they incorporated political thinking in their designs. I liked the games because they made me and other people think, and relate the ideas in the fiction of the games to the ideas that we have to consider in everyday life. But they could have gone further.

By contrast, Insomniac CEO Ted Price took a stand against Trump’s Muslim ban in 2016, going so far as to make a video expressing his company’s opposition to it. That was admirable. But Marvel’s Spider-Man, the popular game made by Price’s studio, didn’t have a ton of disguised contemporary political commentary. The stand had nothing to do with the games that Insomniac makes, and I don’t think it had any effect on Insomniac’s sales.
Sure, game companies may lose a lot of fans when they insert politics into a game. With half the country being Democrats and half Republican, it doesn’t pay to be so partisan that you alienate half the fan base. On the other hand, if you can deliver a powerful political message and weave it into the story in a way that is authentic and not ham-fisted, your political message can be powerful. And more memorable. It can help the game developer rise above the rest, the same way that George Orwell did with his novel 1984.
While Orwell didn’t write about the politics of the 1940s or side with any particular party in his book, he did convey lasting political messages about what happens to our freedoms when we embrace extreme ideas around surveillance, secret police, propaganda, and totalitarian government. This message resonated with Apple cofounder Steve Jobs, when he offered an alternative computer to the IBM PC, and unveiled the Macintosh with a memorable television commercial based on Orwell’s novel.
The genius of that commercial is recognized even today, and Epic Games paid homage to it when it used a satirical version of it as it filed an antitrust lawsuit against Apple because it wouldn’t permit Epic to sell goods directly to Fortnite fans within the iOS app. That satire was an astute recognition that the tech platforms, walled gardens, open-vs.-closed debates, and regulatory matters are inherently political battles.
1984 also inspired the creators of a 2017 game about mass surveillance, dubbed Orwell.
Worried about trends in the U.S. and the National Security Agency’s mass spying on Americans, Osmotic Studios, a Hamburg, Germany-based independent game studio, created Orwell as a cautionary tale for an informed electorate.
I realize that some people will read 1984 and play Orwell and vote for the Democrats. And others will do the same and vote for Republicans. But at least it gets them thinking and motivates them to get out of the house and vote. And that’s what really matters.

Where do I stand on politics and games myself? I want to see the whole spectrum of political involvement, from DeWolfe’s call for voting to the indies raging against surveillance in a game packed with layers of meaning. I want politics and games to come out into the open.
"
|
4,016 | 2,020 |
"SiFive unveils plan for Linux PCs with RISC-V processors | VentureBeat"
|
"https://venturebeat.com/business/sifive-unveils-plan-for-linux-pcs-based-on-risc-v-processors"
|
"SiFive unveils plan for Linux PCs with RISC-V processors

Above: SiFive's design for a RISC-V PC.
SiFive today announced it is creating a platform for Linux-based personal computers based on RISC-V processors. Assuming customers adopt the processors and use them in PCs, the move might be part of a plan to create Linux-based PCs that use royalty-free processors. This could be seen as a challenge to computers based on designs from Intel, Advanced Micro Devices, Apple, or Arm, but giants of the industry don’t have to cower just yet.
The San Mateo, California-based company unveiled HiFive Unmatched, a development design for a Linux-based PC that uses its RISC-V processors. At the moment, these development PCs are early alternatives, most likely targeted at hobbyists and engineers who may snap them up when they become available in the fourth quarter for $665.
SiFive CTO Yunsup Lee spoke about the new development at the online Linley Fall Processor Conference being held today. Lee explained that the company’s HiFive Unmatched development boards allow RISC-V developers to create the software they need for their platforms.
SiFive designs processors that can be customized for products ranging from the low end to the high end of the computing spectrum. These processors are based on RISC-V, a free and open architecture created by university researchers a decade ago.
The announcement is sure to cause some speculation. While it’s still early days, it’s not inconceivable that RISC-V processors could someday be alternatives to Intel-based PCs and PC processors. The RISC-V organization is run by an industrywide body of supporters that includes SiFive. In fact, RISC-V’s founders are all working for SiFive in some fashion.
SiFive raised $61 million in August from investors that included chip superpowers Intel and Qualcomm. The startup has raised $190 million to date, and former Qualcomm executive Patrick Little recently joined SiFive as CEO. His task will be to establish the company’s RISC-V processors as an alternative to Arm. This move comes in the wake of Nvidia’s $40 billion deal to acquire Arm, the world’s leading processor architecture company.
If Little is also looking to challenge Intel and AMD in PCs, he’ll have his work cut out for him. For starters, SiFive is currently focused on Linux-based PCs, not Microsoft Windows PCs. Secondly, SiFive wouldn’t build these processors or computers on its own. Its customers — anyone brave enough to take on the PC giants — would have to do that.
“It would be hard to imagine anybody overtly [taking on Intel and other PC makers],” Linley Group senior analyst Aakash Jani said in an interview with VentureBeat. “You may see companies in stealth mode try to do this. But the biggest [impediment] is the software ecosystem. It took a long time for anybody to even develop for Arm’s architecture, and now they have an x86 emulator. But the same software support doesn’t exist for the RISC-V platform currently. If anyone were to take on the giants of x86 and Arm, they would have to really go develop the software ecosystem.”

Above: SiFive’s HiFive development board for Linux PCs.
Developers can use the boards to test code for real-time operating systems, custom Linux distributions, compilers, libraries, and applications.
The SiFive HiFive Unmatched board will have a SiFive processor, dubbed the SiFive FU740 SoC, a 5-core processor with four SiFive U74 cores and one SiFive S7 core.
The U-series cores are Linux-based 64-bit application processor cores based on RISC-V. These cores can be mixed and matched with other SiFive cores, such as the SiFive FU740. These components are all leveraging SiFive’s existing intellectual property portfolio, Jani said.
“I wouldn’t see this as SiFive moving out of the box. It’s more like they’re expanding their box,” Jani said. “They’re using their core architecture to enable other chip designers to build PCs, or whatever they plan to build.”

The HiFive Unmatched board comes in the mini-ITX standard form factor to make it easy to build a RISC-V PC. SiFive also added some standard industry connectors — ATX power supplies, PCI-Express expansion, Gigabit Ethernet, and USB ports are present on a single-board RISC-V development system.
The HiFive Unmatched board includes 8GB of DDR4 memory, 32MB of QSPI flash memory, and a microSD card slot on the motherboard. For debugging and monitoring, developers can access the console output of the board through the built-in microUSB type-B connector. Developers can expand it using PCI-Express slots, including both a PCIe general-purpose slot (PCIe Gen 3 x8) for graphics, FPGAs, or other accelerators and M.2 slots for NVME storage (PCIe Gen 3 x4) and Wi-Fi/Bluetooth modules (PCIe Gen 3 x1). There are four USB 3.2 Gen 1 type-A ports on the rear, next to the Gigabit Ethernet port, making it easy to connect peripherals.
The system will ship with a bootable SD card that includes Linux and popular system developer packages, with updates available for download from SiFive.com. It will be available for preorders soon.
"
|
4,017 | 2,020 |
"SearchUnify Announces Mamba '21 to Power Next-Gen Support & Self-Service Experiences with Its Cognitive Search Platform & Suite of Apps | VentureBeat"
|
"https://venturebeat.com/business/searchunify-announces-mamba-21-to-power-next-gen-support-self-service-experiences-with-its-cognitive-search-platform-suite-of-apps"
|
"Press Release

SearchUnify Announces Mamba ’21 to Power Next-Gen Support & Self-Service Experiences with Its Cognitive Search Platform & Suite of Apps

To deliver on the promise of augmented intelligence, Mamba ’21 offers a complete suite of AI apps built on top of its cognitive search platform.

MOUNTAIN VIEW, Calif.–(BUSINESS WIRE)–October 29, 2020– SearchUnify, a leading cognitive search platform, has announced the general availability of its fall release, Mamba ’21, from October 29, 2020. The release solidifies SearchUnify’s commitment to driving customer support and self-service with its cognitive search platform and complete suite of customer support apps.
“We’ve always been laser-focused on elevating customer support and self-service experiences,” said Vishal Sharma, CTO of SearchUnify. “Mamba ’21 empowers teams to fully realize the platform’s cognitive power with a host of new apps, features, and enhancements.”

“One addition that I’m particularly excited about is the Escalation Predictor. As the name suggests, it analyzes various aspects of a ticket to predict the likelihood of its escalation, helping provide timely resolution and ensuring a better CSAT. The app will further strengthen the platform’s existing suite of AI-powered apps aimed at streamlining, optimizing, and future-proofing the support ecosystem.”

The SearchUnify Chatbot that’s hailed as “best of breed” by TSIA now comes packed with even more power. It supports chat in multiple languages and comes with ready-made and customizable stories, which further expedite its training and quicken its time to value.
“We’re expanding our suite of custom apps built on top of our cognitive search platform beyond customer support function to enable best-in-class experiences for sales, customer onboarding and success teams,” added Vishal Sharma. To that vision, SearchUnify has made major enhancements to its cognitive search platform to elevate content findability. These enhancements include significantly faster speed of search, better access control with a super admin, more relevant auto-synonym suggestions with Synonyms 3.0, new analytics reports and more powerful search analytics, cleaner and configurable search results display page, Regex-based keyword tuning, and more.
“Today, the need to distinctly identify and elevate customer experience is a burning business requirement,” said Alok Ramsisaria, CEO of Grazitti Interactive, SearchUnify’s parent company. “CX is the currency in this customer economy. If customers think your service isn’t up to the mark, they will switch. This opportunity cost of losing customers can be averted with an augmented system. Mamba ’21 enables us to rapidly advance our mission towards that end by furthering our ongoing commitment to customer and self-service.” “We’re going full steam ahead! Most of the new features and enhancements have been implemented not just for a persona but while working closely with our customers. They have always been our driving force. This is why the AI-fueled platform is constantly getting lauded for the sheer convenience and category-leading capabilities,” he added.
About SearchUnify

SearchUnify is a cognitive search platform for enterprises that fuels multiple applications for various industries and functions. Some of these applications are Intelligent Chatbots, Agent Helper, Community Helper, KCS Enabler, and Escalation Predictor. SearchUnify was named the “youngest product” in The Forrester Wave: Cognitive Search, Q2 2019.
It was honored with two Silver Stevies at the 2020 Asia-Pacific Stevie® Awards, a Silver and a Bronze at the Stevie® Awards for Sales and Customer Service 2020, and with the Product of the Year 2020 Award by Software Technology Parks of India (STPI). It has been named a finalist for “Best Technology Innovation” at The Global Contact CenterWorld Awards 2020 and “Best New Technology Solution” at the ICMI Global Contact Center Awards 2020. Companies like Rubrik, Flexera, Databricks, Kronos, and Zuora trust SearchUnify to enhance their customer and employee experience with revolutionized information discovery.
View source version on businesswire.com: https://www.businesswire.com/news/home/20201029005978/en/

Ajay Paul Singh
Head of Marketing, SearchUnify
[email protected]

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
© 2023 VentureBeat. All rights reserved.
ProBeat: Huawei sanctions are a win for Xiaomi, but a loss for the tech industry | VentureBeat (2020)
https://venturebeat.com/business/probeat-huawei-sanctions-loss-for-the-tech-industry
Above: Xiaomi MI 9, front view.
The U.S. sanctions against Huawei are working. We learned last week that the Chinese company’s growth slowed sharply with Q3 2020 revenue of $32.5 billion, up just 3.7% year-over-year. Revenue growth for the first nine months of 2020 was 9.9%, compared to 24.4% for the same period in 2019. But market share figures from IDC and Canalys out yesterday show that Huawei’s losses are merely Xiaomi’s gains.
In Q3 2018, Huawei overtook Apple , marking the first time in seven years that Samsung and Apple were not the top two smartphone makers. Huawei and Apple went back and forth a few times for that second place spot. But in Q2 2020 , Huawei dethroned Samsung for the top spot, despite U.S. sanctions. It turns out the impact was merely delayed: In Q3 2020 , Huawei dropped like a rock, putting Samsung back in pole position. Meanwhile, Xiaomi overtook Apple for the first time.
Watching smartphone shipments in 2020 has been a useful gauge of how the broader tech economy is weathering the global pandemic. Smartphone shipments were down 11.7% in Q1 2020, down 16.0% in Q2 2020, and down 1.3% in Q3 2020.
Breaking down the numbers

Apple aside, only Huawei shipped fewer units in Q3 2020 compared to Q3 2019. (Apple shipped fewer iPhones largely due to a late iPhone 12 compared to predecessors, which typically debut in the third quarter.) Given the market only declined 1.3%, it's easy to see that Xiaomi directly benefited from Huawei's losses, per IDC.

The market share changes in percentage points show the movement: Samsung (+0.9%), Huawei (-3.9%), Xiaomi (+4.0%), Apple (-1.2%), Vivo (+0.5%), and Others (-0.3%). But IDC's unit figures really tell the story:

Samsung: +2.2 million units
Huawei: -14.7 million units
Xiaomi: +13.8 million units
Apple: -5.0 million units
Vivo: +1.3 million units
Others: -2.5 million units

Canalys' unit figures confirm the trend:

Samsung: +1.3 million units
Huawei: -15.1 million units
Xiaomi: +14.6 million units
Apple: -0.3 million units
Vivo: +1.7 million units
Others: -6.6 million units

Huawei shipped double-digit millions fewer smartphones; Xiaomi shipped double-digit millions more.
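As a sanity check, the IDC and Canalys unit deltas can be summed in a few lines: both trackers show a small net market decline, with Xiaomi recouping most of Huawei's loss. The figures below (millions of units) are copied from the article; the script is only an illustration of the arithmetic.

```python
# Year-over-year unit changes (millions), Q3 2020 vs. Q3 2019,
# per the IDC and Canalys figures cited in the article.
idc = {"Samsung": 2.2, "Huawei": -14.7, "Xiaomi": 13.8,
       "Apple": -5.0, "Vivo": 1.3, "Others": -2.5}
canalys = {"Samsung": 1.3, "Huawei": -15.1, "Xiaomi": 14.6,
           "Apple": -0.3, "Vivo": 1.7, "Others": -6.6}

for name, deltas in (("IDC", idc), ("Canalys", canalys)):
    net = round(sum(deltas.values()), 1)             # net market movement
    recouped = deltas["Xiaomi"] / -deltas["Huawei"]  # share of Huawei's loss
    print(f"{name}: net change {net}M units; "
          f"Xiaomi recouped {recouped:.0%} of Huawei's decline")
```

Both trackers net out to a decline of under 5 million units, consistent with the roughly flat overall market described above.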
The tech industry loses

We all know why this is happening. In May 2019, the U.S. added Huawei to the “entity list,” barring suppliers of software and manufacturing equipment from doing business with the Chinese company without first obtaining a license. One major consequence of the U.S. trade ban was that Huawei couldn’t ship phones running Android with Google Mobile Services.
The U.S. has further been pressing governments around the world to squeeze Huawei out, arguing the company would hand over data to the Chinese government. Huawei denies it spies for China.
Huawei could very well be under China’s thumb. And many believe the real concern is around Huawei’s lead in 5G technology rather than its smartphones. But in the smartphone market, all the U.S. has achieved is swapping one Chinese company for another Chinese company.
I’m not sure this is a precedent the U.S. government wants to set. Imagine a scenario where the Chinese government sanctions Apple over iMessage concerns, Mac sales take a hit, and Dell in turn sells more laptops. Do we really want a world where, instead of writing laws and policy to encourage or discourage market behaviors, governments senselessly attack individual companies? There need to be rules of engagement, to be sure, but millions of consumers are now buying Xiaomi phones instead of Huawei phones, not because Xiaomi’s products are superior but because Huawei has been cut out of the market. It’s easy to see how the U.S. sanctions will benefit all the other phone makers. But the story won’t end here — other governments will want to play kingmaker too. When all is said and done, the real loser won’t be Huawei — it will be the whole tech industry.
ProBeat is a column in which Emil rants about whatever crosses him that week.
McAfee CTO offers 6 cybersecurity warnings ahead of election | VentureBeat (2020)
https://venturebeat.com/business/mcafee-cto-offers-6-cybersecurity-warnings-ahead-of-election
Above: Steve Grobman, CTO of McAfee, gives a keynote speech at the RSA 2019 event.
McAfee CTO Steve Grobman has six cybersecurity warnings for all of us as U.S. election day approaches. This is in keeping with what Grobman has consistently done over the years. He looks at the million little cyber threats that McAfee sees every day and tries to extract a big-picture warning for the rest of us, whether it’s about AI’s effect on cyberattacks or the dangers of deep fakes.
He has studied the impact of cyberattacks during the 2016 election, and he is once again concerned about how the American electorate could be swayed by false information. I talked with Grobman this week about his concerns.
He pointed to the Hunter Biden controversy as a good example. Grobman said we should be wary of the “hack and leak” disinformation campaign. Some information about candidate Joe Biden’s son is legitimate. But he warns that “fabricated information can be intertwined with legitimate information that has been stolen.” He added, “Because the legitimate information can be independently validated, it gives a false sense of authenticity to the fabricated information.” Be prepared for that disinformation to only grow in the coming days. Grobman wants us all to vote, but he wants us to do it wisely and with reliable sources of information.
Here’s an edited transcript of our interview.
Above: Steve Grobman: I didn’t say that. Grobman did a demo of deep fakes at RSA in 2019.
VentureBeat: You had some thoughts about election-related matters today.
Steve Grobman: We’re moving into the home stretch. While we can’t predict exactly what the outcome is going to be over the next week, there’s definitely a number of things that we think people should be on heightened alert for from a cyber perspective, in order to maximize the ability to have a free and fair election. I’m happy to talk through some of the scenarios that we’re looking out for, and we encourage both media and voters to be on the lookout for.
VentureBeat: You had six examples?

Grobman: We’ve broken it down to six key areas that are based on things we’ve seen and things that we think are high-probability events, or at least plausible scenarios that we need to be on the watch for.
The first one is what we’re calling hack and leak. It’s the need to be on the lookout for leaked data and not trusting leaked data. One of the problems with political information that comes to light from a data breach or a leak is, fabricated information can be intertwined with legitimate information that’s been stolen. Because the legitimate information can be independently validated, it gives a false sense of authenticity to the fabricated information.
In 2016 the Podesta emails were one type of leak, where some of that information could be validated, but there were also a number of things that were unclear as to whether they were fabricated. In this election, we’re seeing other types of leaked information or information that’s coming from questionable sources, such as the Hunter Biden laptop. It’s important that voters should distrust any information that’s coming from a leak unless all of the information can be independently validated. That’s the first scenario that we wanted to call out.
The second one is related to ransomware. We see ransomware as a major problem for consumers and organizations over the last few years, where ransomware is now impacting businesses. There are many types of ransomware, including not only holding data hostage, but also systems, and even extorting businesses with things like the threat of release of intellectual property, or re-enabling critical business systems.
One of the concerns we have is, given that ransomware is so common, it’s typically attributed to criminals, but it would be a reasonable way for a nation-state actor to disrupt the elections and have false attribution pointing more toward cybercrime motivation than an election manipulation or disruption scenario. We do need to look out for both state-sponsored ransomware campaigns, or even what I would call state-encouraged ransomware campaigns, where a nation-state might look the other way for criminal organizations within the country that are willing to execute these attacks against election infrastructure.
VentureBeat: On your first scenario, with the Hunter Biden material, what is theoretically an issue here is that there were some facts that were verifiable. It was his laptop, and there were emails on it. But the specific emails pointing to his father, that can be faked to go along with other correct information. Is that a kind of scenario that’s possible here, that you’re warning against?

Grobman: Right. The warning — the way I’d say it more directly is, it’s important not to let verified information in a leak lend credibility to unverified information. It’s very easy and a common tactic for disinformation to use true, verifiable information to raise the credibility of false or disinformation. In the scenario you just laid out, it would be very reasonable for an adversary that wanted to create a narrative that was completely fabricated to intertwine that information along with content that could be verified. What people might not realize is, the logic of, “Oh, well, in one part of the story the facts check out, therefore the whole thing must be true,” that’s a very dangerous way of looking at information.
It’s critical that — I’d give three takeaways. One is, voters need to be skeptical of information that comes out of a leak. The press needs to be very careful in how they treat information that comes out of a leak, and not assume it’s legitimate unless it’s completely verified independently. And third, politicians should not point to leaked information as part of their political messaging, because the information ultimately can’t be verified. It’s a dangerous path to walk down if politicians start pointing to information that is very easily fabricated.
Above: Deep fakes are pretty easy to create.
VentureBeat: On ransomware, is there a scenario out there in the wild already that relates to the election? Grobman: We have seen state and local IT infrastructure impacted by ransomware attacks very recently. What’s a lot more difficult is to do direct attribution to a particular nation-state that might be using this tactic to disrupt the election. One of the challenges here is, whether it’s a nation-state, or criminal groups that are linked to a nation-state, or just cybercriminals, the evidence may look very similar. That’s the danger.
We’re seeing that ransomware is impacting state and local organizations.
In the third scenario, one of the differences between 2016 and 2020 is the sophistication of AI technology in the ability to create large volumes of compelling fake video. What we call deep fake. We need to recognize that just as voters are skeptical of photographs being subject to manipulation, video now can be manipulated such that there can be a video of a candidate saying or doing anything. The barrier to entry for building these videos has come way down since the last election cycle.
We need to be very careful in the way that we treat video, not only being skeptical but before spreading viral videos, they need to be verified. Not only by looking at them, but tracing them back to their source. It’s important that if there is video content related to a candidate’s words or actions, that it can be validated by a reputable news or media outlet, and not solely sourced off of social media.
One of the things McAfee is doing in this area is we’ve opened a deep fake forensics lab that is available to media sources, such that if a video comes in, before they run a story based on it, we can provide analysis as to whether we see markers or indications that it’s been fabricated or faked.
VentureBeat: Are you able to quickly identify deep fakes? Is that something you can keep up with?

Grobman: I’d put it this way. We’re pretty good at detecting deep fakes that are created with the common tools that are publicly available. With that said, if a well-funded nation-state actor created a video using new algorithms, new techniques, that would be significantly more difficult for us to detect.
The other two points I like to make on our ability to do analysis — we’re able to detect deep fakes, but in scenarios where we don’t detect something as being fake, that doesn’t infer that it’s legitimate or authentic. If we detect that it’s fake, it’s almost definitely fake. If we don’t detect that it’s fake, that either means it’s authentic or it’s using new techniques that our deep fake detection capability is not yet able to recognize.
The other point I’d try to stress is, it is a cat and mouse game. There are going to be better deep fake creation techniques, and we’ll have better deep fake detection techniques. We can also use a wide range of deep fake detection techniques that look at different approaches. For example, we can look at markers for the altered video itself. Some of the algorithms are looking for inconsistencies in the video. But then there are other, more advanced solutions that track the mannerisms or gestures of certain candidates, so we can look for inconsistencies of — would this candidate have made these arm motions? Are they typical? The algorithms can track and create clusterings for the other videos on file for a candidate, and then determine whether the submitted video is an outlier.
Another thing we suggest to the media is if somebody submits a video that occurred in a public setting, to try to verify through multiple unique sources. If a candidate said something at a rally, get video from multiple cell phones. It’s going to be much harder to fabricate a video from multiple angles and get all of the physics exactly right when you have multiple cameras shooting the same event simultaneously. Putting all of these things together will help us authenticate whether or not we should trust video related to the campaign.
The next one we talk about is related to disinformation. We saw, about a week ago, the FBI reported that there are intimidation campaigns, where nation-states, per the FBI’s attribution, are intimidating voters, attempting to either change the way a voter votes or discredit the election process.
We’ve also seen that the websites that are hosting information about the election, run by local and state governments, are often lacking some of the most basic cyber-hygiene capabilities that we’d expect. For example, we ran a report that showed the vast majority of local election websites are not using .gov domain addresses, which means that it’s very difficult to tell whether you’re going to a legitimate local election site or you’re going to a fake site. A fake site could do very simple things to suppress votes, such as changing the time the polls are open, changing the polling locations, changing information on eligibility requirements for voting, or changing information on the candidates. There’s no way to tell, if you’re a typical voter, whether votededden.com or vote-dedden.com is the “correct” site, one giving fake information and the other giving real information.
The other hygiene element we saw severely lacking: About half the sites are not using HTTPS. HTTPS both encrypts data, so that personal information going from a voter to the site is protected, and ensures the integrity of important data coming back from the site, so that it is not tampered with. There are a number of attacks where you can impersonate a site and change the information; those integrity attacks are much easier if a site is not using HTTPS.
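The two failure modes described above, lookalike domains and missing hygiene markers, are simple to illustrate in code. A minimal sketch follows; the hygiene check and the `example.gov`/`.com` domains are hypothetical illustrations, not a real validation tool, while the lookalike pair is the one from the interview.

```python
import difflib
from urllib.parse import urlparse

def hygiene_check(url: str) -> dict:
    """Check the two basic markers discussed above: HTTPS and a .gov domain."""
    parts = urlparse(url)
    host = parts.hostname or ""
    return {"uses_https": parts.scheme == "https",
            "gov_domain": host.endswith(".gov")}

# A hypothetical county election site vs. a hypothetical imposter.
print(hygiene_check("https://elections.example.gov"))    # both markers present
print(hygiene_check("http://example-county-votes.com"))  # both markers absent

# The lookalike pair from the interview differs by a single character,
# which is exactly why a typical voter can't tell which one is real.
similarity = difflib.SequenceMatcher(None, "votededden.com",
                                     "vote-dedden.com").ratio()
print(f"lookalike similarity: {similarity:.2f}")
```

Real election-site validation is of course more involved; the point is only that the markers Grobman names are mechanical enough to check, while lookalike domains are nearly indistinguishable by eye.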
Above: Ransomware was first detected in 1989.
VentureBeat: That sounds like a tough one to get around, especially if you’re just Google-searching for things.
Grobman: It’s the exact point. Instead of Googling, we recommend voters start from a trusted Secretary of State’s website. There’s typically going to be a list of all the local websites from the Secretary of State’s website. If you’re a resident of Texas, start at the Texas Secretary of State and find your county. There will be a link from the Secretary of State’s website to your county. That’s the link you should follow.
Voters also need to be very skeptical of email. Election boards are not typically going to email you with logistics information on where, when, and how to vote. If you get an email that says, “Reminder, tomorrow is election day. This year, due to COVID-19 we’ve moved the polling location 55 miles away,” stop before you drive 55 miles out into the country to vote. It’s likely a fake email. Those are the types of things voters need to be aware of as we get closer to November 3.
The fifth one is, we’ve talked a lot in the past about denial of service attacks, attacks on things like critical infrastructure. We need, as a nation, to be ready for a critical infrastructure attack that could target specific areas of the country in order to tilt the vote. A critical infrastructure attack in a rural area to suppress Republican votes, a critical infrastructure attack in urban areas to suppress Democratic votes — in a close election in a state that is going to be very close from a voting perspective, and given the fact that the Electoral College gives all electoral votes for a state — except for Maine and Nebraska — as winner take all, disrupting portions of a state and giving voters a reason to stay home because they need to wait for the heat to come back on, or creating traffic jams due to lights going out, those are types of things we need to be aware of.
The good news is, federal agencies like DHS are very much on alert looking for these types of attacks. We will hopefully be able to respond very quickly if anything like this does occur. But really, all federal, state, and local authorities need to be on their A game for the next week.
And finally, we want to remind people that attribution is difficult. When and if we see cyber activity during the election cycle, jumping to conclusions as to who is behind it is difficult. It’s something that needs to be left to trusted federal agencies. One of the things that’s unique about cyber is, given that your evidence is digital, it’s easy to fabricate fake evidence to point to some other entity than the one that executed the attack. We call this a false flag.
If country A wanted to make it look like country B was manipulating the election, going back historically and analyzing the way that country B had executed attacks in the past and setting up a scenario with some of the markers that have been used before is very possible. We’ve seen elements of this even recently, called out by the FBI in the indictments of some of the Russian actors that came out a few weeks ago, where some of those attacks were meant to look like China or North Korea at work. Given that we’re in an election cycle where different countries are inferred to be supporting different candidates, we need to recognize that attribution is something to be careful with, generally using a combination of digital forensic evidence and information that would only be available to law enforcement and the U.S. intelligence community by investigating things that are not generally in the public domain.
VentureBeat: There is the problem that the president of the United States [or] his advisors are sometimes the source of the disinformation. I’m not so sure exactly how people check up on that, other than listening to reputable news sources.
Grobman: Relying on the media to fact-check all information and ensure that we can trace evidence back to the underlying source that is verifiable is incredibly important. Operating on conjecture, innuendo, or other information that is not verifiable is something that the media and voters should be very careful of. It’s important that we have a free and fair media that’s able to fact-check and dig into the data. That’s very important to supporting the U.S. democracy.
VentureBeat: When you think of more low-tech and simple disinformation campaigns and you compare it to things that are a lot more sophisticated, with the technology available now, what do you think about that? Do you think that those are still worth worrying about?

Grobman: They’re worth worrying about. But what I will say is, we see with cyberattacks, generally, a cyber-adversary will use the simplest approach to achieve their goals. If you can steal somebody’s data with a very simple attack, like a spearphishing attack, you won’t go to the trouble of engineering a high-tech solution. Additionally, for some of these more elaborate attacks, where a nation-state might need to use vulnerabilities that only they are aware of, once you exploit a vulnerability you’ve burned it. You can’t use it in the future. Unless an adversary feels that they’re unable to meet their objective using the simpler approaches, there are incentives to keep in your back pocket the more sophisticated and elaborate techniques.
With that said, it’s certainly plausible that an adversary might see the stakes for this election cycle as being high enough that they’re willing to pull out some of their more powerful capabilities and use them. Unfortunately we don’t have any deterministic predictors of which of those scenarios will play out until after it happens.
Above: A deep fake of Tesla CEO Elon Musk.
VentureBeat: You’re saying this right before the election. Have you detected a lot more activity in recent days that makes it necessary to speak up?

Grobman: McAfee has been focused on election security for more than two years. We started calling out concerns back in the 2018 midterm elections. We’ve been focused on educating the general public on what to look out for and how to think about election security. We’re moving into the final week of the election, and clearly, if adversaries wanted to create scenarios of disruption, this would be one of the higher-probability weeks that would occur. One of the key reasons we’re talking about it right now is just to make sure that voters understand what to look for, and that all of our state, local, and federal officials are preparing as strongly as they can for every possible scenario.
Intel launches Iris Xe Max graphics chip with 3 laptop makers | VentureBeat (2020)
https://venturebeat.com/business/intel-launches-iris-xe-max-graphics-chip-with-three-laptop-customers
Intel has launched its Iris Xe Max graphics chip and announced that Acer, Asus, and Dell have signed on as customers. The three laptop makers are debuting thin-and-light laptops with the chip today.
Intel’s first standalone graphics chip uses the same microarchitecture as its Iris Xe graphics, the integrated graphics in its 11th Gen Intel Core mobile processors. But this new graphics chip is more powerful, with 96 execution units, a frequency of 1.35GHz, and built-in encoding capability. The latter is good for the target market of content creators because it helps them process videos faster, marketing director Darren McPhee said in an interview with GamesBeat.
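The published specs allow a rough peak-throughput estimate. The assumption below (not stated in the article) is the usual Xe-LP layout of 8 FP32 lanes per execution unit, each capable of one fused multiply-add (2 ops) per clock:

```python
# Back-of-the-envelope peak FP32 throughput for Iris Xe Max.
# Assumption (not from the article): Xe-LP EUs are 8-wide FP32, 1 FMA/clock.
eus = 96          # execution units, per the article
freq_ghz = 1.35   # clock frequency, per the article
lanes, ops_per_fma = 8, 2

tflops = eus * lanes * ops_per_fma * freq_ghz / 1000
print(f"~{tflops:.2f} TFLOPS peak FP32")
```

Under those assumptions the part lands at roughly 2 TFLOPS of peak single-precision compute, which is thin-and-light territory rather than gaming-laptop territory.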
“We’re targeting this solution at mainstream thin light notebooks and specifically the mobile creator,” McPhee said. “It’s going to be a long-term strategic play with this first wave of the product. We’re focusing on how can we enhance the creator capabilities in this space.” The chip has been under design for a few years, and the new discrete graphic processing unit (GPU) will allow Intel to better compete with standalone graphics chipmakers Nvidia and Advanced Micro Devices. Intel says this is just the beginning of a strategic push into the graphics chip market in 2021.
Above: Intel Iris Xe Max graphics capability.
The Iris Xe Max features Intel Deep Link technology, which helps Intel’s central processing units (CPUs) work well with its GPUs, using a common software framework to boost performance for things like video encoding. Intel said its GPU can do Hyper Encode for up to 1.78 times faster encoding than Nvidia’s GeForce RTX 2080 graphics card for desktops. The 2080 consumes more power than Intel’s graphics solution, but the 2080 is now being replaced by Nvidia’s new GeForce RTX 3080 GPUs. Intel said it hasn’t been able to test against the 3080.
Starting today, Intel Iris Xe Max graphics are available in the Acer Swift 3x, Asus VivoBook Flip TP470, and Dell Inspiron 15 7000 2-in-1 laptops. These are the first devices to leverage 11th Gen Intel Core mobile processors, Intel Iris Xe Max graphics, and Intel Deep Link technology.
Above: Intel’s graphics chip vs. Nvidia’s laptop graphics chip.
Intel said the combined tech can run AI-based creation software seven times faster than similar laptops configured with third-party graphics.
As for games, Intel said the Iris Xe Max also delivers 1080p graphics in a wide variety of popular games. Intel claimed its GPU can outperform Nvidia MX350 notebook GPUs in those games; the MX350 consumes about the same amount of power as Intel’s GPU, while Nvidia’s 3000-series GPUs consume a lot more.
Intel is offering promotions that bundle creator applications and games, starting on November 3.
After Intel expands the rollout of the GPUs in early 2021, it will launch its DG2 graphics chips, which are targeted at desktop graphics performance. Intel hasn’t shared performance levels or said when those chips will debut. But it did say it will bring Xe discrete graphics to value desktops in the first half of 2021. Intel believes the Xe graphics architecture will scale from the low-end to high-end graphics markets, spanning everything from gaming to datacenter graphics.
Intel’s 11th Gen Intel Core S-Series desktop processors, codenamed Rocket Lake, will debut in 2021.
“I’d say they are making solid progress. It’s not going to beat an Nvidia RTX 2060 on game performance, but they have a good solution for thin-and-light productivity notebooks,” Tirias Research analyst Kevin Krewell said in an email to GamesBeat. “It’s a better solution for creative workers that want the discrete GPU. I like the unified software stack with dynamic power-sharing. The Deep Link ability to have both integrated and discrete Iris Xe graphics working together for video and deep learning tasks is a plus. I’d say it’s a decent start, but it’s not for gaming laptops.”

The Iris Xe Max chip is made with Intel’s 10-nanometer SuperFin manufacturing technology. Intel is also working on its Xe HPG high-performance graphics chips, which consume more power and can run high-powered desktops.
GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it.
Discover our Briefings.
Join the GamesBeat community! Enjoy access to special events, private newsletters and more.
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
4,021 | 2,020 |
"How technology and policy can mitigate climate effects in an age of colliding crises | VentureBeat"
|
"https://venturebeat.com/business/how-technology-and-policy-can-mitigate-climate-effects-in-an-age-of-colliding-crises"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How technology and policy can mitigate climate effects in an age of colliding crises Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Today, the federal government spends less than $9 billion annually on energy innovation, which is less than a quarter of what it invests in health innovation and less than a tenth of what it invests in defense innovation. As we sit at a crossroads of an unprecedented confluence of challenges — from a public health crisis, to a leadership crisis, to a climate crisis, to a racial equity and social justice crisis — it’s time we look for new solutions to solve some of our most urgent problems. Our leaders must explore the ways that energy resiliency and climate action can help to see us through these critical times and create a new normal where a resilient, reliable, and affordable energy system powers our economy, safeguards our public health, and provides a path to social and economic mobility.
One part of the equation requires increased federal support for the growing climate technology sector aimed at creating a resilient energy system to support America’s people and economy. The Department of Energy estimates that weather-related power outages alone cost the U.S. economy $18 billion to $33 billion per year. (These estimates were made before the recent years of wildfires and public safety power shut-offs in California.) It’s not just the “coastal elites” who are suffering: Extreme weather is a threat to life, livelihoods, and the consistent supply of electricity in the Midwest and Rust Belt as well. To make energy resilience the centerpiece for our national recovery, we should push legislation through Congress that focuses on the following four areas:

1. Creating a modern, self-healing smart grid
2. Protecting the grid from cyberattack
3. Fostering a series of microgrids to create local energy resilience
4. Incentivizing restorative behaviors from large electricity consumers

The first aspect of this legislation would be to create a federal energy resilience grant program that covers different aspects of resilience throughout the energy system. Such a program, an expanded successor to the Smart Grid Investment Grant from the American Recovery and Reinvestment Act, would help fund transformational efforts in each of those four major areas, with prioritization on projects that impact multiple areas. Federal grants could be awarded to state energy regulatory agencies or directly to utilities.
The second element would be creating an energy resilience data hub. This hub could be hosted by the Energy Information Administration and the Office of Cybersecurity, Energy Security, and Emergency Response (CESER) and would collect and organize information from around the country that could foster a better understanding of energy threats, responses, and best practices.
Finally, the legislation should establish a Presidential Award in each of these four areas to be awarded annually. This award (coordinated between the White House and CESER) would highlight the year’s best energy resilience efforts as a means of raising awareness of the issue and encouraging ambitious action from the research, development, and tech communities.
A resilient, low-carbon economy must also be built upon the foundation of justice and equality. According to the Asthma and Allergy Foundation of America, African Americans are almost three times more likely to die from asthma-related causes than their white counterparts. And nearly one in two Latinos in the U.S. live in counties where the air doesn’t meet EPA public health standards for smog. This type of environmental injustice is apparent across the U.S. and has only been exacerbated by the COVID-19 pandemic.
It’s clear that more stringent environmental regulations are needed to put an end to polluting industries’ disproportionate effects in poor and minority communities. Building on Congressman Raul Grijalva’s (D-Ariz.) Environmental Justice for All Act is a start. We should develop a formal scoring system that prioritizes environmental justice and frontline engagement over dollar-and-cents cost-benefit analysis, as proposed in the Climate Equity Act, which would use data to inform planning and balance the scales. Investment in microgrids and energy storage will also help to reduce the need to operate “peaker plants” in times of highest demand. These plants produce high levels of particulate emissions and other pollutants that exacerbate already-poor air quality and are disproportionately located near low-income communities and communities of color.
Along with new regulations, we need to merge the minds of community organizers, energy companies, renewable energy developers, and environmental organizations who are on the ground in these communities across the country. By working closely with those on the front line of these issues and leveraging their ideas and insights, we can effect policy with real, lasting change.
Progress can be made without congressional approval, massive investments, or new laws.
Finally, to help develop, scale, and fund a path to net zero, Wall Street, venture capitalists, and Big Tech need to be deeply engaged and committed. Old industries, which have been too slow to change, need new tools to tackle climate change. (High-temperature processes like steel and concrete manufacturing are among the most difficult areas to decarbonize, yet the U.S. DOE spends only 6% of its R&D budget on “Industry.”) It’s imperative we increase both “technology push” policies that fund academic research and “market pull” policies that create a path for impact at scale. This requires early-stage investors to help mitigate risk, and a concerted effort from Wall Street and Big Tech to support new ideas, technologies, and companies that are solving some of our toughest climate challenges. Real capital, real commitments, real culture change.
Startups aren’t waiting for federal action. Founders continue to develop new solutions across transportation, energy generation, and industry (which collectively make up ~80% of U.S. emissions).
Proterra is designing and manufacturing electric buses that operate at a lower overall cost than diesel, hybrid, or natural gas vehicles.
Roadbotics (an URBAN-X portfolio company) helps governments better administer their public infrastructure assets by unifying their data on a single cloud platform. Innovation precedes deployment; we need policy that links the two and provides sufficient funding for new solutions to reduce emissions in a major way.
Research by PwC indicates that approximately 6% of total capital invested in 2019 is focused on climate tech, reflecting an increase from $418 million in 2013 to $16.3 billion in 2019.
Major corporations, from BlackRock to Amazon to Softbank, also have the power to effect change, both through investment and deployment of forward-thinking climate technologies and by asserting a benevolent influence on Capitol Hill that demands transparent, long-term, and clear policies for an equitable climate agenda. And yes, large companies like these are increasingly committing to ambitious climate goals. However, it’s our role as citizens, investors, entrepreneurs, and shareholders to demand accountability that they live up to their word.
Today, in the midst of a hotly contested presidential election, we’ve seen the conversation on climate change grow in prominence across the nation. As part of Joe Biden’s $5 trillion “Build Back Better” plan, he calls for a $2 trillion investment and a strong push for energy innovation to drive a low-carbon future. This type of attention to and investment in climate tech is critical. With it, we may finally be able to act on the promise our country holds to take the lead on climate action and be at the forefront of the industries that will define the next century. We can create a robust pipeline for jobs in a low-carbon economy, rather than one pegged to oil and gas. We can have the tools to bridge the yawning equity and environmental justice divide that COVID-19 has laid bare. We can build new companies at scale that bring sustainability-forward solutions to age-old industries.
But to turn this vision into reality requires real leadership, a belief in science, and a true commitment to answer calls for climate action from across the nation. Without strong federal backing, we can’t possibly hope to meaningfully address the society-scale challenge we face. If the last four years are any indication, a second Trump term would mean more inaction, more uncertainty, and more cities and states left to fend for themselves in the face of unprecedented climate disasters.
This election season has been characterized by anxiety, misinformation, and interference from foreign actors.
But in my conversations with swing state voters, I’ve also experienced moments of energy, hope, and clarity. The two candidates couldn’t have more opposed views on the future. Optimism does not come easy, but it’s a choice. And despite the suffering around us and the challenges to come, I’m optimistic that, whether we know it or not, we’ve embarked on a new path that can meet this moment.
Micah Kotch is Managing Director of URBAN-X , an accelerator from MINI and Urban Us for startups that are reimagining city life. He’s a board member of Green City Force, an AmeriCorps program that engages young adults from New York City Housing Authority (NYCHA) communities in national service related to the environment.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
"
|
4,022 | 2,020 |
"Hands-on: Amazon Fresh grocery stores tease brick-and-mortar retail's future | VentureBeat"
|
"https://venturebeat.com/business/hands-on-amazon-fresh-grocery-stores-tease-brick-and-mortar-retails-future"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Analysis Hands-on: Amazon Fresh grocery stores tease brick-and-mortar retail’s future Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
There were no lines outside Irvine, California’s new Amazon Fresh grocery store on its opening day last week, despite the fact that it was only the second such location in the world — and the first to be open to the public on day one. But after early visitors discovered the store’s high-tech shopping carts, two lines formed over the weekend, stretching past Amazon’s front doors to adjacent retailers in the suburban plaza. One line was longer and moving slower than the other.
“Do you want to try the Dash Cart?” an employee asked people near the end of the queue. “If not, you can move into the shorter line, and you’ll get in faster.” We were there specifically for the Dash Cart: Friends told us that it was worth the 10- to 20-minute wait to go hands-on with one of the 25 magical shopping carts, since their integrated touchscreens and cameras were the key to Amazon’s next-generation shopping experience. The Dash Cart felt like the future of brick-and-mortar retail, they said, even though the rest of the store wasn’t that amazing.
Our friends were correct, but there’s more to the Amazon Fresh story than just Dash Carts. Here’s what it’s like to visit the supermarket of the future, today, as it’s been dreamed up and implemented by Amazon.
A boxy, spartan layout, seemingly by design

Unlike Whole Foods, the high-end supermarket chain Amazon acquired in 2017, Amazon Fresh stores look like small warehouses, and have all the charm of Walmart’s grocery sections, minus two-thirds the people and half the choices. From the signage to the aisles and specialty counters, Amazon’s latest store feels as if it was designed largely by engineers, and conceived to be as easy as possible to retrofit inside another retailer’s abandoned space — in this case, the 40,000 square feet formerly occupied by a Babies R Us store.
Apart from the produce, nothing about the environment feels organic: Floors are spartan, displays are boxy, and everything looks to have been optimized for customers by computers, rather than humans. There are places to purchase whole cooked chickens for $4.97 and pizzas for $8.99, but nowhere to sit and eat them. A staffed customer service area is in the back, not the front, which instead allocates a lot of interior space to managing shopping carts.
Even the baked goods, which in other stores flow attractively off the edges of store shelves, seem to have been assigned to a specific corner of Amazon Fresh and told to stay firmly within the lines. If it wasn’t for a cadre of friendly greeters, walking through Amazon Fresh would feel more like visiting a warehouse or Costco than shopping in a typical supermarket of its size.
That feeling extends to how Amazon Fresh uses — and doesn’t use — people in its operations. Instead of having employees answer inventory-related questions, Amazon scatters Alexa terminals throughout the store, offering AI guidance on item locations, wine pairings, and measurement unit conversions. On our first visit, the Alexa terminals were both working and helpful, accurately pointing us towards items we wanted to locate. But on our second visit, all of the terminals were experiencing “connectivity issues,” perhaps the closest Amazon Fresh stores will get to a business-disrupting employee strike.
The Alexa terminals suggest that Amazon wants to staff Fresh stores as leanly as possible, even if it’s liberally using employees during the launch phase to address potential customer pain points. There were lots of Fresh staffers — too many, really — constantly restocking shelves while otherwise keeping to themselves, plus the aforementioned greeters at the front doors to help get people in and out of the store. In traditional supermarkets, all of these employees might be floaters who move from place to place as needed, alternating between helping customers and restocking shelves. But at Amazon Fresh, Alexa could help reduce the need for greeters as customers become familiar with the technology, and shelving recalibrations could reduce the need for such frequent stock replenishment.
Lower labor costs could translate directly into lower prices. And half of the new store’s appeal is reasonable pricing — that’s the single biggest problem with Whole Foods , which offers well-heeled customers an impressive selection of high-end foods and beverages that just aren’t affordable to the masses. By contrast, Amazon Fresh is clearly aimed at middle-income shoppers who still want to do some of their purchasing and browsing in person instead of on a computer screen. There are a handful of fancy items on the shelves, such as $10 pints of McConnell’s ice cream, but most of the signage is directed towards selling 15-cent bananas and 89-cent loaves of bread, rather than champagne and caviar.
Dash Cart as a solution and a problem

The more exciting part of Amazon Fresh is the Dash Cart, a shopping cart that uses sensors and smartphone technology to replace checkout lanes and standalone produce scales. As mentioned above, you don’t have to use a Dash Cart to shop at Amazon Fresh, and despite its speedy name, you’ll likely get in and out of the store faster without waiting for one. But without a Dash Cart, the shopping experience isn’t hugely different from any old small suburban supermarket you’ve previously visited.
Once you make it through the Dash Cart waiting line, you’ll get a three-minute human tutorial that explains how to link your cart to your Amazon app with a QR code, scan packaged items by dropping them into one of two included paper bags, and add produce items by inputting four-digit PLU codes into the cart’s tablet screen. These steps are supposed to eliminate the need for employees to check you out and bag your purchases; instead, the cart’s cameras and scale track everything you place in the bags, so when you leave the store, your Amazon account is automatically charged for whatever you bagged yourself. It’s an evolution of what Amazon pioneered with much smaller Amazon Go stores years ago.
Dash Carts are cool in concept, but their execution leaves a lot to be desired. On a positive note, their cameras and software did a good job with accurately scanning items we placed in the bags, and automatically removing items if we pulled them from the cart. The tablet-like touchscreen worked as expected, and though the scale inside the cart wasn’t fast, it could — with practice — be faster than walking over to a standalone produce scale and printing out a label for each item.
On the other hand, the Dash Carts had limitations that beg to be resolved in future iterations. Each cart is limited to two bags, which restricts your ability to complete a full shopping trip, and limits Amazon’s maximum take per shopper. You can’t overfill the bags, lest the cart’s cameras become incapable of seeing what’s inside. Additionally, Amazon is so concerned about theft or damage that it swaps each Dash Cart for a regular one before customers leave for the parking lot, or hands you the bags to carry to whatever distant parking space you selected. These are the sorts of practical inconveniences that could kill Dash Cart’s utility for some people.
Real-world glitches also undermined our Dash Cart experience. One of our bags ripped and needed to be replaced during the cart-to-cart transfer. We also had to go through a manual checkout line — including rescanning and rebagging every item — because our cart’s integrated code scanner couldn’t recognize an Amazon coupon. Staff said that the carts were somewhat finicky and had been experiencing hiccups like this.
Whenever one of these issues with the Dash Cart popped up, we felt as if we were holding up people who were waiting behind us, even though the issues weren’t really our fault. Other delays, such as learning how to enter PLU codes and weigh produce, caused Dash Cart users to abruptly stop mid-aisle and fidget with the tablet’s screen. We noticed some customers without the high-tech carts becoming visibly frustrated with other customers’ Dash-inspired touch interactions, but cart users seemed to be too focused on their screens to notice.
Could data make all the difference for future retailers?

It’s easy to overlook a key element of this retail experience — the intersection between Amazon.com and Amazon Fresh — because it’s so muddled at the moment. But it could wind up being a critical differentiator for Amazon’s brick-and-mortar ventures going forward.
As part of the initial onboarding experience, Amazon openly encourages Dash Cart users to digitally manage their shopping lists with the cart and browse current in-store specials using their phones. This is a mess for two reasons: The cart’s integrated shopping list management software is extremely limited, and the idea of asking users to check not just one but two touchscreens while they’re shopping is just straight-out crazy. No one wants to be stuck behind that guy who’s blocking shelves or freezers while browsing through lists and brochures. If Larry David ever visits Amazon Fresh, there’s enough material here for an entire Curb Your Enthusiasm sub-plot.
Yet there’s obvious value in tying the internet directly — and more thoughtfully — to a customer’s shopping cart. Your first visit to an Amazon Fresh store could conceivably be your last trip through its aisles: Amazon could just present you with a list of the items you purchased, offer to reorder them, and make them instantly available for either pickup or delivery. That could eliminate the need (and the premium people currently pay) for Instacart.
It also could reduce the footprints of future Amazon Fresh stores by lowering the number of people who simultaneously walk through them, enabling many customers to complete transactions using the equivalent of drive-through windows.
Amazon is technically doing some if not most of these things already, but it needs to refine its smartphone and cart software to make the end-to-end experience intuitive and frictionless for customers. Somewhat ironically, the sign that it has succeeded will be if its Fresh grocery stores aren’t packed with people but are still hugely profitable, which is to say that they’ll be moving tons of products without the packed aisles and long lines normally associated with successful supermarkets.
The best of the rest of Amazon, plus coupons

One thing we loved at Amazon Fresh was an area labeled “Customer Service, Returns & Pick Up.” Normally, these things are found very close to the entrance of a supermarket, but at Amazon Fresh, they’re in the back, a decision that was likely made to get returns and pick-ups closer to the store’s storage areas and loading docks. Customers can pick up items from Amazon lockers and drop off Amazon returns — conveniences that simultaneously provide an incentive to do grocery shopping while eliminating the need to visit standalone Amazon shipping and return locations, something we can see ourselves using at least occasionally.
Amazon Fresh also includes a limited selection of the online retailer’s popular gadgets and books. We spotted the José Andrés cookbook Vegetables Unleashed and the Death & Co. cocktail guide on shelves only a short distance away from Fire tablets and Echo speakers, none of which we were looking to purchase at a supermarket — but then, we had already bought some of them online in the past. Over time, new items will replace them, and we might have reason to consider buying non-grocery goods at Amazon Fresh, as well.
At this stage, it would be hard to describe Amazon Fresh as the guaranteed future of brick-and-mortar retailing; the experience currently feels closer to a public beta test than a fully formed and polished business. Visitors can certainly have a normal or even a unique experience in the store, but they’re actually guinea pigs in a grand experiment that runs smoothly — until it doesn’t.
To Amazon’s credit, the speed bumps aren’t too daunting. Moreover, the company is actively addressing problems by handing out coupons to apologize for technical issues, and on one of our two visits, was giving away free cans of sparkling water and refrigerator magnets to everyone exiting the store. Despite the glitches, we didn’t see anyone leaving the store angry, and between the coupons and the small number of bags we left with, we were already planning our next trip to the store as we walked out to our car.
Only Amazon knows whether such a mixed but positive impression counts as “mission accomplished” or whether its early Amazon Fresh grocery customers are just helping it refine a larger campaign to completely dominate the retail world.
Thanks to Amazon’s growing scale and unquestionable ambition, the Fresh grocery stores could either become very real challengers to traditional supermarkets — or fizzle out as experiments that made little difference to the company’s bottom line.
If you’re interested in seeing Amazon Fresh for yourself, you can visit the new store at 13672 Jamboree Road in Irvine, or the first location — open to the public since September — at 6245 Topanga Canyon Boulevard in Woodland Hills, California. Both stores are open from 7 a.m. to 10 p.m., seven days a week.
"
|
4,023 | 2,020 |
"Google's AI converts webpages into videos | VentureBeat"
|
"https://venturebeat.com/business/googles-ai-converts-webpages-into-videos"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google’s AI converts webpages into videos Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Researchers at Google say they’ve developed an AI system that can automatically convert webpages into short videos. It extracts assets like text and images, and their design styles (fonts, colors, and graphical layouts), from HTML sources, then organizes the assets into a sequence of shots, maintaining a look and feel similar to the source page as it does so.
Google envisions the system could be useful to businesses that host websites containing rich visual representations about their services or products. These assets, the company says, could be repurposed for videos, potentially enabling those without extensive resources to reach a broader audience. A typical video costs between $880 and $1,200 and can take days to weeks to produce.
URL2Video, which was presented at the 2020 User Interface Software and Technology Symposium, automatically selects key content from a page and decides the temporal and visual presentation of each asset. These presentations come from a set of heuristics identified through a study with designers, and they capture video editing styles including content hierarchy, constraining the amount of information in a shot and its time duration while providing consistent color and style for branding. Using this information, URL2Video parses a webpage, analyzes the content, selects visually salient text or images, and preserves design styles, which it organizes according to a user’s specifications.
URL2Video extracts document object model information and multimedia materials on a per-webpage basis, identifying visually distinguishable elements as a candidate list of asset groups containing headings, product images, descriptions, and call-to-action buttons. The system captures both the raw assets (i.e., text and multimedia files) and detailed design specifications (HTML tags, CSS styles, and rendered locations) for each element and then ranks the asset groups by assigning each a priority score based on their visual appearance and annotations. In this way, an asset group that occupies a larger area at the top of the page receives a higher score.
URL2Video automatically selects and orders the asset groups to optimize the total priority score. To make the videos concise, the system presents only dominant elements from a page, such as a headline and a few multimedia assets, and constrains the duration of elements. Given an ordered list of assets based on the DOM hierarchy, URL2Video follows the heuristics obtained from the design study to make decisions about both the temporal and spatial arrangement. The system transfers the layout of elements into the video’s aspect ratio and applies the style choices including fonts and colors, adjusting the presentation timing of assets and rendering the content into an MPEG-4 video.
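The ranking-and-selection step described above can be sketched as a toy heuristic. This is a hypothetical simplification, not Google's actual implementation: the field names, weights, and greedy selection strategy here are all assumptions made for illustration.

```python
# Toy sketch of URL2Video-style asset selection: rank asset groups by a
# priority score favoring large elements near the top of the page, then
# greedily fill a fixed video duration and restore document order.

from dataclasses import dataclass

@dataclass
class AssetGroup:
    name: str
    area: float      # rendered area in px^2 (larger = more salient)
    top: float       # distance from top of page in px (smaller = higher)
    duration: float  # seconds this asset would occupy in the video

def priority(asset: AssetGroup, page_height: float) -> float:
    """Score favors large elements placed near the top of the page."""
    position_weight = 1.0 - asset.top / page_height
    return asset.area * (0.5 + 0.5 * position_weight)

def select_assets(assets, page_height, max_seconds):
    """Greedily pick the highest-priority assets that fit the duration cap."""
    ranked = sorted(assets, key=lambda a: priority(a, page_height), reverse=True)
    chosen, used = [], 0.0
    for a in ranked:
        if used + a.duration <= max_seconds:
            chosen.append(a)
            used += a.duration
    # Restore document (top-to-bottom) order for the final shot sequence.
    return sorted(chosen, key=lambda a: a.top)

page = [
    AssetGroup("headline", area=90_000, top=100, duration=3),
    AssetGroup("hero-image", area=400_000, top=300, duration=4),
    AssetGroup("description", area=60_000, top=900, duration=3),
    AssetGroup("cta-button", area=20_000, top=1200, duration=2),
]
shots = select_assets(page, page_height=2000, max_seconds=10)
print([a.name for a in shots])  # the low-priority CTA is dropped to fit 10s
```

A greedy fill is the simplest way to honor the "constrain the amount of information in a shot and its time duration" heuristic; the real system also applies branding styles and renders the result to MPEG-4.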
Google says that in a user study with designers at Google, URL2Video effectively extracted elements from a webpage and supported the designers by bootstrapping the video creation process. “While this current research focuses on the visual presentation, we are developing new techniques that support the audio track and a voiceover in video editing,” Google research scientists Peggy Chi and Irfan Essa wrote in a blog post.
“All in all, we envision a future where creators focus on making high-level decisions and an ML model interactively suggests detailed temporal and graphical edits for a final video creation on multiple platforms.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
4,024 | 2,020 |
"Google parent Alphabet returns to sales growth in Q3 2020 as advertising recovers from the pandemic | VentureBeat"
|
"https://venturebeat.com/business/google-parent-alphabet-returns-to-sales-growth-in-q3-2020-as-advertising-recovers-from-the-pandemic"
|
"Google parent Alphabet returns to sales growth in Q3 2020 as advertising recovers from the pandemic
Alphabet CEO and Google CEO Sundar Pichai
(Reuters) — Google parent Alphabet on Thursday powered back to sales growth in the third quarter, beating analysts’ estimates as businesses initially hobbled by the coronavirus pandemic resumed advertising with the internet’s biggest supplier of ads. Alphabet shares rose 8% after ending regular trading at $1,556.88, up 13% on the year.
Wall Street had expected a rebound from Alphabet because the company said in July that advertiser spending was inching back, following a March plummet due to lockdowns. Google competitors Snap and Microsoft also reported third quarter revenue ahead of expectations in recent days. Alphabet’s third quarter revenue growth reflected a bump in spending across each of its key ads businesses, including search, YouTube, and partner properties.
CFO Ruth Porat said the company saw upticks from advertisers across all regions and industries. But she did not say whether the trends showed signs of slipping as Europe and other areas tackle significant increases in coronavirus infections. “While we’re pleased with our performance in the third quarter, there is obviously uncertainty in the external environment,” Porat said. She told financial analysts the company would not slow down spending on its cloud computing unit and other areas, even if another round of COVID-19 lockdowns hit ad demand.
Google’s search engine and YouTube video service are gateways to the internet for billions of people and have become more essential as users transact and entertain online to avoid the virus. Advertisers turned to Google’s ad system to let shoppers know about deals and have adjusted service offerings as the economy has begun to chug along again.
EMarketer principal analyst Nicole Perrin said YouTube’s year-over-year sales growth, which was 32% compared with about 6% in the second quarter, pointed to “advertisers’ continued desire for video inventory, the return of brand spending, and notable increases in political ad spending” amid the U.S. presidential election.
Google’s cloud business was about flat with the second quarter, as were the company’s sales of apps, hardware, and content subscriptions. Alphabet said it would elevate cloud into a separate reporting unit starting in the fourth quarter, effectively dropping cloud sales and expenses from its Google unit. In recent months, Google had aimed to stoke advertising by not charging merchants for some promotional space and issuing grants to help other businesses buy ads. The efforts followed the company’s first sales decline (compared with a year-earlier period in the second quarter) since going public in 2004.
But the dominance of Google services has become a liability for the company too. The U.S. government last week sued the company for operating a search monopoly and stifling competition. Other regulators in the United States and elsewhere have ongoing investigations into similar allegations. The various cases could lead to Google having to divest some of its ad business in the coming years, though financial analysts doubt it will happen.
Google’s ad business accounted for 80% of Alphabet’s $46.2 billion in revenue in the third quarter. Analysts had expected $42.9 billion in revenue, or 5.9% growth from a year ago. Alphabet’s profit was $11.2 billion, or $16.40 per share, compared with the average estimate of $7.698 billion, or $11.18 per share, among analysts tracked by Refinitiv.
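As a back-of-envelope check on the figures in this paragraph (a reader's calculation, not from Alphabet's report), the analyst estimate and its stated growth rate imply the year-ago revenue, from which the actual growth rate follows:

```python
# Sanity-check the Alphabet Q3 2020 figures (values in $ billions).
revenue_q3_2020 = 46.2
analyst_estimate = 42.9            # stated as 5.9% growth from a year ago
implied_q3_2019 = analyst_estimate / 1.059
actual_growth = revenue_q3_2020 / implied_q3_2019 - 1

ad_revenue = 0.80 * revenue_q3_2020  # ads were 80% of total revenue

print(f"implied year-ago revenue: ${implied_q3_2019:.2f}B")
print(f"actual YoY growth: {actual_growth:.1%}")
print(f"ad revenue: ${ad_revenue:.2f}B")
```

So the beat over estimates corresponds to roughly 14% actual year-over-year growth, well above the 5.9% analysts had modeled.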
Google competitors Facebook , Amazon.com , and Twitter also released financial results on Thursday that were above expectations, showing how internet companies have fared well through the pandemic. Facebook shares on Thursday were up 30% this year, Amazon up 71%, and Twitter up 51%.
Alphabet’s total costs and expenses rose 12% from a year ago to $35 billion in the third quarter, compared with a 7% jump a quarter ago. Capital expenditures dropped 20% to $5.4 billion, compared with a 12% drop last quarter.
"
|
4,025 | 2,021 |
"Far Cry 6 and Rainbow Six: Quarantine delayed past Q1 2021 | VentureBeat"
|
"https://venturebeat.com/business/far-cry-6-and-rainbow-six-quarantine-delayed-past-q1-2021"
|
"Far Cry 6 and Rainbow Six: Quarantine delayed past Q1 2021
"El Presidente" Anton Castillo (left) and his son talk about power in Far Cry 6. Actor Giancarlo Esposito plays Castillo.
Far Cry 6 had a release date of February 18, 2021, but Ubisoft now plans to release the open-world adventure during its next fiscal year, which begins April 2021 and ends March 2022. The unfortunately named Rainbow Six: Quarantine is also still coming, but the publisher has now scheduled it for that same period. Ubisoft announced the delays in its first-half financial report today, attributing them to production complications caused by the pandemic and the shift to working from home.
“Despite having moved Far Cry 6 and Rainbow Six Quarantine to 2021-22 to leverage their full potential in the context of production challenges caused by COVID-19, our new non-IFRS operating income targets for 2020-21 remain within the boundaries we set back in May,” Ubisoft chief financial officer Frédérick Duguet said. “Being able to maximize the long-term value of our IPs while at the same time maintaining solid financial targets highlights the increasing recurring nature of our revenues, the strength of our portfolio of franchises, confidence in our holiday season release slate, and current supportive industry dynamics.” If you’ll allow me to translate that — Duguet is saying that Ubisoft is making a lot of extra cash right now due to the pandemic. People are turning to games for safe, at-home entertainment. And this has led to a surge in revenue for the publisher’s ongoing live-service games like Rainbow Six: Siege. But it’s also led to an increase in sales for catalog releases like Assassin’s Creed: Odyssey.
People are also buying more games digitally. This increases the profit margin of each game sold, which is yet another example of the trends moving favorably for gaming publishers.
This enables Ubisoft to delay games without taking a major hit.
Ubisoft still has Watch Dogs: Legion, Assassin’s Creed: Valhalla, and more coming before the end of March 2021.
"
|
4,026 | 2,020 |
"Facebook beats analyst estimates for Q3 2020 revenue despite ad boycotts | VentureBeat"
|
"https://venturebeat.com/business/facebook-beats-analyst-estimates-for-q3-2020-revenue-despite-ad-boycotts"
|
"Facebook beats analyst estimates for Q3 2020 revenue despite ad boycotts
(Reuters) — Facebook Thursday warned of a tougher 2021, despite beating analysts’ estimates for quarterly revenue as businesses adjusting to the global coronavirus pandemic continued to rely on the company’s digital ad tools. The world’s biggest social media company said in its outlook that it faced “a significant amount of uncertainty,” citing impending privacy changes by Apple and a possible reversal in the pandemic-prompted shift to online commerce.
“Considering that online commerce is our largest ad vertical, a change in this trend could serve as a headwind to our 2021 ad revenue growth,” it said. Shares of the company were flat in extended trading.
Facebook’s financial results and those of Google and Amazon demonstrate how resilient tech giants have been, even as the pandemic devastated other parts of the economy. The success has earned them extra scrutiny in Washington, where the companies face multiple antitrust investigations. Facebook’s total revenue, which primarily consists of ad sales, rose 22% to $21.47 billion from $17.65 billion in the third quarter ended September 30, beating analysts’ estimates of a 12% rise, according to IBES data from Refinitiv.
A July ad boycott over Facebook’s handling of hate speech, which saw some of the social media giant’s biggest individual spenders press pause, barely made a dent in its sales, which mostly come from small businesses. Revenue growth at Facebook, the world’s second-biggest seller of online ads after Google, has been cooling steadily as its business matures, although it came in at more than 20% throughout 2019.
Still, compared to expectations, the company has had a bumper year due to surging use of its platforms by users stuck at home amid virus-related lockdowns, which cushioned online ad sales even as broader economic activity suffered. Facebook continued to grow its user base, with monthly active users rising to 2.74 billion, compared with estimates of 2.70 billion, according to the IBES data, although user numbers declined in North America compared to the second quarter. The company projected that trend would continue for the rest of the year, with user numbers either flat or slightly down in the fourth quarter compared to the third quarter. “It appears that investors are disappointed that despite user growth jumping across most regions during the quarter, the social media platform reported a decrease in users in North America, which covers the U.S. and Canada — its most lucrative ad market,” Investing.com senior analyst Jesse Cohen said.
Total expenses increased 28% to $13.43 billion, with costs continuing to grow as Facebook tries to quell criticism that its handling of user privacy and abusive content is lax. The company has been under especially strong pressure ahead of next week’s U.S. presidential election and is aiming to avoid a repeat of 2016, when Russia used its platforms to spread election-related misinformation.
EMarketer principal analyst Debra Aho Williamson said Facebook remains “a go-to for advertisers” seeking to reach a broad set of consumers, despite its content moderation issues, but said that may change in 2021. “We expect that more advertisers will take a hard look at their reliance on Facebook and will ask themselves whether the environment is safe for their brands,” she said.
Net income came in at $7.85 billion, or $2.71 per share, compared with $6.09 billion, or $2.12 per share, a year earlier. Analysts had expected a profit of $1.90 per share, according to IBES data from Refinitiv.
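The headline figures above can be cross-checked with a couple of lines of arithmetic (a reader's calculation, not from Facebook's filing):

```python
# Verify Facebook's Q3 2020 growth rate and implied share count ($ billions).
revenue_2020 = 21.47   # Q3 2020 total revenue
revenue_2019 = 17.65   # Q3 2019 total revenue
growth = revenue_2020 / revenue_2019 - 1
print(f"YoY revenue growth: {growth:.1%}")  # matches the reported ~22%

net_income = 7.85      # $B
eps = 2.71             # $ per diluted share
implied_shares = net_income / eps  # diluted share count, in billions
print(f"implied diluted share count: {implied_shares:.2f}B")
```

The growth rate reconciles with the reported 22% rise, and the implied share count of roughly 2.9 billion is consistent with Facebook's diluted share base at the time.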
"
|
4,027 | 2,020 |
"Comlinkdata Acquires ShareTracker, Creating Global Provider of Consumer and Business Telecom Insights | VentureBeat"
|
"https://venturebeat.com/business/comlinkdata-acquires-sharetracker-creating-global-provider-of-consumer-and-business-telecom-insights"
|
"Press Release
Comlinkdata Acquires ShareTracker, Creating Global Provider of Consumer and Business Telecom Insights
Combined firm will have unique data and insights to help network operators acquire, retain and delight customers
BOSTON–(BUSINESS WIRE)–October 30, 2020– Comlinkdata, the leader in telecom market analytics, today announced it has acquired ShareTracker, a US-based telecom research and analytics firm. The addition of ShareTracker assets will allow Comlinkdata to help network operators acquire, retain, and delight customers across wireless, wireline, video, broadband, device, and many other segments. The acquisition comes one year after Comlinkdata acquired Tutela, a global crowdsourced mobile network and customer experience data company.
This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20201030005106/en/ The combined company is uniquely positioned to provide global telecom insights in a dynamic market changed by 5G, edge computing, and increasingly remote workforces. The firm’s real-time insights are enabled by an easy-to-use online platform and software tools that enable customers to perform their own analysis, complemented by the knowledge and expertise of Comlinkdata’s client analytics team.
Comlinkdata is backed by Alpine Investors, a private equity firm based in San Francisco.
Charles Rutstein, CEO of Comlinkdata, comments: “This transaction represents a further broadening of Comlinkdata’s capabilities, and an important step in our mission to provide the most clear and insightful market data to wireless and broadband network operators, handset OEMs, and others around the world.” Will Adams, Partner at Alpine Investors, comments: “Alpine is committed to building enduring companies by working with, learning from, and developing exceptional people. We seek to leverage our prior experience in tech-enabled services by investing in world-class companies that deliver value for customers via unique technology and software. In this transaction, we are uniting leading teams with complementary, tech-enabled offerings in data services to provide innovative solutions for the global telecom industry.” Telegraph Hill Advisors served as the financial advisor to ShareTracker and Wilson Sonsini Goodrich & Rosati served as legal counsel to Comlinkdata.
About Comlinkdata Comlinkdata is the leading provider of telecom market data and insights in North America. It provides unique, real-time market performance data and pairs this with a team of telecom-savvy analysts to help clients identify, understand and execute on growth opportunities. The Boston-based company was founded in 2010 and is owned by Alpine Investors.
For more information, please visit www.comlinkdata.com About Alpine Investors Alpine is a people-driven private equity firm committed to building enduring companies by working with, learning from, and developing exceptional people. Alpine specializes in middle-market companies in the software and services industries. Its PeopleFirst™ strategy includes a CEO-in-Training™ and CEO-in-Residence program where Alpine recruits and places high-caliber executives into companies as part of the transaction. This provides a distinct solution for situations where additional or new management is desired post-transaction.
For more information, please visit www.alpineinvestors.com View source version on businesswire.com: https://www.businesswire.com/news/home/20201030005106/en/ For Media Inquiries: Audrey Harris [email protected] 415-591-1334
"
|
4,028 | 2,020 |
"Canalys: Samsung led Q3 smartphone shipments, Xiaomi knocked Apple out of the top 3 | VentureBeat"
|
"https://venturebeat.com/business/canalys-samsung-led-q3-smartphone-shipments-xiaomi-knocked-apple-out-of-the-top-3"
|
"Canalys: Samsung led Q3 smartphone shipments, Xiaomi knocked Apple out of the top 3
Samsung Store in London
Samsung reclaimed its “top smartphone vendor” title from Huawei in Q3 2020, according to the latest figures from Canalys, while Apple slipped into fourth place behind Xiaomi — the first time the iPhone maker has missed the top three in at least 12 years.
While smartphone shipments were down 1% from the corresponding period last year, Canalys figures suggest they increased by 22% from Q2 2020, a period in which much of the world was still in lockdown. Digging into the details reveals some interesting insights about the market and the state of the world in 2020.
Trade turmoil
Huawei held the top spot in Q2, the first quarter in nine years that a company other than Samsung or Apple had been in pole position. This was particularly notable given the U.S. trade ban that has prevented Huawei from using the Google version of Android. However, Huawei’s rise was more indicative of a domestic sales boost in China, where it has never used Google’s Android anyway.
Now it seems Huawei’s international woes have caught up with it, as its newer flagship devices have gone to market minus Google Mobile Services.
Anyone wishing to buy a Huawei flagship online in 2020 will likely see a warning message similar to this:
Above: Huawei warning message on third-party retailer website
Chinese smartphone maker Xiaomi grew its shipments by 45% year-on-year (YoY), shifting 14.5 million more units, while Huawei’s shipments fell by 15.1 million (23%) to 51.7 million. In Europe, which has historically been a key market for Huawei, Xiaomi shipments grew by 88% while Huawei fell 25%.
In the process, Xiaomi reduced the market share deficit between the two companies from around 10 percentage points to less than two percentage points.
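The figures quoted above are enough to derive Xiaomi's implied shipment totals and the remaining gap to Huawei. This is a back-of-envelope reading of the Canalys numbers, not additional reported data:

```python
# Derive Xiaomi's implied Q3 2020 shipments from the stated YoY figures
# (millions of units).
xiaomi_added = 14.5           # YoY unit increase
xiaomi_growth = 0.45          # 45% YoY growth
xiaomi_q3_2019 = xiaomi_added / xiaomi_growth
xiaomi_q3_2020 = xiaomi_q3_2019 + xiaomi_added

huawei_q3_2020 = 51.7         # stated directly by Canalys
unit_gap = huawei_q3_2020 - xiaomi_q3_2020

print(f"Xiaomi Q3 2020: ~{xiaomi_q3_2020:.1f}M units")
print(f"Huawei-Xiaomi gap: ~{unit_gap:.1f}M units")
```

A gap of roughly 5 million units on a global market of a few hundred million shipments is consistent with the "less than two percentage points" of market share cited above.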
Above: Worldwide smartphone shipments and annual growth (Canalys Smartphone Market Pulse: Q3 2020)
Xiaomi also joined the ranks of the top three smartphone vendors for the first time, nudging Apple into fourth place in the process. Canalys senior tech analyst Ben Stanton noted that the last time Apple missed the top three was back in the days when Nokia ruled the roost and Apple was battling Motorola for third place.
“Apple did spend some quarters outside of the top three when iPhone was fairly nascent, but it has been a top three vendor for every quarter since Q3 2008, until now,” Stanton told VentureBeat.
Samsung has remained reasonably steady from year to year, with shipments up 2% and its market share growing marginally. However, the quarter-on-quarter (QoQ) story is interesting, as Samsung’s shipments increased by 50% across the two periods. This can be attributed to a number of factors, including the fact that Samsung leans heavily on offline retail channels that were decimated in Q2 due to the global lockdown. The Korean technology titan saw its smartphone shipments fall by an incredible 30% YoY in Q2, compared to just -5% and -10% for Huawei and Xiaomi respectively, while Apple bucked the downward trend by growing its shipments by 25%.
This time around, stores have largely reopened, even if footfall isn’t at prepandemic levels. Moreover, Canalys analyst Shengtao Jin attributed Samsung’s recovery to “pent-up demand” from Q2 spilling over into Q3, while growth in India also helped, as did adding a handful of low- to mid-range smartphones and shifting to more online sales.
Apple announced its latest flagship phones slightly later this year, in October rather than at its usual September launch event, which could have had some impact on these figures. However, it’s difficult to draw any direct correlations between this delay and Apple’s drop to fourth place, as iPhone shipments only declined marginally YoY. These figures are slightly more pronounced when compared to the previous quarter, but Apple had launched a cheaper (and smaller) iPhone SE back in April, which played a big part in its Q2 surge.
The takeaway from this Canalys report is that Samsung’s smartphone brand remains strong, while Xiaomi appears to be capitalizing on Huawei’s U.S. trade tussles. The next quarter will be particularly interesting to watch, as Apple’s latest flagships will have several months to attract users and Huawei will continue efforts to convince consumers they don’t need Google.
"
|
4,029 | 2,020 |
"Apple reports record $64.7 billion revenue in Q4 2020 despite iPhone delay | VentureBeat"
|
"https://venturebeat.com/business/apple-reports-record-64-7-billion-revenue-in-q4-2020-despite-iphone-delay"
|
"Apple reports record $64.7 billion revenue in Q4 2020 despite iPhone delay
While 2020 won’t be remembered fondly in the history books, it has been quite good for Apple , which is ending its fiscal 2020 with record revenues for three of its four quarters. Today, Apple announced record fourth quarter revenues of $64.7 billion. Apple once again cited strength in its growing services business, along with an all-time record for Mac sales, offset by the atypical absence of first weekend iPhone sales to buoy the numbers.
On average, analysts expected Apple’s revenues to be $63.7 billion, a drop of 0.5% from the year-ago quarter, when the company reached a record $64.04 billion in sales despite falling iPhone and Mac earnings. At that point, the company reported accelerated growth in services, as well as healthy sales of iPads and wearables. But Apple beat the predictions by a billion dollars, a 1% increase over the prior year, with earnings per diluted share of $0.73.
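For readers who want to verify the comparisons, the beat and the year-over-year change work out as follows (an illustrative check using only the figures reported in this article, not from Apple's release):

```python
# Compare Apple's reported Q4 FY2020 revenue against the analyst consensus
# and the year-ago quarter (all figures in billions of dollars, as reported above).
actual_q4_2020 = 64.7     # reported revenue
year_ago_q4 = 64.04       # record Q4 FY2019 revenue
analyst_estimate = 63.7   # average analyst estimate

beat = actual_q4_2020 - analyst_estimate
growth = actual_q4_2020 / year_ago_q4 - 1

print(f"Beat estimates by ${beat:.1f} billion")   # roughly $1.0 billion
print(f"Year-over-year growth: {growth:.1%}")     # roughly 1.0%
```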
Apple says it sold $26.444 billion in iPhones, $9.032 billion in Macs, and $6.797 billion in iPads during the quarter, with combined “wearables, home, and accessories” sales of $7.876 billion and services at $14.549 billion. That’s up across most categories from the year-ago quarter, when Macs were at $6.991 billion, iPads at $4.656 billion, wearables at $6.52 billion, and services at $12.511 billion. But it was a sharp drop for iPhones, from $33.362 billion in the same quarter last year, due to COVID-19-related shipping delays.
“Despite the ongoing impacts of COVID-19,” Apple CEO Tim Cook said, “Apple is in the midst of our most prolific product introduction period ever, and the early response to all our new products, led by our first 5G-enabled iPhone lineup, has been tremendously positive.” Initial sales of the iPhone 12 and iPhone 12 Pro began in mid-October, with additional mini and Pro Max models scheduled to hit stores in mid-November, so their sales will boost the company’s first 2021 fiscal quarter revenues.
Geographically, Apple’s net sales grew year-over-year from $29.322 billion to $30.698 billion in the Americas, $14.946 billion to $16.9 billion in Europe, $4.982 billion to $5.023 billion in Japan, and $3.656 billion to $4.131 billion in the Asia Pacific region. But they fell sharply in Greater China from $11.134 billion to $7.946 billion, again likely reflecting the delayed arrival of new iPhones in one of their most popular international territories. International sales constituted 59% of the quarter’s revenue, down only slightly from 60% a year earlier.
Apple had a quietly busy fourth fiscal quarter, using a largely remote workforce to finish new mobile operating systems unveiled at an all-digital Worldwide Developers Conference in June. The company also launched new Apple Watches and iPads in September and revealed a new Fitness+ interactive service that relies on the Watch for workout tracking and either Apple TVs, iPads, or iPhones for video playback. Both Fitness+ and Apple One, a bundle of Apple services, will launch during this quarter. Apple is also expected to officially announce the first Macs based on Apple-developed ARM processors during the holiday quarter, alongside macOS Big Sur.
Although the pandemic wrecked countless businesses throughout the year, delaying key Apple products and impacting second quarter earnings, the company weathered the storm impressively, using its strong online retail infrastructure to offset the closures of its brick-and-mortar stores. Sales of Macs and iPads surged to meet new demand for work-from-home and study-from-home devices, and in the third quarter, Apple reported growth across all its geographic segments.
During the July 30 announcement of its third quarter results, Apple said it would split its stock by a 4:1 ratio on August 31, spurring a $2 trillion market capitalization in mid-August and a peak share price of $134.18 on September 1. While the market cap has since fallen from its high of over $2.3 trillion, it has floated around the $2 trillion mark.
Apple also declared a $0.205 per share cash dividend for the quarter, seemingly down from the typical $0.77 per share but actually adjusted upwards, given the 4:1 stock split. It will be payable on November 12 to shareholders on record as of November 9, 2020. Due to continued COVID-19-related unpredictability, the company is not offering guidance for the holiday quarter.
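The dividend comparison is easier to see on a split-adjusted basis (an illustrative calculation using the figures above, not from Apple's release):

```python
# Apple's 4:1 stock split turned each pre-split share into four shares,
# so the new per-share dividend must be multiplied by four to compare
# against the old $0.77 payout.
pre_split_dividend = 0.77    # typical quarterly dividend per share before the split
post_split_dividend = 0.205  # declared dividend per share after the split
split_ratio = 4

old_basis_equivalent = post_split_dividend * split_ratio
increase = old_basis_equivalent / pre_split_dividend - 1

print(f"Old-basis equivalent: ${old_basis_equivalent:.2f}")  # $0.82, up from $0.77
print(f"Effective increase: {increase:.1%}")                 # about 6.5%
```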
"
|
4,030 | 2,020 |
"AI Weekly: In a chaotic year, AI is quietly accelerating the pace of space exploration | VentureBeat"
|
"https://venturebeat.com/business/ai-weekly-in-a-chaotic-year-ai-is-quietly-accelerating-the-pace-of-space-exploration"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: In a chaotic year, AI is quietly accelerating the pace of space exploration Share on Facebook Share on X Share on LinkedIn A Kinéis nanosatellite Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The year 2020 continues to be difficult here on Earth, where the pandemic is exploding again in regions of the world that were once successful in containing it. Germany reported a record number of cases this week alongside Poland and the Czech Republic, as the U.S. counted 500,000 new cases. It’s the backdrop to a tumultuous U.S. election, which experts fear will turn violent on election day. Meanwhile, Western and Southern states like Oregon, Washington, California, and Louisiana are reeling from historically destructive wildfires, severe droughts, and hurricanes.
Things are calmer in outer space, where scientists are applying AI to make exciting new finds. Processes that would have taken hours each day if performed by humans have been reduced to minutes, a testament to the good AI can achieve when used in a thoughtful way. While not necessarily groundbreaking, unprecedented, or state-of-the-art with regard to technique, the innovations are inspiring stories of discovery at a time when there isn’t a surfeit of hope.
Earlier this month, researchers at NASA’s Jet Propulsion Laboratory in California announced they had trained an algorithm on 6,830 images taken by the Context Camera on NASA’s Mars Reconnaissance Orbiter (MRO) to identify changes to the Martian surface. When subsequently given 112,000 Context Camera images, the AI tool spotted a cluster of craters in the Noctis Fossae region of Mars, including 20 new areas of interest that might have formed from a meteor impact between March 2010 and May 2012. NASA hopes to use similar classification technology on future Mars orbiters, which might provide a more complete picture of how often meteors strike Mars.
In August, researchers at the University of Warwick built a separate AI algorithm to dig through NASA data containing thousands of potential planet candidates. The team trained the system on data collected by NASA’s now-retired Kepler Space Telescope, which spent nine years in deep space searching for new worlds. Once it learned to separate planets from false positives, the system was used to analyze datasets that hadn’t yet been validated, where it confirmed 50 new exoplanets.
And last week, Intel, the European Space Agency (ESA), and startup Ubotica detailed what they claim is the first AI-powered satellite to orbit Earth: the desktop-sized PhiSat-1.
It aims to solve the problem of clouds obscuring satellite photos by collecting a large number of images from space in the visible, near-infrared, and thermal-infrared parts of the electromagnetic spectrum and then filtering out cloud-covered images using AI algorithms. Future versions of the PhiSat-1 could look for fires when flying over areas prone to wildfire and notify responders in minutes rather than hours. Over oceans, which are typically ignored, they might spot rogue ships or environmental accidents, and over ice, they could track thickness and melting ponds to help monitor climate change.
AI is problematic in many respects; it’s biased, discriminatory, and harmful at its worst. We have written about how facial recognition algorithms tend to be less accurate when applied to certain racial and ethnic groups. Natural language processing models embed implicit and explicit gender biases, as well as toxic theories and conspiracies. And governments are investigating the use of AI and machine learning to wage deadly warfare.
This being the case, some AI — like that applied to Martian landscapes, telescope snapshots, and cloudy satellite images — can be a force for good. And in a year marked by tragedy and general skepticism about technology (and the tech industry), this positivity isn’t just encouraging, but sorely needed.
For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.
Thanks for reading,
Kyle Wiggers
AI Staff Writer
"
|
4,031 | 2,020 |
"Activision Blizzard reports $1.95 billion in revenue for Q3 2020, needs to hire 2,000 people | VentureBeat"
|
"https://venturebeat.com/business/activision-blizzard-beats-expectations-with-1-95-billion-in-revenue-for-q3-2020"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Activision Blizzard reports $1.95 billion in revenue for Q3 2020, needs to hire 2,000 people Share on Facebook Share on X Share on LinkedIn Call of Duty: Warzone is one of Activision Blizzard's big games.
Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Activision Blizzard reported earnings today that beat expectations. Its Call of Duty franchise continued to outperform last year’s results as people played a lot more games during the pandemic.
The Santa Monica, California-based game publisher said its non-GAAP revenues for the third quarter ended September 30 were $1.95 billion, up 52% from $1.28 billion a year ago, while non-GAAP earnings per share were 88 cents, compared with 38 cents a year earlier.
Activision Blizzard CEO Bobby Kotick said the company is raising its expectations for full-year results. Analysts had expected Activision Blizzard’s earnings to be 65 cents per share, with revenue at $1.7 billion.
In an interview with GamesBeat, Kotick said the company needs to hire more than 2,000 people to meet its production demands. Activision Blizzard has more than 10,000 employees now.
The company launched its Tony Hawk Pro Skater remake in the quarter, and existing titles delivered good results. Activision Blizzard has big games in the current fourth quarter.
Crash Bandicoot 4: It’s About Time launched October 2. It’s the first original Crash Bandicoot title in more than a decade. The company hopes both games will sell well during the holiday season, but it did not report specific sales numbers for either Tony Hawk or Crash Bandicoot.
Call of Duty: Black Ops — Cold War will launch on PC, Xbox, and PlayStation platforms November 13 as a cross-play, cross-generation game. World of Warcraft’s next expansion, Shadowlands, is launching November 23.
The big video game publisher said that both Call of Duty: Warzone, the battle royale mode for Call of Duty: Modern Warfare, and Call of Duty: Mobile drove demand for the quarter, with each drawing more than three times as many monthly active users (MAUs) as the titles that came out last year. Kotick said that Warzone reached 80 million downloads.
“What we’ve seen with Call of Duty is an amazing transformation,” Kotick said. “The phenomenon we had with Warzone is so many people are buying the premium Call of Duty: Modern Warfare.” (Image: The Shadowlands can be a scary place.)
Kotick said that the company remains enthusiastic about growth prospects next year and attributed the performance to good execution of Activision Blizzard’s teams. During the lockdown, other forms of entertainment (like sports and movie theaters) remain stalled because of social distancing and shelter-in-place orders during the pandemic. With few other options, more people than ever are turning to gaming.
When Call of Duty: Warzone launched March 11, the U.S. was just going into a pandemic lockdown. The free-to-play game got more than 60 million downloads in its first 52 days. Overall, Activision had 111 million monthly active users in the third quarter, down from 125 million in the second quarter. The number of PC players on Call of Duty grew more than 10 times compared to the same quarter a year ago, and combined console and PC players grew seven times year-over-year.
Players in the free-to-play Warzone also upgraded to the full Call of Duty: Modern Warfare experience, enabling that game to reach the highest premium sales in the franchise’s history. Two-thirds of all copies were sold digitally. In-game net bookings were four times as high as a year ago.
Call of Duty: Black Ops — Cold War is testing well, with the public preview drawing far more players than a year ago. And Call of Duty: Mobile is now in final large-scale testing in China, where more than 50 million players have “preregistered” to date. The company didn’t say how many users the mobile title had, but it is reaching “impressive levels of reach and engagement compared with the prior quarter.”
Blizzard had 30 million monthly active users in the quarter, down from 32 million MAUs in the prior one. But the company said that engagement with World of Warcraft is at its highest level for this stage of an expansion in a decade, with Shadowlands presales ahead of prior expansions. Blizzard recently shut down an office in France with a few hundred employees as part of a shift from retail to digital sales. Because of French law, the exact number of jobs lost there won’t be known for a while.
Hearthstone grew year-over-year in the quarter, and Overwatch had 10 million MAUs.
King had 249 million MAUs in the quarter, down from 271 million in the previous one. Candy Crush Saga MAUs grew from a year ago, and Candy Crush was the top-grossing franchise in U.S. app stores.
Outlook
The company said it is raising expectations for the fourth quarter and the full year. It expects non-GAAP earnings per share of $3.08 (previously $2.87) on revenues of $7.67 billion (previously $7.25 billion) for the full year, and non-GAAP EPS of 63 cents on revenue of $2.00 billion in the fourth quarter.
Activision Blizzard said the Diablo PC and mobile games are on track in terms of ongoing development.
GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it.
"
|
4,032 | 2,020 |
"A.D.A.M.'s Advisory Board Being Joined by the Top Experts to Spearhead Bone Engineering | VentureBeat"
|
"https://venturebeat.com/business/a-d-a-m-s-advisory-board-being-joined-by-the-top-experts-to-spearhead-bone-engineering"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release A.D.A.M.’s Advisory Board Being Joined by the Top Experts to Spearhead Bone Engineering Share on Facebook Share on X Share on LinkedIn FARMINGTON, Conn.–(BUSINESS WIRE)–October 30, 2020– First paragraph, first sentence of the release should read A Connecticut-based bone printing startup announced that a former director of the Defense Advanced Research Projects Agency (DARPA), Dr. Anthony Tether, and Executive Chairman of Mjalli Investment Group , Dr. Adnan M. Mjalli, joined the advisory board as Chief Scientific and Chief Development Advisor respectively (instead of … Chief Medical and Development Advisor respectively).
This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20201029005093/en/ (Graphic: Top experts join A.D.A.M.’s advisory board, taking the development process to a new level. Business Wire)
The updated release reads:
A.D.A.M.’S ADVISORY BOARD BEING JOINED BY THE TOP EXPERTS TO SPEARHEAD BONE ENGINEERING
A Connecticut-based bone printing startup announced that a former director of the Defense Advanced Research Projects Agency (DARPA), Dr. Anthony Tether, and Executive Chairman of Mjalli Investment Group, Dr. Adnan M. Mjalli, joined the advisory board as Chief Scientific and Chief Development Advisor, respectively. The partnership is intended to further A.D.A.M.’s footprint in cutting-edge healthcare technology and foster rapid commercialization of 3D-printing of biological osseous structures in hospitals on demand.
Dr. Anthony Tether will advise A.D.A.M. leadership on innovative research techniques in tissue and bone engineering to unleash the full potential of the technology for the healthcare industry. He believes that A.D.A.M.’s unique approach will allow it to fulfill an unmet need for complex artificial tissues, including biological osseous structures. “Escalating worldwide cases require that we take regenerative medicine to a new level,” Dr. Tether explains.
As Chief Development Advisor and an internationally recognized entrepreneur in the biotech industry, Dr. Adnan Mjalli will share his in-depth knowledge of life-science business growth and assist the company’s expansion into other markets, as well as its upcoming business decisions. He has also become the project’s lead investor in the recently announced investment round for A.D.A.M. “The fact that FDA responded positively regarding A.D.A.M.’s 510(k) eligibility brings substantial acceleration in the core product’s go-to-market process,” Mjalli explained. “The company’s technology provides a possibility to disrupt existing healthcare models and be beneficial for all stakeholders.” A.D.A.M.’s management believes that this new collaboration will help deliver a product of immense strategic importance to society. “Saving lives and improving the quality of life for patients remains a priority for us. And with this advisory board we are closer to fulfilling this mission than we have ever been,” said Denys Gurak, CEO at A.D.A.M.
About A.D.A.M.
A.D.A.M. is a Connecticut-based bio-printing startup that is pivoting the production of customized bone transplants made of ceramic bio-glass and modified biopolymer. With materials at its essence, A.D.A.M.’s solutions enable the fusing of an implant with a skeleton. As a natural bone is being healed, the degradable materials are slowly disappearing without the need for any additional surgical intervention. Starting with bone printing, A.D.A.M. is planning to launch a platform that will allow people to create their digital atlases, download them, and get their tissues and bones printed in hospitals at an affordable price.
View source version on businesswire.com: https://www.businesswire.com/news/home/20201029005093/en/ Contact: Denys Gurak, CEO, [email protected]
"
|
4,033 | 2,020 |
"Trump faces executive order lawsuit as critical race theory fuels AI research | VentureBeat"
|
"https://venturebeat.com/ai/trump-faces-executive-order-lawsuit-as-critical-race-theory-fuels-ai-research"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Analysis Trump faces executive order lawsuit as critical race theory fuels AI research Share on Facebook Share on X Share on LinkedIn Illustration by Pavlo Conchar/SOPA Images/LightRocket via Getty Images Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Today, civil rights groups — including the NAACP Legal Defense Fund — filed a lawsuit against the Trump administration on the grounds that a Trump executive order violates free speech rights and will “undermine efforts to foster diversity and inclusion in the workplace.” The lawsuit follows opposition to the executive order from a range of groups, including the U.S. Chamber of Commerce, as well as a federal agency’s recent intervention in a Microsoft diversity initiative launched amid calls for racial justice.
The executive order was part of the administration’s sweeping attack on diversity training and critical race theory. Trump has called critical race theory “toxic propaganda” that will “destroy our country.” He also claimed diversity training is designed to divide Americans and said students should instead receive a “patriotic education.” In early September, a Department of Labor memo directed federal agencies to cancel contracts with vendors who cover critical race theory or “white privilege” in their work, calling the intellectual movement “un-American” and “anti-American propaganda.”
If you watched the presidential debates between U.S. President Donald Trump and challenger Joe Biden, the words “artificial intelligence” and “tech” never came up, but the related subject of critical race theory did. During the debate, Trump reiterated his previous position, calling racial sensitivity training “racist” and claiming it teaches people to hate the United States. When given an opportunity to denounce racist views, he instead told the white supremacist group Proud Boys to “stand by.” Biden responded by calling Trump a racist and asserted that racial sensitivity training can make a big difference in fighting systemic racism.
The executive order , which Trump signed a week before the debate, threatened to cut federal funding to agencies and grant recipients that fail to comply. This led to confusion within federal agencies, and the University of Iowa temporarily paused diversity events.
University of Michigan president Mark S. Schlissel objected to the order, arguing that diversity training is intended to bring people together. He called the executive order an attempt to prevent people from “confronting blind spots” and said his university remains committed to dismantling structural oppression.
Though Trump has made efforts to expel critical race theory, some AI researchers are advocating the lens as a way to assess the fairness of AI models.
History, politics, and critical race theory
Following these arguments requires a clear understanding of critical race theory, which invites scholars to consider the impacts of race, racism, and power. The term came into being in the late 1970s and 1980s as writers like NYU School of Law professor Derrick Bell sought to understand why the civil rights movement had stalled and worked to address what activists and scholars saw as the rollback of progress. According to the book Critical Race Theory: An Introduction by Richard Delgado and Jean Stefancic, critical race theory draws lessons from civil rights, Black Power, and Chicano movements, as well as the work of individuals like Frederick Douglass, Sojourner Truth, Cesar Chavez, and Martin Luther King, Jr.
Critical race theory provided a sociological framework that first touched law but grew to encompass other fields, like public health, education, and ethnic studies. It includes the premise that racism has become normalized in the U.S. and is therefore more difficult to address. Critical race theory asserts that racial categories are a social construction and considers the problem of white privilege and the importance of intersectionality. Made popular by Kimberlé Crenshaw, intersectionality proposes that a person’s identity includes overlapping concepts of race, class, gender, religion, and sexual identity.
California has a history of leading the way in ethnic studies education. The first College of Ethnic Studies was created in California in 1969, following the longest student-led protests in U.S. history. Students of color have called ethnic studies vitally important to their education, and a 2016 Stanford University study found that ethnic studies classes improved attendance and grades for students at risk of dropping out of high school. These findings are particularly important since the average child born in America today is not white.
Governor Gavin Newsom signed a bill last summer that made ethnic studies a California State University undergraduate degree requirement, making California the first state in the nation to do so. But in early October, citing an “insufficiently balanced” model curriculum, Newsom vetoed a bill that would have required high school students to take at least one semester of ethnic studies in order to obtain a diploma. Bill author Assemblymember Jose Medina, also a Democrat, called the veto “a failure to push back against the racial rhetoric and bullying of Donald Trump.”
Google AI dives into sociology
President Trump may be on a campaign to suppress critical race theory, but the idea is taking hold at Google. One of the largest employers of AI research talent, Google is incorporating critical race theory into its tech development and fairness analysis processes, Google AI ethics co-lead Meg Mitchell told VentureBeat in a meeting with journalists last week.
“This was a bit of an intervention with our first social scientists. So our team has been able to get three social scientists, which are the first research scientists, ethnographers, people who have a lot of knowledge about gender and identity at Google looking at critical race theory to literally make this part of our development process,” she said.
This effort includes a research paper titled “Towards a Critical Race Methodology in Algorithmic Fairness,” which was published in December 2019 by four members of Google Research. Mitchell said the paper is some of Google’s first work on critical race theory.
The paper, which was presented earlier this year at the Fairness, Accountability and Transparency (FAccT) conference, urges the AI ethics research community to employ critical race theory when evaluating fairness research. The problem, coauthors of the paper argue, is that many modern algorithmic fairness frameworks lack historical and social context and use racial categorization in nondescript or decontextualized ways.
“While we acknowledge the importance of measuring race for the purposes of understanding patterns of differential performance or differentially adverse impact of algorithmic systems, in this work, we emphasize that data collection and annotation efforts must be grounded in the social and historical contexts of racial classification and racial category formation,” the paper reads. “To oversimplify is to do violence, or even more, to reinscribe violence on communities that already experience structural violence.”
Measuring fairness vs. seeking justice
While tracking race can be useful to verify the degree or absence of discrimination, simply deciding that algorithms should ignore race does not solve the issue, lead author and Google senior research scientist Alex Hanna told VentureBeat in an interview. Leaders in the field like Dr. Safiya Noble warn that attempts to remove race from the equation can actually perpetuate existing social hierarchies built on inequity.
“What I find so valuable about critical race theory is that it puts [race] at the center of algorithmic fairness in a way that algorithmic fairness often obviates it or ignores it,” Hanna said. “One of the things that I worry about most in this field is that there’s a conversation that gets had about fairness as a kind of metric that can be solved, rather than an invitation to an inquiry about justice and human flourishing and well-being and the destruction of white supremacist structures. And so that’s sort of the biggest thing I think we lose when we don’t adopt a critical race theory lens.” Generally speaking, Hanna believes research at the intersection of race and technology is among some of the most important work to come out of the algorithmic fairness community. A notable example is the Gender Shades project, created by Google AI co-lead Timnit Gebru, Algorithmic Justice League founder Joy Buolamwini, and Deb Raji. Their landmark research found that facial recognition technology performs poorly on women with dark skin. Gender Shades has shaped perceptions of algorithmic fairness in Congress, as well as in cities that have implemented facial recognition bans, like San Francisco and Portland, Oregon.
Like fairness, race is a contested concept, which is why researchers would do well to adopt a multidimensional approach.
“We encourage algorithmic fairness researchers to explore how different racial dimensions and their attendant measurements might reveal different patterns of unfairness in sociotechnical systems,” the Google Research paper reads. “It is critical to expand the scope of analysis beyond the algorithmic frame and interrogate how patterns of racial oppression might be embedded in the data and model and might interact with the resulting system.” Since the algorithmic fairness community first emerged, researcher Arvind Narayanan has identified 21 different ways to measure fairness.
There’s statistical bias, group fairness, individual fairness, and a range of binary classification fairness metrics, but deciding how to prioritize one metric over another is far from simple.
Hanna agrees that metrics come with inherent tradeoffs and believes these measures appeal to computer scientists’ desire to quantify problems. Instead, she believes people should ask what justice and remediation look like and consider how to address the harms of an aggrieved population.
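To make that tradeoff concrete, here is a minimal, hypothetical sketch (not drawn from any of the papers discussed here) of two common group-fairness metrics for a binary classifier. The toy labels, predictions, and group assignments are invented; even on this tiny example the two metrics disagree, with equal selection rates across groups but unequal recall.

```python
# Hypothetical toy example: two group-fairness metrics can disagree
# about the very same set of predictions.

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction (selection) rates between groups A and B."""
    def rate(g):
        preds = [p for p, grp in zip(y_pred, group) if grp == g]
        return sum(preds) / len(preds)
    return rate("A") - rate("B")

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true-positive rates (recall) between groups A and B."""
    def tpr(g):
        preds = [p for t, p, grp in zip(y_true, y_pred, group) if grp == g and t == 1]
        return sum(preds) / len(preds)
    return tpr("A") - tpr("B")

# Invented labels, predictions, and group membership.
y_true = [1, 1, 0, 0, 1, 0, 1, 1]
y_pred = [1, 0, 1, 0, 1, 0, 0, 1]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_gap(y_pred, group))         # 0.0: equal selection rates
print(equal_opportunity_gap(y_true, y_pred, group))  # nonzero: unequal recall
```

A system that satisfies one of these metrics can still fail the other, which is why choosing which metric to prioritize is a normative decision, not a purely technical one.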
Reimagining what’s possible with critical race theory
Perhaps the most influential work to directly incorporate critical race theory into AI is Race After Technology, written by African American Studies associate professor Dr. Ruha Benjamin. The book considers the concept of a “New Jim Code” and warns that algorithms are automating bias and that engineers must guard against the use of design practices that amplify racial hierarchies. Her call to reimagine technology stems from a critical race theory tenet that encourages multi-narrative storytelling.
While delivering a speech at the International Conference on Learning Representations (ICLR) earlier this year, Benjamin urged deep learning practitioners to consider social and historical context or risk becoming like IBM workers who played a role in the Holocaust.
The Google Research paper argues that algorithmic fairness frameworks must begin from the perspectives of oppressed groups. In doing so, it joins a long line of works in algorithmic bias research, as businesses and governments explore ways to put AI principles into practice.
In June, Microsoft Research conducted an analysis of the existing body of NLP bias research and implored the algorithmic fairness community to consider social hierarchies like racism when evaluating language models. This summer, drawing on Benjamin’s work, University of Oxford researchers introduced a paper titled “The Whiteness of AI,” in which they applied critical race theory to depictions of AI in science fiction and pop culture and concluded that these works tend to erase people of color.
In July, a trio of researchers from Google’s DeepMind presented a paper exploring ways to create decolonial AI and prevent the spread of algorithmic oppression.
And earlier this month, a paper by Abeba Birhane and Olivia Guest found that decolonization of computer science requires seeing things from the perspective of Black women and other women of color. Doing so, the coauthors argue, can lead to fewer cases of machine learning research that is rooted in pseudoscience like eugenics or physiognomy, which infers characteristics from a person’s physical appearance.
Final thoughts
Critical race theory belongs to a U.S. tradition of critical examination that has informed abolitionist and feminist movements. In Critical Race Theory: An Introduction, Delgado and Stefancic posit that critical race theory sprang from critical legal studies and radical feminism. The book draws a line from historical figures like W.E.B. DuBois and Ida B. Wells to more recent social movements. It also cites spinoff works from Latinx and queer critical scholars.
It seems a similar line can be drawn to protests of historic size in recent years, including Black Lives Matter protests and the #MeToo movement. Beyond AI and tech policy, it’s virtually impossible to consider the major issues of our time — COVID-19 deaths, continuing economic fallout, inequality in education , and the fate of essential workers during the pandemic — without viewing them through the prism of race.
When it comes to AI regulation and the impact of policy on people’s lives, ignoring historical and social context has been used to justify savage behavior and deny justice to those calling for an end to systemic inequality.
NYU’s Bell called racism a form of control for both Black and white people in the United States and said that “telling the truth as you see it is empowering.” He believes that calling the U.S. a white supremacist country is critical to healing, much as an alcoholic must admit they have a problem before beginning down the road to recovery.
Critical race theory acknowledges the existence of racism and power dynamics that are older than the United States and continue to shape American history. It’s also what Bryan Stevenson, a man Desmond Tutu calls “America’s Mandela,” refers to when he encourages an honest accounting of our past and critical self-examination as a way to reconcile past transgressions.
From antitrust law reform to facial recognition regulation and other thorny tech policy issues facing the next leaders in Washington, D.C., the harms AI can inflict will be front and center. Whether we call that framework restorative justice or critical race theory, attempts to address the negative impact algorithms can have on human lives will rely on a critical grasp of social and historical context. Rather than being a tool of division, such critical examination is essential to building what the U.S. Constitution’s preamble calls “a more perfect union.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
4,034 | 2,020 |
"PowerTransformer uses AI to rewrite text to correct gender biases in character portrayals | VentureBeat"
|
"https://venturebeat.com/ai/powertransformer-uses-ai-to-rewrite-text-to-correct-gender-biases-in-character-portrayals"
|
"PowerTransformer uses AI to rewrite text to correct gender biases in character portrayals
Unconscious biases are pervasive in text and media. For example, female characters in stories are often portrayed as passive and powerless while men are portrayed as proactive and powerful. According to a McKinsey study of 120 movies across 10 markets, the ratio of male to female characters was 3:1 in 2016, the same it’s been since 1946.
Motivated by this, researchers at the Allen Institute for Artificial Intelligence and the University of Washington created PowerTransformer, a tool that aims to rewrite text to correct implicit and potentially undesirable bias in character portrayals. They claim that PowerTransformer is a major step toward mitigating well-documented gender bias in movie scripts, as well as in scripts for other forms of media.
PowerTransformer is akin to GD-IQ, a tool that leverages AI developed at the University of Southern California Viterbi School of Engineering to analyze the text of a script and determine the number of male and female characters and whether they’re representative of the real population at large. GD-IQ can also discern the numbers of characters who are people of color, are LGBTQ, experience disabilities, or belong to other groups typically underrepresented in Hollywood storytelling.
But PowerTransformer goes one step further and tackles the task of controllable text revision, or rephrasing text to a style using machine learning. For example, it can automatically rewrite a sentence like “Mey daydreamed about being a doctor” as “Mey pursued her dream to be a doctor,” which has the effect of giving the character Mey more authority and decisiveness.
The researchers note that controllable rewriting systems face key challenges. First, they need to be able to make edits beyond surface-level paraphrasing, as simple paraphrasing often doesn’t adequately address overt bias (the choice of actions) and subtle bias (the framing of actions). Second, their debiasing revisions should be purposeful and precise and shouldn’t make unnecessary changes to the underlying meaning of the text.
PowerTransformer overcomes these challenges by jointly learning to reconstruct partially masked story sentences while also learning to paraphrase from an external corpus of paraphrases. The model recovers masked-out agency-associated verbs in sentences and employs a vocab-boosting technique during generation to increase the likelihood it uses words with a target level of agency (i.e., ability to act and make choices). For instance, “A friend asked me to watch her two year old child for a minute” would become “A friend needed me to watch her two year old child for a minute,” lowering agency, while “Allie was failing science class” would become “Allie was taking science class.” During experiments, the researchers investigated whether PowerTransformer could mitigate gender biases in portrayals of 16,763 characters from 767 modern English movie scripts. Of those characters, 68% were inferred to be men and only 32% women; they attempted to re-balance the agency levels of female characters to be on par with male characters.
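The vocab-boosting idea can be sketched, in spirit, as adding a bonus to the logits of words that carry the target agency level before the softmax at each decoding step. The connotation lexicon, boost weight, and toy vocabulary below are hypothetical stand-ins for illustration, not PowerTransformer's actual lexicon or values.

```python
import math

# Hypothetical sketch of decode-time vocabulary boosting: bias the
# next-word distribution toward verbs tagged with the target agency
# level. The lexicon and boost weight are invented for illustration.
HIGH_AGENCY = {"pursued", "demanded", "decided"}  # stand-in connotation lexicon

def boosted_probs(logits, vocab, target=HIGH_AGENCY, boost=2.0):
    """Softmax over logits after adding a bonus to target-agency words."""
    adjusted = [l + boost if w in target else l for l, w in zip(logits, vocab)]
    z = max(adjusted)                      # subtract max for numerical stability
    exps = [math.exp(a - z) for a in adjusted]
    total = sum(exps)
    return {w: e / total for w, e in zip(vocab, exps)}

vocab = ["daydreamed", "pursued", "wanted"]
logits = [2.0, 1.0, 1.5]                   # model initially prefers "daydreamed"
probs = boosted_probs(logits, vocab)
print(max(probs, key=probs.get))           # boosting shifts mass to "pursued"
```

With the boost set to zero the model's original preference wins; raising it trades fluency against hitting the target agency level, which mirrors the controllability the paper describes.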
The results show that PowerTransformer’s revisions successfully increased the instances of positive agency of female characters while decreasing their negative agency or passiveness, according to the researchers. “Our findings on movie scripts show the promise of using controllable debiasing to successfully mitigate gender biases in portrayal of characters, which could be extended to other domains,” they wrote. “Our findings highlight the potential of neural models as a tool for editing out social biases in text.”
"
|
4,035 | 2,020 |
"Piaggio's personal cargo robot Gita seeks new life in B2B | VentureBeat"
|
"https://venturebeat.com/ai/piaggios-personal-cargo-robot-gita-seeks-new-life-as-a-b2b-buddy"
|
"Piaggio’s personal cargo robot Gita seeks new life in B2B
Gita robots in different colors
Piaggio Fast Forward (PFF), a subsidiary of Italian two-wheeled vehicle maker Piaggio, has announced that it’s making its Gita service robots available to businesses as part of a new B2B program.
Piaggio, best known for its Vespa-branded scooters, established its Boston-based PFF offshoot back in 2015, and two years later gave a glimpse of its first products: small and large autonomous robots called Gita and Kilo, respectively. PFF spent several years refining the smaller incarnation ahead of its official launch last October, at which point it revealed that anyone would be able to buy their very own Gita for $3,250. Now, PFF is looking to increase Gita’s utility in a variety of public settings, thanks to partnerships with Cincinnati’s CVG International Airport, a retirement community in Florida, a food delivery company in Kentucky, and a retail mall in Turkey.
Demand for professional service robots continues to rise, with data from the International Federation of Robotics (IFR) revealing this week that sales increased by 32% to $11.2 billion globally in 2019. The IFR also anticipates that COVID-19 will only serve to accelerate this upward trend, with robotics disinfection, logistics, and delivery serving to help people remain distanced from each other. Moreover, mass market service robots for personal and domestic use are also on the rise, according to IFR, including floor-cleaning and lawn-mowing robots, with sales growing 20% to $5.7 billion in 2019.
Follow the leader
Gita’s basic raison d’être is to follow its owner around and carry their stuff, with the ability to travel at up to 6 miles per hour. Using on-board cameras as sensors, Gita pairs with its owner by recognizing their shape and size, but it also recognizes other human forms so it can move around them and continue following the correct person.
Above: Gita following a person
Gita measures just 27in (L) x 22.3in (W) x 24in (H) on the outside, and can carry up to 40 pounds of cargo — this could be anything from gym gear and kids’ toys to groceries.
Above: Gita carrying cargo
Business as usual
As part of its new pilot program, Cincinnati/Northern Kentucky’s CVG International Airport will use Gita across a variety of use cases, including providing contactless concierge services for travelers.
Elsewhere, a retirement community in Florida is also shaping up to adopt Gita to help residents with their shopping and even assist golfers during tournaments, though this isn’t yet a done deal. And Delivery Co-op, a restaurant delivery service in Lexington, Kentucky, will also use Gita for contactless deliveries. In Turkey, one of PFF’s only international pilot programs, the Doğan Group will trial Gitas at one of its retail malls and a waterfront marina, where the two-wheeled bot could serve people beverages, bring them their shopping, and more.
While it’s still very early days for both PFF and Gita, it’s entering an increasingly busy field. The COVID-19 crisis in particular has proven to be a catalyst for businesses seeking safe ways to continue operating. In the months that followed the big global lockdown, countless examples emerged from the public and private spheres showing how robots could play a role in the so-called “new normal,” for hospitals, airports, offices, coffee shops, and more.
At more than $3,000 a pop, Gita is likely to be a tough sell for most consumers, which is why a B2B program makes a great deal of sense. Deeper-pocketed businesses can dole out cash for several Gitas, which they can then offer to their own customers as value-added services or monetize directly in the form of short-term rentals to carry people’s stuff.
"
|
4,036 | 2,020 |
"MIT researchers say their AI model can identify asymptomatic COVID-19 carriers | VentureBeat"
|
"https://venturebeat.com/ai/mit-researchers-say-their-ai-model-can-identify-asymptomatic-covid-19-carriers"
|
"MIT researchers say their AI model can identify asymptomatic COVID-19 carriers
Researchers at MIT say they’ve developed an algorithm that can diagnose COVID-19 by the sound of someone’s cough, even if that person is asymptomatic. In a paper published in the IEEE Journal of Engineering in Medicine and Biology, the team reports that their approach distinguishes between infected and healthy individuals through “forced-cough” recordings contributed via smartphones, laptops, and other mobile devices.
Applying AI to discern the cause of a cough isn’t a new idea. Last year, a group of Australian researchers developed a smartphone app that could ostensibly identify respiratory disorders like pneumonia and bronchitis by “listening” to a person’s exhalations. The potential for bias exists in these systems — algorithms trained on imbalanced or unrepresentative datasets can lead to worse health outcomes for certain user groups — but studies suggest they could be a useful tool on the front lines of the coronavirus pandemic.
The MIT researchers, who had been developing a model to detect signs of Alzheimer’s from coughs, trained their system on tens of thousands of samples of coughs as well as spoken words. Prior research suggests the quality of the sound “mmmm” can be an indication of how weak or strong a person’s vocal cords are, and so the team trained a model on an audiobook dataset with more than 1,000 hours of speech to pick out the word “them” from words like “the” and “then.” They then trained a second model to distinguish emotions in speech on a dataset of actors intonating emotional states such as neutral, calm, happy, and sad. And they trained a third model on a database of coughs in order to discern changes in lung and respiratory performance.
The coughs came from a website launched in April that allowed people to record a series of coughs and fill out a survey, which asked things like which symptoms they were experiencing, whether they had COVID-19, and whether they were diagnosed through an official test. It also asked contributors to note any relevant demographic information including their gender, geographical location, and native language.
The researchers collected more than 70,000 recordings amounting to some 200,000 forced-cough audio samples. (Around 2,500 recordings were submitted by people who were confirmed to have COVID-19, including those who were asymptomatic.) A portion of these — 2,500 COVID-19-associated recordings, along with 2,500 recordings randomly selected from the collection to balance the dataset — were used to train the third model.
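The balancing step described here (an equal number of positive recordings and randomly drawn negatives) is a standard technique and can be sketched as follows. The function, the "label" field, and the toy data are assumptions for illustration, not the MIT team's code.

```python
import random

def balanced_subset(recordings, n_per_class, seed=0):
    """Pair positives with an equal-sized random sample of negatives.
    The 'label' field (1 = confirmed positive) is an assumed schema."""
    rng = random.Random(seed)
    positives = [r for r in recordings if r["label"] == 1]
    negatives = [r for r in recordings if r["label"] == 0]
    subset = positives[:n_per_class] + rng.sample(negatives, n_per_class)
    rng.shuffle(subset)
    return subset

# Hypothetical toy collection: 5 positive and 20 negative recordings.
recordings = ([{"id": i, "label": 1} for i in range(5)] +
              [{"id": 100 + i, "label": 0} for i in range(20)])
subset = balanced_subset(recordings, n_per_class=5)
print(len(subset), sum(r["label"] for r in subset))  # 10 recordings, 5 positive
```

Without this kind of balancing, a classifier trained on roughly 2,500 positives against nearly 70,000 negatives could score well simply by predicting "healthy" for everyone.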
After combining the model trained on the audiobook snippets, the emotional state detector, and the cough classifier into one, the team tested the ensemble on 1,000 recordings from the cough dataset. They claim it managed to identify 98.5% of coughs from people confirmed with COVID-19 and accurately detect all of the asymptomatic coughs.
The MIT researchers stress that the model isn’t meant to diagnose symptomatic people. Rather, they hope to use it to develop a free prescreening app based on their AI model, and they say they’re partnering with several hospitals to collect larger, more diverse sets of cough recordings to train and strengthen the model’s accuracy.
"
|
4,037 | 2,020 |
"How to make sure your 'AI for good' project actually does good | VentureBeat"
|
"https://venturebeat.com/ai/how-to-make-sure-your-ai-for-good-project-actually-does-good"
|
"Guest: How to make sure your ‘AI for good’ project actually does good
Artificial intelligence has been front and center in recent months. The global pandemic has pushed governments and private companies worldwide to propose AI solutions for everything from analyzing cough sounds to deploying disinfecting robots in hospitals. These efforts are part of a wider trend that has been picking up momentum: the deployment of projects by companies, governments, universities, and research institutes aiming to use AI for societal good. The goal of most of these programs is to deploy cutting-edge AI technologies to solve critical issues such as poverty, hunger, crime, and climate change, under the “AI for good” umbrella.
But what makes an AI project good? Is it the “goodness” of the domain of application, be it health, education, or environment? Is it the problem being solved (e.g. predicting natural disasters or detecting cancer earlier)? Is it the potential positive impact on society, and if so, how is that quantified? Or is it simply the good intentions of the person behind the project? The lack of a clear definition of AI for good opens the door to misunderstandings and misinterpretations, along with great chaos.
AI has the potential to help us address some of humanity’s biggest challenges like poverty and climate change.
However, like any technological tool, it is agnostic to the context of application, the intended end-user, and the specificity of the data. And for that reason, it can ultimately end up having both beneficial and detrimental consequences.
In this post, I’ll outline what can go right and what can go wrong in AI for good projects and suggest some best practices for designing and deploying them.
Success stories
AI has been used to generate lasting positive impact in a variety of applications in recent years. For example, Statistics for Social Good out of Stanford University has been a beacon of interdisciplinary work at the nexus of data science and social good. In the last few years, it has piloted a variety of projects in different domains, from matching nonprofits with donors and volunteers to investigating inequities in palliative care. Its bottom-up approach, which connects potential problem partners with data analysts, helps these organizations find solutions to their most pressing problems. The Statistics for Social Good team covers a lot of ground with limited manpower. It documents all of its findings on its website, curates datasets, and runs outreach initiatives both locally and abroad.
Another positive example is the Computational Sustainability Network, a research group applying computational techniques to sustainability challenges such as conservation, poverty mitigation, and renewable energy. This group adopts a complementary approach for matching computational problem classes like optimization and spatiotemporal prediction with sustainability challenges such as bird preservation, electricity usage disaggregation and marine disease monitoring. This top-down approach works well given that members of the network are experts in these techniques and so are well-suited to deploy and fine-tune solutions to the specific problems at hand. For over a decade, members of CompSustNet have been creating connections between the world of sustainability and that of computing, facilitating data sharing and building trust. Their interdisciplinary approach to sustainability exemplifies the kind of positive impacts AI techniques can have when applied mindfully and coherently to specific real-world problems.
Even more recent examples include the use of AI in the fight against COVID-19. In fact, a plethora of AI approaches have emerged to address various aspects of the pandemic, from molecular modeling of potential vaccines to tracking misinformation on social media — I helped write a survey article about these in recent months. Some of these tools, while built with good intentions, had inadvertent consequences. However, others produced positive lasting impacts, especially several solutions created in partnership with hospitals and health providers. For instance, a group of researchers at the University of Cambridge developed the COVID-19 Capacity Planning and Analysis System tool to help hospitals with resource and critical care capacity planning. The system, whose deployment across hospitals was coordinated with the U.K.’s National Health Service, can analyze information gathered in hospitals about patients to determine which of them require ventilation and intensive care. The collected data was percolated up to the regional level, enabling cross-referencing and resource allocation between the different hospitals and health centers. Since the system is used at all levels of care, the compiled patient information could not only help save lives but also influence policy-making and government decisions.
Unintended consequences
Despite the best intentions of the project instigators, applications of AI toward social good can sometimes have unexpected (and sometimes dire) repercussions. A prime example is the now-infamous COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) project, which various justice systems in the United States deployed. The aim of the system was to help judges assess the risk of inmate recidivism and to lighten the load on the overflowing incarceration system. Yet the tool’s recidivism risk score was calculated from factors not necessarily tied to criminal behavior, such as substance abuse and stability. After an in-depth ProPublica investigation of the tool in 2016 revealed the software’s undeniable bias against Black defendants, usage of the system was stonewalled. COMPAS’s shortcomings should serve as a cautionary tale for black-box algorithmic decision-making in the criminal justice system and other areas of government, and efforts must be made to not repeat these mistakes in the future.
More recently, another well-intentioned AI tool for predictive scoring spurred much debate with regard to the U.K. A-level exams. Students must complete these exams in their final year of school in order to be accepted to universities, but they were cancelled this year due to the ongoing COVID-19 pandemic. The government therefore endeavored to use machine learning to predict how the students would have done on their exams had they taken them, and these estimates were then going to be used to make university admission decisions. Two inputs were used for this prediction: any given student’s grades during the 2020 year, and the historical record of grades in the school the student attended. This meant that a high-achieving student in a top-tier school would have an excellent prediction score, whereas a high-achieving student in a more average institution would get a lower score, despite both students having equivalent grades. As a result, two times as many students from private schools received top grades compared to public schools, and over 39% of students were downgraded from the cumulative average they had achieved in the months of the school year before the automatic assessment. After weeks of protests and threats of legal action by parents of students across the country, the government backed down and announced that it would use the average grade proposed by teachers instead. Nonetheless, this automatic assessment serves as a stern reminder of the existing inequalities within the education system, which were amplified through algorithmic decision-making.
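The mechanism described above can be sketched roughly as mapping a student's within-school rank onto the school's historical grade distribution. This is a deliberate simplification, not Ofqual's actual standardization model, and all names and numbers below are invented; it illustrates how two identically ranked students at different schools end up with different predicted grades.

```python
def predicted_grades(ranks, historical_grades):
    """Assign each student the grade at the corresponding rank position
    in the school's historical, best-first grade distribution.
    A rough simplification of the standardization described above."""
    hist = sorted(historical_grades, reverse=True)
    n = len(ranks)
    out = {}
    for student, rank in ranks.items():
        # Map rank 1..n proportionally onto the historical distribution.
        idx = min((rank - 1) * len(hist) // n, len(hist) - 1)
        out[student] = hist[idx]
    return out

# Two equally top-ranked students receive different predicted grades
# purely because of their schools' past results (invented numbers).
top_school = [90, 85, 80, 75]
avg_school = [70, 65, 60, 55]
print(predicted_grades({"alice": 1}, top_school)["alice"])  # 90
print(predicted_grades({"bob": 1}, avg_school)["bob"])      # 70
```

Because the student's own achievement never enters the calculation except through rank, the school's history dominates the outcome, which is exactly the inequity that triggered the protests.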
While the goals of COMPAS and the U.K. government were not ill-intentioned, they highlight the fact that AI projects do not always have the intended outcome. In the best case, these misfires can still validate our perception of AI as a tool for positive impact even if they haven’t solved any concrete problems. In the worst case, they experiment on vulnerable populations and result in harm.
Improving AI for good Best practices in AI for good fall into two general categories — asking the right questions and including the right people.
1. Asking the right questions Before jumping head-first into a project intending to apply AI for good, there are a few questions you should ask. The first one is: What is the problem, exactly? It is impossible to solve the real problem at hand, whether it be poverty, climate change, or overcrowded correctional facilities. So projects inevitably involve solving what is, in fact, a proxy problem: detecting poverty from satellite imagery, identifying extreme weather events, producing a recidivism risk score. There is also often a lack of adequate data for the proxy problem, so you rely on surrogate data, such as average GDP per census block, extreme climate events over the last decade, or historical data regarding inmates committing crimes when on parole. But what happens when the GDP does not tell the whole story about income, when climate events are progressively becoming more extreme and unpredictable, or when police data is biased? You end up with AI solutions that optimize the wrong metric, make erroneous assumptions, and have unintended negative consequences.
It is also crucial to reflect upon whether AI is the appropriate solution. More often than not, AI solutions are too complex, too expensive, and too technologically demanding to be deployed in many environments. It is therefore of paramount importance to take into account the context and constraints of deployment, the intended audience, and even more straightforward things like whether or not there is a reliable energy grid present at the time of deployment. Things that we take for granted in our own lives and surroundings can be very challenging in other regions and geographies.
Finally, given the current ubiquity and accessibility of machine learning and deep learning approaches, you may take for granted that they are the best solution for any problem, no matter its nature and complexity. While deep neural networks are undoubtedly powerful in certain use cases and given a large amount of high-quality data relevant to the task, these factors are rarely the norm in AI-for-good projects. Instead, teams should prioritize simpler and more straightforward approaches, such as random forests or Bayesian networks, before jumping to a neural network with millions of parameters. Simpler approaches also have the added value of being more easily interpretable than deep learning, which is a useful characteristic in real-world contexts where the end users are often not AI specialists.
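One concrete way to act on this advice is to always score a trivial baseline before reaching for a complex model. The sketch below (pure Python, with made-up labels for a hypothetical binary task) implements a majority-class baseline; if a deep network cannot clearly beat this number, its added complexity and opacity are not paying for themselves.

```python
from collections import Counter

def majority_baseline_accuracy(train_labels, test_labels):
    """Predict the most common training label for every test example
    and report the resulting accuracy -- the floor any model must beat."""
    majority = Counter(train_labels).most_common(1)[0][0]
    correct = sum(1 for y in test_labels if y == majority)
    return correct / len(test_labels)

# Hypothetical labels for a binary "high flood risk" screening task.
train = ["low", "low", "low", "high", "low", "high"]
test = ["low", "high", "low", "low"]
print(majority_baseline_accuracy(train, test))  # 0.75
```

The same habit generalizes: compare any proposed model against the simplest rule a domain expert could write down before trusting its predictions.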
Generally speaking, here are some questions you should answer before developing an AI-for-good project:
- Who will define the problem to be solved?
- Is AI the right solution for the problem?
- Where will the data come from?
- What metrics will be used for measuring progress?
- Who will use the solution?
- Who will maintain the technology?
- Who will make the ultimate decision based on the model’s predictions?
- Who or what will be held accountable if the AI has unintended consequences?

While there is no guaranteed right answer to any of the questions above, they are a good sanity check before deploying such a complex and impactful technology as AI when vulnerable people and precarious situations are involved. In addition, AI researchers must be transparent about the nature and limitations of the data they are using. AI requires large amounts of data, and ingrained in that data are the inherent inequities and imperfections that exist within our society and social structures. These can disproportionately impact any system trained on the data, leading to applications that amplify existing biases and marginalization. It is therefore critical to analyze all aspects of the data and ask the questions listed above, from the very start of your research.
When you are promoting a project, be clear about its scope and limitations; don’t just focus on the potential benefits it can deliver. As with any AI project, it is important to be transparent about the approach you are using, the reasoning behind this approach, and the advantages and disadvantages of the final model. External assessments should be carried out at different stages of the project to identify potential issues before they percolate through the project. These should cover aspects such as ethics and bias, but also potential human rights violations, and the feasibility of the proposed solution.
2. Including the right people AI solutions are not deployed in a vacuum or in a research laboratory but involve real people who should be given a voice and ownership of the AI that is being deployed to “help” them — and not just at the deployment phase of the project. In fact, it is vital to include non-governmental organizations (NGOs) and charities, since they have the real-world knowledge of the problem at different levels and a clear idea of the solutions they require. They can also help deploy AI solutions so they have the biggest impact — populations trust organizations such as the Red Cross, sometimes more than local governments. NGOs can also give precious feedback about how the AI is performing and propose improvements. This is essential, as AI-for-good solutions should include and empower local stakeholders who are close to the problem and to the populations affected by it. This should be done at all stages of the research and development process, from problem scoping to deployment. The two examples of successful AI-for-good initiatives I cited above (CompSusNet and Stats for Social Good) do just that, by including people from diverse, interdisciplinary backgrounds and engaging them in a meaningful way around impactful projects.
In order to have inclusive and global AI, we need to engage new voices, cultures, and ideas. Traditionally, the dominant discourse of AI is rooted in Western hubs like Silicon Valley and continental Europe. However, AI-for-good projects are often deployed in other geographical areas and target populations in developing countries. Limiting the creation of AI projects to outside perspectives does not provide a clear picture about the problems and challenges faced in these regions. So it is important to engage with local actors and stakeholders. Also, AI-for-good projects are rarely a one-shot deal; you will need domain knowledge to ensure they are functioning properly in the long term. You will also need to commit time and effort toward the regular maintenance and upkeep of technology supporting your AI-for-good project.
Projects aiming to use AI to make a positive impact on the world are often received with enthusiasm, but they should also be subject to extra scrutiny. The strategies I’ve presented in this post merely serve as a guiding framework. Much work still needs to be done as we move forward with AI-for-good projects, but we have reached a point in AI innovation where we are increasingly having these discussions and reflecting on the relationship between AI and societal needs and benefits. If these discussions turn into actionable results, AI will finally live up to its potential to be a positive force in our society.
Thank you to Brigitte Tousignant for her help in editing this article.
Sasha Luccioni is a postdoctoral researcher at MILA , a Montreal-based research institute focused on artificial intelligence for social good.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
4,038 | 2,020 |
"How the U.S. patent office is keeping up with AI | VentureBeat"
|
"https://venturebeat.com/ai/how-the-u-s-patent-office-is-keeping-up-with-ai"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How the U.S. patent office is keeping up with AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Technology keeps creating challenges for intellectual property law. The infamous case of the “ monkey selfie ” challenged the notion of not just who owns a piece of intellectual property, but what constitutes a “who” in the first place. Last decade’s semi-sentient monkey is giving way to a new “who”: artificial intelligence. The rapid rise of AI has forced the legal field to ask difficult questions about whether an AI can hold a patent at all , how existing IP and patent laws can address the unique challenges that AI presents, and what challenges remain.
The answers to these questions are not trivial; stakeholders have poured billions upon billions of dollars into researching and developing AI technologies and AI-powered products and services across academia, government, and industry. Patent ownership stands as a proxy for who holds the money and the power in AI.
For example, there are huge ramifications depending on whether an employee who worked on a project is the patent holder, or if it’s the company that employs that worker, or if it’s the AI itself. It’s also difficult to balance the transparency and auditability of an AI for the purpose of gaining a patent with the danger of exposing trade secrets.
These novel challenges come at a time when applications for AI-related patents continue to soar. According to a new report from the U.S. Patent and Trademark Office (USPTO) called “ Inventing AI: Tracing the diffusion of artificial intelligence with U.S. patents ,” annual AI patent applications increased 100%, from 30,000 to 60,000, from 2002-2018. Over the same time period, the percentage of applications that contained AI in some way grew from 9% to almost 16%. The report does not share data from the past two years, but given the blisteringly hot AI summer we’re currently in, it’s likely that those numbers have only increased.
How current laws succeed (or not) at handling AI patents The USPTO is aggressively working to gain clarity and consensus around how it should handle AI. In 2019, the agency set up two requests for comments (RFCs): One was about AI and patent law, and the other addressed the impact of AI on intellectual property (IP) policy. The two aforementioned RFCs had nearly 100 respondents each from a variety of fields, including law, trade groups, academia, and more.
In a recent report , the agency summarized the responses it received.
Largely, the respondents agreed enough that some general consensus emerged on several items:
- There is no universal definition of AI
- State-of-the-art AI is still narrow — that is, it really only works well in service of narrow applications — and thus artificial general intelligence (AGI) remains theoretical
- Because AI’s capabilities are still narrow, AI can’t invent or author anything without human intervention

When it comes to current intellectual property (IP) laws, most respondents agreed that they’re “correctly calibrated” to deal with evolution of AI, and that existing gaps can be filled by existing commercial law principles like contract law. They further mostly agreed that although existing fair use law doesn’t need to change, using copyrighted material to train AI may be problematic.
They also urged the USPTO to more deeply examine how AI may create an overwhelming amount of prior art (meaning the body of knowledge that exists at the time of an application filing), and how to mitigate the difficulty of discovering prior art in light of that increased volume.
Using AI for AI in patent law The agency has made strides in solving the volume issue, and it started by creating internal working definitions for AI to guide its processes.
In its “Inventing AI” report, the USPTO brought up, then rejected as insufficient, the definition of AI that the U.S. National Institute of Standards and Technology (NIST) uses. According to the USPTO report, NIST defines AI as “software and/or hardware that can learn to solve complex problems, make predictions or undertake tasks that require human-like sensing (such as vision, speech, and touch), perception, cognition, planning, learning, communication, or physical action.” “Although carefully constructed, this definition is not specific enough for a patent level analysis,” the report reads. “For patent applications and grants, we define AI as comprising one or more of eight component technologies. These components span software, hardware, and applications, and a single patent document may contain multiple AI component technologies.” Those eight component technologies are:
- Knowledge processing — representing and deriving facts about the world and using this information in automated systems
- Planning/control — contains processes to identify, create, and execute activities to achieve specified goals
- Computer vision — extracts and understands information from images and videos
- Speech recognition — techniques to understand a sequence of words given an acoustic signal
- AI hardware — physical computer components designed to meet the need for considerable computing power for AI through increased processing efficiency and/or speed
- Evolutionary computation — contains a set of computational routines using aspects of nature and, specifically, evolution
- Natural language processing — understanding and using data encoded in written language
- Machine learning — contains a broad class of computational models that learn from data

Those definitions exist not only for the purpose of clarity, but as a foundation for the USPTO to develop and deploy machine learning to improve its processes.
“We are working on adding AI tools to help route applications to examiners more quickly and to help examiners search for prior art,” Andrei Iancu, U.S. Under Secretary of Commerce for intellectual property, wrote in an emailed response to VentureBeat. “We’ve also been active on the Trademarks side, exploring the use of AI to help find prior similar images and to identify what we call fraudulent specimens. We are exploring using AI to improve the accuracy and integrity of the trademark register.” The agency is using AI to more efficiently process patent applications — a process that involves “patent landscaping.” Usually, patent landscaping is a manual and time-consuming task that involves humans punching in keywords to find relevant patents and prior art in a database. Simply put, the agency automated this process by modifying a machine learning technique that Google employees Aaron Abood and Dave Feltenberger detailed in their 2018 paper called (appropriately enough) “Automated patent landscaping.” The USPTO authors added a manual validation step to that approach. They built an ML model for each of the eight AI categories that the USPTO created, then trained the models on the text from patent application abstracts and claims. Then they validated those results with the manual review process.
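The USPTO's actual models are supervised classifiers learned from abstract and claim text; as a much simpler stand-in, the sketch below routes a patent abstract to the agency's eight component technologies using hypothetical, hand-written keyword lists. A real system would learn these associations from labeled examples rather than hard-coding them, but the input/output shape — text in, one or more category tags out — is the same.

```python
# Hypothetical keyword lists -- a trained classifier would learn these.
CATEGORY_KEYWORDS = {
    "knowledge processing": ["knowledge base", "ontology", "reasoning"],
    "planning/control": ["planning", "scheduling", "control policy"],
    "computer vision": ["image", "video", "object detection"],
    "speech recognition": ["speech", "acoustic", "phoneme"],
    "ai hardware": ["accelerator", "tensor processing", "inference chip"],
    "evolutionary computation": ["genetic algorithm", "evolutionary"],
    "natural language processing": ["language model", "parsing"],
    "machine learning": ["neural network", "training data", "classifier"],
}

def tag_abstract(abstract: str) -> list:
    """Return every AI component technology whose keywords appear in the
    abstract (a single patent may span multiple components)."""
    lowered = abstract.lower()
    return [cat for cat, words in CATEGORY_KEYWORDS.items()
            if any(w in lowered for w in words)]

example = ("A neural network for object detection in video, "
           "trained on labeled training data.")
print(tag_abstract(example))  # ['computer vision', 'machine learning']
```

Note how one abstract legitimately receives two tags, matching the report's observation that a single patent document may contain multiple AI component technologies.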
Separately, the USPTO’s staff is using bots to reduce toil. According to a report from the Federal News Network, this summer the acting director of USPTO’s Office of Organizational Policy and Governance, Rajeev Dolas, said that agency staff had begun experimenting with using bots to aid them in their work — but they were running on individuals’ laptops, with little governance.
That’s now changed: A USPTO spokesperson told VentureBeat that the agency is now using a “leading RPA provider” to centralize its bot efforts and ensure a proper process and governance model that includes use cases, development, testing, and security before bots are deployed. “We are in the nascent stages of bot adoption – learning from our pilot bot mistakes and scaling on success with the goal to reduce manual tasks and save staff time in lost productivity due to mundane [and/or] monotonous keystrokes,” the spokesperson said.
Iancu said that the USPTO is hiring AI and machine learning experts to double down on its efforts to streamline data generation and analysis, and to generally provide better information about its operations to a curious public. “By providing users with greater insight into the patent and trademark processes, we hope to increase the number of people involved in innovation and the protection of their intellectual property. In the end, that is what the IP system is all about,” he said.
Lingering issues Despite the apparent overall agreement among RFC respondents and the USPTO’s confidence in its definition and use of AI, significant issues linger. And of course, given how quickly technology changes, no one can assume anything is completely settled for long.
“It is correct that the majority of commenters stated that no changes should be necessary to the current U.S. law,” Iancu said. “However, there also appears to be agreement that AI poses certain unique and interesting policy questions that we should all be watching closely.” One of those is (still) whether or not an inventor can only be a “natural person.” Iancu allows that although some believe AI can innovate on its own, at least to an extent, he’s firm on the idea that the machines require humans. “Only humans can commercialize those innovations,” he said. “Additionally, the IP system incentivizes innovation in a way that matters greatly to humans, but does not, at least yet, appear to matter to machines.” Dennis Crouch, an associate professor of law at the University of Missouri, sees this debate around patent-holding personhood “as part of the larger shift toward corporate control over the fruits of employee labor.” He said that although ownership rights around employment and contract laws have favored the employer, state and federal laws limit them. “At the minimum, they are barred by the 13th Amendment’s prohibition on slavery. [But] if we can point to an AI or ‘corporate’ inventor, those human-rights issues no longer carry weight, and the corporate owner is free to fully control,” Crouch said. He further noted , “For the past 200 years we have always focused on the inventor as the patentee, but that has now changed.” There’s a tricky issue at the intersection of transparency and patent law. In an emailed response to VentureBeat, Tessa Schwartz, a managing partner at the law firm of Morrison and Foerster, pointed to the “tension between the desire to maintain confidentiality and ownership of data and algorithms and the necessity of making available data inputs and outputs and AI developments to third parties.” “If not properly handled, access can result in a loss of trade secret protection and control over data and AI. 
That’s because a trade secret needs to be a secret, i.e., you have to use reasonable efforts to keep the information secret and not ‘readily ascertainable,'” Schwartz said.
She said that regulators and law professionals may need to find new ways to address those issues. “For example, we may need to develop standard terms or see common market terms for making data sets available — including through open source models. Those terms will of course have to take into consideration privacy and other regulatory issues,” Schwartz continued. “Also, we may see reliance on respected, neutral third parties under confidentiality obligations to ‘audit’ data and algorithms using standard protocols.” Andrew Burt, managing partner at the AI-focused law firm Bnh.ai , raised a more ground-level problem in response to questions from VentureBeat. “On AI transparency, this is becoming a major issue we see with our clients, largely because of the number of third parties that are involved in the AI lifecycle — from training data to model development, deployment infrastructure and more,” he said. “In practice, every party involved tends to insert its own IP claims and inject new sources of risks into the process of deploying AI. So the aims of transparency and accountability often conflict with IP claims in real-world deployment.” While allowing that vigilant regulators are keeping a watchful eye on where AI pushes the boundaries of law and regulation, and that present guidelines are able to handle most of the curveballs AI throws their way, Burt is concerned about the process of solving those novel challenges when they do inevitably arrive. “Just like other areas in the realm of IP and beyond, there’s no clear consensus on how to address these challenges, which creates uncertainties that make AI liabilities hard to address for companies in practice,” he said.
According to Iancu, the USPTO is working on that, though. “We will use the RFCs and our recently issued AI report for continued exploration of other measures the agency may take to bolster the understanding and reliability of IP rights for emerging technologies such as AI,” he said. “These steps may include further engagement with the public, additional guidance for stakeholders, continued training for examiners on emerging technologies, and consultations with foreign IP offices.”
"
|
4,039 | 2,020 |
"Google says its Parallel Tacotron model generates synthetic voices 13 times faster than its predecessor | VentureBeat"
|
"https://venturebeat.com/ai/google-says-its-parallel-tacotron-model-generates-synthetic-voices-13-times-faster-than-its-predecessor"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google says its Parallel Tacotron model generates synthetic voices 13 times faster than its predecessor Share on Facebook Share on X Share on LinkedIn Google Home Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
In December 2017, Google released Tacotron 2, a machine learning text-to-speech (TTS) system that generates natural-sounding speech from raw transcripts. It’s used in user-facing services like Google Assistant to create voices that sound humanlike, but it’s relatively compute-intensive. In a new paper, researchers at the search giant claim to have addressed this limitation with what they call Parallel Tacotron, a model that’s highly parallelized during training and inference to enable efficient voice generation on less-powerful hardware.
Text-to-speech synthesis is what’s known as a one-to-many mapping problem. Given any snippet of text, multiple voices with different prosodies (intonation, tone, stress, and rhythm) could be generated. Even sophisticated models like Tacotron 2 are prone to errors like babble, cut-off speech, and repeating or skipping words as a result. One way to address this is to augment models by incorporating representations that capture latent speech factors. These representations can be extracted by an encoder that takes ground-truth spectrograms (a visual representation of speech frequencies over time) as its input; this is the approach Parallel Tacotron takes.
In experiments, to train Parallel Tacotron, the researchers say they used a dataset containing 405 hours of speech including 347,872 utterances from 45 speakers in 3 English accents (32 U.S. English speakers, 8 British English, and 5 Australian English speakers). Training took a day using Google Cloud TPUs, application-specific integrated circuits developed specifically to accelerate AI.
To evaluate Parallel Tacotron’s performance, the researchers had human reviewers rate 1,000 sentences synthesized using 10 U.S. English speakers (5 male and 5 female) in a round-robin style (100 sentences per speaker). While there’s room for improvement, the results suggest that Parallel Tacotron “did well” compared with human speech. Moreover, Parallel Tacotron was about 13 times faster than Tacotron 2.
“A number of models have been proposed to synthesize various aspects of speech (e.g., speaking styles) in a natural sounding way,” the researchers wrote. “Parallel Tacotron matched the baseline Tacotron 2 in naturalness and offered significantly faster inference than Tacotron 2.” The release of Parallel Tacotron, which is available on GitHub , comes after Microsoft and Facebook detailed speedy text-to-speech techniques of their own. Microsoft’s FastSpeech features a unique architecture that not only improves performance in a number of areas but eliminates errors like word skipping and affords fine-grained adjustment of speed and word break. As for Facebook’s system, it leverages a language model for curation to create voices 160 times faster compared with a baseline.
"
|
4,040 | 2,020 |
"AWS releases models and datasets to help predict COVID-19's spread | VentureBeat"
|
"https://venturebeat.com/ai/aws-releases-models-and-datasets-to-help-predict-covid-19s-spread"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AWS releases models and datasets to help predict COVID-19’s spread Share on Facebook Share on X Share on LinkedIn A computer image of the type of virus linked to COVID-19.
Amazon Web Services (AWS) today open-sourced a new simulator and machine learning toolkit for anticipating and mitigating the spread of COVID-19. AWS says that the suite, which comprises a disease progression simulator and models to test the impact of various intervention strategies, can help to accurately capture many of the complexities of the virus in the world.
While there have been a number of breakthroughs in understanding COVID-19, such as how soon an exposed person will develop symptoms, building an all-encompassing epidemiological model remains an uphill battle. Challenges in model building include identifying variables that influence disease spread across cities, countries, and populations. A performant model must also combine intervention strategies such as closures and stay-at-home orders and explore hypotheticals by incorporating trends from COVID-19-like diseases.
The machine learning models in AWS’ suite bootstrap by estimating disease progression and comparing the outcomes to historical data. Data scientists can run a simulator to play out what-if scenarios for different interventions and use templates for the state level in the U.S., India, and countries in Europe. In these templates, the toolkit draws on data sources that frequently publish the number of new COVID-19 cases worldwide.
For the U.S., AWS’ suite uses the Delphi Epidata API from Carnegie Mellon University to access various datasets, including but not limited to the Johns Hopkins Center for Systems Science and Engineering, survey trends from Google search and Facebook, and historical data for H1N1 from 2009 to 2010. The toolset models the disease progression for each individual in a population and then reports the aggregate state of the population.
AWS’ simulator can assign a probability distribution to disease variables for each individual. For example, users can set parameters like whether individuals will develop symptoms within 2 to 5 days after exposure or 14 to 21 days after exposure. The simulator also captures population dynamics, such that the transition from one state to the next for an individual is influenced by the states of the others in the population. For example, a person transitions from a “susceptible” to “exposed” state in the model based on factors like whether the person is vulnerable due to preexisting conditions and interventions such as social distancing.
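At its core, the kind of individual-level simulator described above is a loop over people, each carrying a disease state and per-person transition probabilities. The sketch below is a deliberately minimal, hypothetical version (not AWS's code): each susceptible person becomes exposed with a probability scaled by the current number of exposed contacts and by an intervention factor standing in for measures like social distancing.

```python
import random

def step(states, exposure_prob=0.05, intervention=1.0, rng=random):
    """Advance every individual one day. 'states' maps person id to
    'S' (susceptible) or 'E' (exposed); intervention < 1.0 scales
    down contact risk, modeling measures like social distancing."""
    exposed = sum(1 for s in states.values() if s == "E")
    p = min(1.0, exposure_prob * exposed * intervention)
    new_states = {}
    for person, s in states.items():
        if s == "S" and rng.random() < p:
            new_states[person] = "E"
        else:
            new_states[person] = s
    return new_states

rng = random.Random(0)          # seed for reproducible runs
population = {i: "S" for i in range(100)}
population[0] = "E"             # seed one exposed individual
for _ in range(10):             # simulate ten days
    population = step(population, intervention=0.5, rng=rng)
print(sum(1 for s in population.values() if s == "E"))
```

A production simulator layers richer state machines (exposed, infectious, recovered), per-person risk factors, and calibrated probability distributions on top of this same loop, then aggregates the individual states into the daily case projections described below.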
“Our open-source code simulates COVID-19 case projections at various regional granularity levels. The output is the projection of the total confirmed cases over a specific timeline for a target state or a country, for a given degree of intervention,” AWS explains in a blog post. “Our solution first tries to understand the approximate time to peak and expected case rates of the daily COVID-19 cases for the target entity (state/country) by analysis of the disease incidence patterns. Next, it selects the best (optimal) parameters using optimization techniques on a simulation model. Finally, it generates the projections of daily and cumulative confirmed cases, starting from the beginning of the outbreak [to] a specified length of time in the future.” Beyond AWS, Google Cloud has released models and datasets to help develop mitigation measures around COVID-19.
Facebook, too, has released models predicting the spread of COVID-19 in countries including the U.S.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
4,041 | 2,020 |
"Chromecast with Google TV attempts to revive Android TV for the streaming wars | VentureBeat"
|
"https://venturebeat.com/media/chromecast-google-tv-android-streaming-wars"
|
Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Chromecast with Google TV attempts to revive Android TV for the streaming wars
At its Launch Night In event today, Google officially unveiled Chromecast with Google TV. The new HDMI dongle is available starting today in the U.S. for $50 (it’s coming to Australia, Canada, France, Germany, Ireland, Italy, Spain, and the U.K. on October 15). That’s quite cheap given that it comes with its own remote control and dedicated user interface. The latter is so important to Google that the company put it in the product name. “Think of Google TV as your personal entertainment content curator,” Google TV general manager Shalini Govil-Pai said. “We will be bringing the Google TV experience to many more streaming devices in the Android TV ecosystem. Today, Google TV is making its debut on the all new Chromecast.” Like all the Google products announced today, Chromecast with Google TV has been leaking for months. The real news here is the confirmation that Google is giving Android TV another go by bringing back Google TV. (Android TV launched in June 2014, succeeding an even earlier smart TV effort from October 2010 called Google TV. Yes, Google rebrands as often and as poorly as Microsoft.) Android TV will continue to exist, as Google TV is an additional interface that runs on top.
The smart TV space is largely dominated by Tizen and webOS globally, with Roku and Amazon’s Fire TV winning over U.S. consumers. While Google saw plenty of success with its cheap Chromecast TV dongle early on ( 55 million sold from 2013 to 2016 ), consumers did not convert to Android TV as they upgraded their TV experience. Google is hoping it can use the successful Chromecast to revive its smart TV ambitions.
With the streaming wars heating up, the game streaming battle igniting, and the pandemic dumping fuel on both, this is Google’s latest attempt to claim the biggest screen in your home. Its Google Home smart speakers (slowly being rebranded under the Google Nest name ) are already in millions of homes, but we are visual creatures, and screens are still the golden goose.
Software and hardware Again, Google is highlighting the software for this Chromecast because it actually has a user interface. Previous Chromecasts only let you cast whatever you were already streaming on your phone or PC — a simplicity that was key to its success but also ultimately led to users looking for a replacement.
Still, Govil-Pai highlighted today what exactly Google TV lets you do. You can search for content, but Google TV’s main home page also shows titles pulled from all your streaming services like Netflix, Disney+, HBO Max, Peacock, France.tv, Rakuten, Viki, and YouTube. (It also shows options from other channels that you don’t have, to tease what you’re missing out on). A big chunk of the user interface is dedicated to suggestions of movies and TV shows that Google thinks you might want to watch.
The hardware is also keeping up with this paradigm shift. If users are going to be doing more than casting, Google has made sure Chromecast with Google TV ships with its own remote control, unlike its predecessors. The remote features a directional pad and dedicated buttons for Google Assistant, YouTube, and Netflix. It also has programmable TV controls for power, volume, and input so you (maybe) can use just the one remote for your TV.
Chromecast with Google TV supports 4K HDR at up to 60 frames per second. It works with multiple Google accounts, as well as Bluetooth devices and USB-to-Ethernet adapters.
Finally, the price tag further confirms Google is aggressively pursuing the space again. At $50, Chromecast with Google TV costs $20 less than the Chromecast Ultra.
Stadia So far 2020 has seen plenty of game streaming services debut in some form, including Microsoft xCloud , Nvidia GeForce Now , and as of last week Amazon Luna.
Google launched Stadia in November 2019, but the response has been lukewarm at best.
It’s thus surprising that Chromecast with Google TV doesn’t support Stadia at launch (support will come sometime in the first half of 2021). At today’s event, Google reserved all Stadia talk for the Pixel 5 and Pixel 4a 5G section.
Presumably, Google will talk up Stadia for the new Chromecast when the company deems it ready for your TV. Maybe a “Chromecast with Stadia” bundle (that includes the Stadia Controller ) is in the works.
GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it.
Discover our Briefings.
"
|
4,042 | 2,020 |
"Apple’s 'privacy' changes undermine international advertising best practices | VentureBeat"
|
"https://venturebeat.com/marketing/apples-privacy-changes-undermine-international-advertising-best-practices"
|
Guest Apple’s ‘privacy’ changes undermine international advertising best practices Apple store seen in Hong Kong. (Photo by Budrul Chukrut/SOPA Images/LightRocket via Getty Images)
Apple’s decision to require users to opt in to its IDFA tracking has understandably disrupted the ad tech ecosystem.
Its new measures, albeit now delayed until “early” 2021, ignore current initiatives from the IAB around transparency and privacy-first best practices.
The IAB-led initiatives I’m referring to include ads.txt, app-ads.txt, sellers.json, the GDPR Transparency and Consent Framework (TCF), and the Open Measurement SDK. Each of these solutions was created to help standardize marketing practices around the world. And in doing so, the IAB managed to help simplify digital advertising processes while making them more open and transparent to all parties.
Privacy first, transparency second The new approach Apple plans to roll out as part of iOS 14 will fragment those worldwide practices and vastly reduce transparency. The IAB Europe recently urged Apple to consider adhering to its TCF standards in order to promote interoperability as opposed to shutting vendors out. The TCF was designed to ensure compliance with European privacy laws when processing personal data or accessing and/or storing information on a user’s device. Unfortunately, Apple took a different approach on privacy with its decision to essentially deprecate the IDFA.
In its July statement, the IAB Tech Lab explained that Apple’s plans regarding iOS 14 conflict with the TCF standards. For example, on Apple devices, users can opt in to or out of services such as geolocation data at the operating system (OS) level. But if the user chooses to do so, app publishers will not be notified. As a result, apps would still show an opt-in request pop-up and annoy the user, while being unable to signal the user’s choice to their vendors.
On the other hand, if a user is using an app that meets TCF standards but does not opt in to ad tracking, the publisher will not be able to synchronize the user’s choice with Apple’s OS. Therefore, iOS isn’t able to register the user’s choice at the system level.
Both scenarios regarding iOS 14 hinder user-centric transparency. The ideal solution would be for Apple to join global privacy standards, like the IAB’s, rather than develop its own proprietary methods. But let’s dig deeper.
Comparing App-ads.txt and SKAdNetwork/Info.plist App-ads.txt is just one of several measures the IAB has taken to reduce ad fraud in the industry and promote more transparency — but it’s also the most applicable when comparing its goals to that of Apple’s IDFA update (i.e., SKAdNetwork). With app-ads.txt, publishers maintain a text file on their developer URL, which lists all authorized vendors of their inventory. This information is readily available to anyone who wants to access it. And in doing so, brands and agencies can ensure that their marketing dollars only go to authorized and reputable vendors.
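For illustration, here is a small sketch of what app-ads.txt entries look like and how a buyer might parse them into the IAB-defined fields (ad system domain, seller account ID, relationship, and optional certification authority ID). The publisher and seller entries shown are made up.

```python
# Hypothetical contents of a developer-URL app-ads.txt file.
SAMPLE = """\
# app-ads.txt for example-publisher.com
examplessp.com, pub-1234567890, DIRECT, f08c47fec0942fa0
resellerexchange.com, 987654, RESELLER
"""

def parse_app_ads_txt(text):
    """Parse app-ads.txt lines into dicts of the four IAB-defined fields."""
    records = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        fields = [f.strip() for f in line.split(",")]
        if len(fields) < 3:
            continue  # skip malformed entries
        records.append({
            "domain": fields[0],                 # ad system domain
            "account_id": fields[1],             # seller account ID
            "relationship": fields[2].upper(),   # DIRECT or RESELLER
            "cert_authority_id": fields[3] if len(fields) > 3 else None,
        })
    return records

records = parse_app_ads_txt(SAMPLE)
```

Because the file is plain text at a public URL, any buyer, vendor, or researcher can crawl and verify it, which is the transparency property the article contrasts with Apple's approach.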
Apple’s SKAdNetwork, on the other hand, requires publishers to enter the registered IDs of each of their vendors (i.e., ad networks) into the Info.plist file within the app’s configuration data in the App Store. Now you might be thinking: so where is the lack of transparency from Apple? Well, the problem is that only Apple is able to view the ad network partners listed.
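For context, the SKAdNetwork-related entries in an app's Info.plist look roughly like the fragment below. The NSUserTrackingUsageDescription and SKAdNetworkItems keys are Apple's documented keys, while the identifier value shown is a placeholder, not a real ad network ID.

```xml
<key>NSUserTrackingUsageDescription</key>
<string>This identifier will be used to deliver personalized ads to you.</string>
<key>SKAdNetworkItems</key>
<array>
    <dict>
        <key>SKAdNetworkIdentifier</key>
        <string>example1234567.skadnetwork</string>
    </dict>
</array>
```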
The two concepts, app-ads.txt and Info.plist, share similar features, but when it comes to real transparency they are far from the same. Here’s a more detailed breakdown:

App-ads.txt. Initiated by the IAB Tech Lab and launched March 13, 2019, for mobile and OTT, with wide adoption across the industry. Its purpose is to enable buyers to distribute programmatic spend only through channels that are explicitly trusted and authorized by the originating publisher, and to combat ad fraud through illegitimate inventory arbitrage. Publishers implement it by publishing their authorized sellers and monetization platforms in plain text on the developer URL, including the domain name of the advertiser, the seller account ID, the type of relationship, and the certification authority ID. Strengths: it is a unified, standardized open solution that can be crawled by all tech vendors; the IAB Tech Lab offers access to an aggregation of ads.txt files published around the internet (for IAB members only); and according to Pixalate, apps that lack an app-ads.txt file have 13% more invalid traffic versus apps that have one in place. Weaknesses: Apple and Roku do not support the IAB standard for crawling for app-ads.txt yet (for Apple, authorized sellers need to be identified via a public search API; BidSwitch, 42matters, and Apptopia offer public mapping of developer URLs as well), and implementing app-ads.txt is not mandatory, so adoption was slow (at least at the beginning).

SKAdNetwork and Info.plist. Initiated by Apple and launched March 29, 2018 (with the iOS 11.3 update), though only a handful of industry insiders even paid attention at the time. Info.plist is part of SKAdNetwork; it is a property list file in a publisher’s app that contains the app’s configuration data in the App Store. Publishers implement it by updating Info.plist to include a purpose string in the system prompt NSUserTrackingUsageDescription with a custom message describing why they’d like to track a user, and by adding each authorized ad network ID to Info.plist by updating the SKAdNetworkItems key with an additional dictionary (an open list of SKAdNetwork IDs is publicly available). Strengths: fewer insights into the data stream for third-party vendors mean less likelihood that users can be triangulated and individually identified, and Apple confirms the authenticity of all apps, so the likelihood of ad fraud is reduced. Weaknesses: it only allows five parameters to be transmitted, and data will be exchanged between Apple and ad networks directly, so neither the app nor any other third party will be able to collect, verify, or act on the data.

Even though the implementation of these two solutions is relatively simple, the implications are very different. It seems like Apple is implementing new measures in the name of privacy while simultaneously building new walls around its user data. This, in turn, is undermining a mobile advertising ecosystem that is trying to keep apps free for end users. Meanwhile, the IAB has been working with partners across the industry to champion solutions for greater transparency.
To be more straightforward, Apple’s dramatic changes in the name of privacy conflict with more sensible transparency moves already underway by the IAB. While Apple first introduced SKAdNetwork and Info.plist in March 2018, only a handful of industry insiders even batted an eye at the time. But now with the future of the IDFA in limbo, Apple’s SKAdNetwork and Info.plist may very well be the future.
While everyone agrees that privacy-first approaches represent the next phase of digital advertising, there are many paths to achieving this goal for users. It’s time for all parties to take the extra time Apple has granted us in order to come together with user experience in mind. Let’s resolve the conflicts and start building an open, transparent, and privacy-centric future within the digital advertising ecosystem.
Ionut Ciobotaru is Chief Product Officer at Verve Group.
He founded mobile monetization platform PubNative and has 15+ years of experience in the ad tech industry. He previously held leading roles at Applift, Weebo, and EA.
"
|
4,043 | 2,020 |
"Spider-Man’s PS5 remaster gives Peter Parker a new face | VentureBeat"
|
"https://venturebeat.com/games/spider-mans-ps5-remaster-gives-peter-parker-a-new-face"
|
Spider-Man’s PS5 remaster gives Peter Parker a new face The new Peter Parker.
Fans of Insomniac’s Spider-Man for PlayStation 4 are going to find a big surprise when they play the remastered version of the open world web-slinging game on PlayStation 5.
Peter Parker has a new face, which you can see above.
This isn’t part of some kind of simple optimization. Insomniac has recast the character, basing the model on a new actor.
“We loved working with John Bubniak on the original game; however, to get a better match to Peter Parker/Spider-Man actor Yuri Lowenthal’s facial capture, we have cast Ben Jordan to be the face model for Peter Parker on the PS5 console,” Insomniac Games community manager James Stevenson notes in a new PlayStation Blog post.
“He looks incredible in-game, and Yuri’s moving performances take on a new life.” Yuri Lowenthal is the voice actor for Peter Parker/Spider-Man in the game, while John Bubniak served as the original face model for the character. Now Peter gets his face from Ben Jordan.
What the hell happened to Peter Parker's face lol Before vs After pic.twitter.com/5uGn6nzdng — Vishhal Bhattman (@vbhatt08) September 30, 2020 It’s a bizarre change to see in a game that millions of people have already played. It’s not every day that a game just recasts its lead’s face. The explanation is also strange. Is Ben Jordan really that much better a match for Lowenthal’s facial capture work that it was worth all this effort? You have to wonder if the fact that Jordan looks more like the current movie actor for Spider-Man, Tom Holland, influenced this decision.
It also brings into question just how much games should change after release. Patches give developers an opportunity to tweak their titles long after launch. A big remaster like this one for Spider-Man gives a studio a chance to make even more changes. But is replacing the hero’s face going too far? I know that if I saw a Spider-Man movie two years after it first came out and Peter Parker had a new face, I’d be beyond distracted.
"
|
4,044 | 2,020 |
"You can't solo security: COVID-19 game security report | VentureBeat"
|
"https://venturebeat.com/business/you-cant-solo-security-covid-19-game-security-report"
|
Sponsored You can’t solo security: COVID-19 game security report The gaming industry shot up quickly as a prime target for cybercriminals. In two years, 152 million attacks hit gaming.
COVID-19 added fuel to the fire, and now cybercriminals have a target-rich environment.
See the latest attack trends in gaming in our newly released report. You’ll get exclusive insights based on Akamai’s 24 months of data up through June 2020.
Report highlights: survey results from over 1,200 gamers (in partnership with DreamHack); top web attack vectors in gaming and across all industries; credential stuffing and phishing attacks targeting players; emerging markets selling boosted accounts and hacked information; and the most targeted countries for malicious login attempts and where they’re coming from.
"
|
4,045 | 2,020 |
"Verizon debuts 5G Internet Gateway with augmented reality self-setup | VentureBeat"
|
"https://venturebeat.com/business/verizon-debuts-5g-internet-gateway-with-augmented-reality-self-setup"
|
Verizon debuts 5G Internet Gateway with augmented reality self-setup
Though 5G is now largely associated with smartphones, Verizon originally launched the next-generation cellular technology as a high-speed broadband solution, promising cable modem-like 300Mbps averages and 1Gbps peak data rates using millimeter wave 5G modems. Unfortunately, network buildout and local installation challenges limited Verizon’s footprint , so the company is addressing one of those pain points today with the 5G Internet Gateway.
The all-in-one wireless broadband device enables small business owners and individual users to access Verizon’s highest-speed 5G network without help from an installer, another welcome step forward for millimeter wave technology. Customers who might have needed scheduled visits from network technicians to set up 5G broadband service can now handle installation on their own using an augmented reality self-setup app. It’s as close to a turnkey enabling solution for mmWave “fixed 5G” service as has yet been seen.
Designed to minimize hardware footprints within a small office or home, the 5G Internet Gateway combines the 5G modem and Wi-Fi router in one white plastic box, rather than using separate pieces connected via cabling.
After loading Verizon’s installation app, users are guided to mount the single box indoors on either a wall or window, confirming that a 5G signal is available before using an included adhesive bracket to secure the Gateway in a location. The box can be pivoted on the bracket for optimal mmWave signal strength, then locked into a specific angle for guaranteed coverage.
Verizon and its partners have worked steadily for the past two years to simplify the sometimes challenging process of transforming the 5G network’s high-speed mmWave signals into Wi-Fi that can be used across homes and small businesses. While mmWave has the potential to deliver download speeds comparable to fiber-based internet, its signals can be blocked by walls and some windows, making proper installation critical — and sometimes forcing the 5G receiving hardware to sit in an unobstructed outdoor location, connected to an indoor Wi-Fi router. New mmWave antenna hardware and network tweaks promise to bring the cellular technology to more people over the next year.
The new 5G Internet Gateway promises to work from a longer range than prior Verizon 5G hardware and will be available to customers in eight U.S. cities, including the recent addition of Minneapolis and St. Paul. That’s not a huge footprint — only twice the size of the company’s 2018 initial rollout — but Verizon promises that the 5G Home network will reach “at least 10 cities” in the U.S. by year’s end. It remains to be seen how much of those cities are actually covered by its millimeter wave service, which has been uneven within purported areas of availability.
Verizon is preserving the same $50 price point for existing customers and $70 pricing for new customers originally announced for its 5G Home service , while adding a number of perks: one year of Disney+ service , a free Stream TV device for access to multiple over-the-top cable channels, and a collection of Amazon devices ranging from an Echo Show 5 and Echo Dot to a Ring Stick Up Cam and Amazon Smart Plug. New users must sign up for the service by October 30 to qualify for the free year of Disney+ access.
"
|
4,046 | 2,020 |
"Strigo raises $8 million to help software companies train their customers remotely | VentureBeat"
|
"https://venturebeat.com/business/strigo-raises-8-million-to-help-software-companies-train-their-customers-remotely"
|
Strigo raises $8 million to help software companies train their customers remotely Strigo
Strigo , a platform that helps companies deliver software training to their clients remotely, has raised $8 million in a series A round of funding led by Greycroft and Velvet Sea Ventures. The Israeli startup also said it has tripled its customer base during the COVID-19 crisis.
The pandemic has been a boon for online communication tools like Zoom and Microsoft Teams , as it has forced companies to embrace remote working. These platforms have also gotten a whole new lease on life through applications they were never intended to address, such as dating and virtual social gatherings. But a “one-size fits all” ethos is often a detriment in the technology realm, where specific problems tend to require dedicated tools.
Founded in 2017, Tel Aviv-based Strigo has built a platform that enables software companies to onboard customers and teach them how to use their software. This is something companies have traditionally done in person — sending specialists out to provide time-consuming, resource-intensive training. That approach is now either impossible or fraught with friction, but Strigo’s unified platform allows trainers to communicate, share content, and collaborate in real scenarios involving the software.
“It’s really about providing hands-on training in which customers learn by practicing within actual product environments,” Strigo cofounder and CEO Nevo Peretz told VentureBeat.
Above: Strigo training cloud: One-on-one with attendee panel and dashboard
Strigo can also be used for in-person software training sessions on customers’ own premises since the platform doesn’t rely on local IT teams to facilitate sessions or book dedicated lab spaces. Employees can turn up to a standard meeting room with their laptops and Strigo will enable full access to the software from their browser — no installations required.
Fluid
Strigo had previously raised $2.5 million, and with another $8 million in the bank it’s well-positioned to grab a bigger piece of the $368 billion corporate training market and add to its existing roster of clients, which includes VMware, Sage, and Docker.
The company is also working on a bunch of new tools over the next year, including “hands-on group collaboration,” which will enable learners to complete assignments as part of a team exercise.
“This will help greatly enrich the training experience, as it allows students to learn from each other through collaborative hands-on sessions, helping drive better knowledge retention while helping trainers manage their sessions more efficiently,” Peretz said.
While it is possible to run training sessions by combining tools and sharing screens, this approach usually involves switching between multiple applications — such as video tools, remote support, and virtual lab platforms.
“The patching together of these tools creates a poor training experience, inability to see the training operation as a whole and identify problems, and it is also difficult to scale,” Peretz added. “With Strigo, students who are working on a lab exercise can call for assistance, and trainers can seamlessly enter the student’s lab, open a one-on-one communication channel, and work together to address the question or issue. This experience mirrors a real classroom, where a trainer can come to a student, look over their shoulder, and work with them.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
4,047 | 2,020 |
"Square adopts QR codes to bring self-serve ordering to restaurants | VentureBeat"
|
"https://venturebeat.com/business/square-adopts-qr-codes-to-bring-self-serve-ordering-to-restaurants"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Square adopts QR codes to bring self-serve ordering to restaurants Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Square has introduced a new self-serve ordering feature for restaurants that allows dine-in customers to order and pay for their food through their phones, minimizing physical contact with staff.
The restaurant industry has been among the hardest hit by the global pandemic, though this has led to a surge in demand for meal deliveries.
As the world eases out of lockdown, brick-and-mortar eateries have embraced technology as a way to entice customers back through their doors — the humble QR code has proved particularly popular , with many outlets turning to the matrix barcode format to deliver menus directly to customers’ mobile devices.
In this vein, Jack Dorsey’s Square, which is perhaps better known for its point-of-sale (POS) system that allows merchants to accept card payments through a smartphone or tablet, has been expanding and adapting its services to this “new normal.” Merchants that are signed up to Square Online , Square’s ecommerce offering, can now create QR codes and place them at tables or booths. Using the camera on their phones, customers scan the QR code and the menu opens on their device, where they can select items and pay in one fell swoop.
This is different from other third-party QR-related services that simply direct customers to an online menu or website. Indeed, each Square QR code is linked to a specific table or collection area — this helps restaurant staff know exactly where a customer is located.
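The per-table linkage described above can be sketched in a few lines of Python. The domain and parameter names below are invented for illustration; Square has not published its URL format:

```python
from urllib.parse import parse_qs, urlencode, urlparse

# Hypothetical sketch of a per-table QR payload (domain and parameter names
# are invented, not Square's actual scheme). Each code encodes which merchant
# and which table it was placed at, which is what lets staff deliver the
# order to the right spot.
def table_qr_payload(merchant_id: str, table_id: str) -> str:
    base = "https://order.example.com/menu"
    return base + "?" + urlencode({"merchant": merchant_id, "table": table_id})

def locate_order(payload: str) -> str:
    """Recover the table ID from a scanned payload."""
    return parse_qs(urlparse(payload).query)["table"][0]

payload = table_qr_payload("cafe-42", "table-7")
print(payload)                # .../menu?merchant=cafe-42&table=table-7
print(locate_order(payload))  # table-7
```

Because the location is baked into the payload rather than looked up from a shared menu link, two diners scanning codes at different tables produce distinguishable orders with no extra input.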
Above: Square’s new self-serve feature limits contact between diner and merchant.
Although Square’s new self-serve tool is designed primarily to reduce contact between the buyer and seller, it should also bring additional efficiencies by reducing waiting times. Diners no longer have to hang around for someone to serve them or bring the check — everything is done in a single transaction initiated entirely by the customer.
However, one potential downside here lies in tipping — normally, a diner would prefer to tip after they have received service. With this system, the entire payment is taken in advance, leaving consumers in a dilemma about how much they should (or shouldn’t) tip.
This launch comes just a few months after PayPal launched QR code payments in 28 markets globally, in a move that further extended PayPal’s reach beyond its traditional focus on online payments. In contrast, Square has increasingly been moving into the online sphere from its brick-and-mortar roots, so it has been interesting to see the paths each company has taken toward covering “all bases.” Back in May, Square rolled out a new online checkout designed to help small businesses looking to rapidly transition to ecommerce, enabling them to accept online card payments through any website, social media profile, messaging app, or SMS.
Both PayPal and Square’s shares have hit all-time highs in the past month, which is testament to the surge in demand for digital payment services.
Square’s self-serve ordering is available now for sellers in the U.S., U.K., Canada, and Australia.
"
|
4,048 | 2,020 |
"Seattle greenlights minimum wage for Uber and Lyft drivers | VentureBeat"
|
"https://venturebeat.com/business/seattle-greenlights-minimum-wage-for-uber-and-lyft-drivers"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Seattle greenlights minimum wage for Uber and Lyft drivers Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
( Reuters ) — The Seattle City Council passed a minimum pay standard covering drivers for companies like Uber and Lyft on Tuesday.
Under the ordinance, effective January, the drivers will now earn at least $16.39 per hour — the minimum wage in Seattle for companies with more than 500 employees.
Seattle’s law, modeled after a similar regulation in New York City, aims to reduce the amount of time drivers spend “cruising” without a passenger by paying drivers more during those times.
City officials argue this should prevent Uber and Lyft from oversaturating the market at drivers’ expense, but the companies say it would effectively force them to block some drivers’ access to the app. Both Uber and Lyft have locked out drivers in response to the NYC law.
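The NYC-style mechanism works by tying the pay floor to fleet utilization, so idle "cruising" time raises what companies owe per passenger minute. A simplified sketch of that structure (the rate and formula here are illustrative, not the actual regulation):

```python
# Simplified sketch of a utilization-adjusted pay floor of the kind New York
# uses (numbers and structure are illustrative, not the actual ordinance).
# Dividing the per-minute floor by fleet utilization makes cruising time
# expensive for the company: the more idle time, the more each passenger
# minute must pay.
def per_trip_minimum(trip_minutes: float, per_minute_floor: float,
                     utilization: float) -> float:
    return round(trip_minutes * per_minute_floor / utilization, 2)

# A 20-minute trip against a $0.30/minute floor:
print(per_trip_minimum(20, 0.30, 1.0))  # fully utilized fleet
print(per_trip_minimum(20, 0.30, 0.5))  # half of driver time spent cruising
```

Under this design, a company can lower its required payout only by keeping drivers busier, which is why the companies respond by limiting how many drivers can log on.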
“The City’s plan is deeply flawed and will actually destroy jobs for thousands of people — as many as 4,000 drivers on Lyft alone — and drive rideshare companies out of Seattle,” Lyft said in a statement.
Uber did not immediately respond to a request for comment.
Researchers at the University of California, Berkeley, and New York’s New School, who analyzed the Seattle ride-hailing market using city data and a driver survey, found drivers net only about $9.70 an hour, with a third of all drivers working more than 32 hours per week.
But a study of data provided by Uber and Lyft showed most ride-hailing workers in Seattle are part-time drivers whose earnings are roughly in line with the city’s median, defying some perceptions of drivers working full-time for little pay.
(Reporting by Tina Bellon in New York and Rama Venkat in Bengaluru; editing by Simon Cameron-Moore.)
"
|
4,049 | 2,020 |
"LinkedIn open-sources GDMix, a framework for training AI personalization models | VentureBeat"
|
"https://venturebeat.com/business/linkedin-open-sources-gdmix-a-framework-for-training-ai-personalization-models"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages LinkedIn open-sources GDMix, a framework for training AI personalization models Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
LinkedIn recently open-sourced GDMix , a framework that makes training AI personalization models ostensibly more efficient and less time-consuming. The Microsoft-owned company says it’s an improvement over LinkedIn’s previous release in the space — Photon ML — because it supports deep learning models.
GDMix trains fixed effect and random effect models, two kinds of models used in search personalization and recommender systems. They’re normally challenging to train in isolation, but GDMix accelerates the process by breaking down large models into a global model (fixed effect) and many small models (random effects) and then solving them individually. This divide-and-conquer approach allows for swifter training of models with commodity hardware, according to LinkedIn, thus eliminating the need for specialized processors, memory, and networking equipment.
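The divide-and-conquer idea can be illustrated with a toy example: fit one global logistic model on all the data, then fit a tiny per-member intercept on each member's own examples with the global score frozen. This is a sketch of the concept only, not GDMix's actual API:

```python
import math
import random
from collections import defaultdict

# Toy illustration of a GDMix-style decomposition (not GDMix's actual API):
# 1) fit one global "fixed effect" logistic model on all examples, then
# 2) fit a small per-member "random effect" (here, just an intercept) on each
#    member's own examples, with the fixed-effect score held constant.
random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic data: every member shares the global slope but has a personal bias.
member_bias = {"alice": 2.0, "bob": -2.0}
data = []  # (member, feature, label)
for member, bias in member_bias.items():
    for _ in range(200):
        x = random.uniform(-2, 2)
        p = sigmoid(1.5 * x + bias)
        data.append((member, x, 1 if random.random() < p else 0))

# Fixed effect: a single global weight, fit with plain gradient descent.
w = 0.0
for _ in range(300):
    grad = sum((sigmoid(w * x) - y) * x for _, x, y in data) / len(data)
    w -= 0.5 * grad

# Random effects: one intercept per member, trained on that member's data only.
intercepts = defaultdict(float)
for member in member_bias:
    rows = [(x, y) for m, x, y in data if m == member]
    b = 0.0
    for _ in range(300):
        grad = sum(sigmoid(w * x + b) - y for x, y in rows) / len(rows)
        b -= 0.5 * grad
    intercepts[member] = b

# The per-member intercepts recover the sign of each member's personal bias.
print(round(w, 2), sorted((m, round(b, 1)) for m, b in intercepts.items()))
```

Because each random-effect fit touches only one member's handful of rows, the small models can be trained independently and in parallel on commodity hardware, which is the efficiency argument LinkedIn makes for the framework.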
GDMix taps TensorFlow for data reading and gradient computation, which LinkedIn says led to a 10% to 40% training speed improvement on various datasets compared with Photon ML. The framework trains and evaluates models automatically and can handle models numbering in the hundreds of millions.
DeText , a toolkit for ranking with an emphasis on textual features, can be used within GDMix to train natively as a global fixed effect model. (DeText itself can be applied to a range of tasks, including search and recommendation ranking, multi-class classification, and query understanding.) It leverages semantic matching, using deep neural networks to understand member intents in search and recommender systems. Users can specify a fixed effect model type, and DeText and GDMix will train and evaluate it automatically, connecting the model to the subsequent random effect models. Currently, GDMix supports logistic regression models and the deep neural models DeText supports, as well as arbitrary models users design and train outside of GDMix.
The open-sourcing of GDMix comes after LinkedIn released a toolkit to measure AI model fairness: LinkedIn Fairness Toolkit (LiFT).
LiFT can be deployed during training to measure biases in corpora and evaluate notions of fairness for models while detecting differences in their performance across subgroups. LinkedIn says it has applied LiFT internally to measure the fairness metrics of training datasets for models prior to their training.
"
|
4,050 | 2,020 |
"HP's Reverb G2 Omnicept VR headset adds heart, eye, and face tracking | VentureBeat"
|
"https://venturebeat.com/business/hps-reverb-g2-omnicept-vr-headset-adds-heart-eye-and-face-tracking"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages HP’s Reverb G2 Omnicept VR headset adds heart, eye, and face tracking Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Back in May, HP announced the $600 Reverb G2 VR headset — a collaboration with Microsoft and Valve that appealed to both consumers and enterprises. Now it’s chasing progressive businesses with an upgraded Omnicept Edition of the hardware, adding multiple biometric sensors that will make the already capable Reverb G2 more useful for employee training and data analysis.
You can think of the Reverb G2 Omnicept Edition as HP’s comparatively overwhelming response to HTC’s Vive Pro Eye , which used Tobii eye tracking sensors to boost an already competent premium VR headset.
The Omnicept model goes beyond incorporating Tobii’s eye tracking and pupillometry sensors, adding a heart rate sensor and facial tracking camera system, plus software support so third-party developers can make apps using all the new sensors.
Businesses will be able to use the eye and pupil sensors to track user focus, enabling both attention analysis and foveated rendering.
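Foveated rendering uses the gaze estimate to concentrate GPU work where the user is actually looking. A minimal sketch of that idea, with invented thresholds (this is not HP's or Tobii's API):

```python
import math

# Illustrative foveated-rendering helper (thresholds invented; not HP's or
# Tobii's API): pixels near the reported gaze point get full shading detail,
# and the periphery gets progressively coarser shading rates.
def shading_rate(px: int, py: int, gaze_x: int, gaze_y: int) -> int:
    dist = math.hypot(px - gaze_x, py - gaze_y)  # pixels from gaze center
    if dist < 200:
        return 1  # full resolution in the fovea
    if dist < 600:
        return 2  # half resolution in the near periphery
    return 4      # quarter resolution in the far periphery

print(shading_rate(1080, 1080, 1080, 1080))  # 1: right where the user looks
print(shading_rate(1080, 1080, 500, 500))    # 4: far periphery
```

Since peripheral human vision resolves far less detail than the fovea, shading most of a 2,160×2,160 eye buffer at reduced rates can recover substantial GPU headroom without a visible quality loss.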
In time, HP expects the facial tracking system will enable VR users’ avatars to share live facial expressions during meetings and other social applications, reducing if not eliminating their typical stiffness.
Initially, the idea of tracking a VR user’s heartbeat might seem like either overkill or overreaching on biometric data , but HP’s business training justifications for the heart rate sensor are actually compelling. For instance, developers are already building public speaking educational apps and flight simulators for the Omnicept Edition that will help users manage and eventually reduce their heart rate-measured stresses under situations of cognitive load.
Because the privacy implications of monitoring and gathering this data are so significant, HP says it designed the Omnicept Edition’s firmware to secure data during capture and share it subject to GDPR requirements. Enterprises and partners apparently only get access to the data in de-identified, aggregated, and secured form, and the headset doesn’t store any data itself. It remains to be seen whether these measures will be adequate to protect users, though in enterprise and training contexts biometric data gathering may seem — or actually be — less onerous than in broader consumer applications.
Most of the Omnicept Edition’s specifications and hardware — including 2,160×2,160 eye displays, Valve-designed lenses and speakers, and twin wireless controllers — are virtually identical to the Reverb G2. But the new model includes an improved headband with a ratcheting tightness knob, plus all the new sensors, which together will likely cost a pretty penny. Final pricing has yet to be announced, but just as the Vive Pro Eye added hundreds of dollars to the Vive Pro’s price, customers can expect the Omnicept Edition to sell for a premium.
HP is targeting the Omnicept Edition specifically at enterprises with identified needs for the biometric hardware, rather than individual consumers. There will be an option to buy the headset outright, but average users won’t be able to do anything with the biometric sensors without supporting software. That said, the new model will be SteamVR-compatible and therefore capable of running plenty of software beyond purpose-built training applications.
The standard Reverb G2 will ship to distributors in October and should reach actual customers in early- to mid-November. Reverb G2 Omnicept Edition is planned for release in spring 2021.
"
|
4,051 | 2,020 |
"Google says Pixel's Hold for Me feature records and stores audio on-device | VentureBeat"
|
"https://venturebeat.com/business/google-says-pixels-hold-for-me-feature-records-and-stores-audio-on-device"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google says Pixel’s Hold for Me feature records and stores audio on-device Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
One of the just-announced Pixels’ most intriguing features is Hold for Me, a Google Assistant-powered service that waits on hold when you call a retailer, utility, or other business’ toll-free support number. When a human comes on the line, Hold for Me — which will launch in preview in English in the U.S. before expanding to other regions and devices — notifies you with sound, vibration, and a prompt on your screen.
Hold for Me was announced today at Google’s annual hardware event, and the company responded to a list of VentureBeat’s questions afterward. According to a spokesperson, Hold for Me is powered by Google’s Duplex technology, which not only recognizes hold music but also understands the difference between a recorded message — for example, “Hello, thank you for waiting” — and a representative on the line. (That said, a support page admits Hold for Me’s detection accuracy might not be high “in every situation.”) To design the feature, Google says it gathered feedback from a number of companies, including Dell and United, as well as from studies with customer support representatives.
“Every business’ hold loop is different, and simple algorithms can’t accurately detect when a customer support representative comes onto the call,” Google told VentureBeat. “Consistent with our policies to be transparent, we let the customer support representative know that they are talking to an automated service that is recording the call and waiting on hold on a user’s behalf.”
Hold for Me is an optional feature that must be enabled in a supported device’s settings menu and activated manually during each call. In the interests of privacy, Google says any audio processing Google Assistant uses to determine when a representative is on the line is done entirely on-device and doesn’t require a Wi-Fi or data connection. Effectively, audio from the call is not shared with Google or saved to a Google account unless a user explicitly shares it to help improve the feature. (Call data like recordings, transcripts, phone numbers, greetings, and disclosures are stored on Google servers for 90 days before deletion.) If the user doesn’t opt to share audio, interactions between Hold for Me and support representatives are wiped after 48 hours. Returning to a call when a customer support person becomes available stops audio processing.
Google claims its embrace of techniques like on-device processing and federated learning minimize the exchange of data between its servers. For instance, its Now Playing feature on Pixel phones , which identifies songs playing nearby, leverages federated analytics to analyze data in a decentralized way. Under the hood, Now Playing taps an on-device database of song fingerprints to identify music near a phone without the need for an active network connection.
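The on-device lookup pattern can be illustrated with a toy table of fingerprints. The hashing below is a stand-in for real audio fingerprinting and is not Google's implementation:

```python
import hashlib

# Conceptual sketch of on-device matching (not Google's actual Now Playing
# implementation): a local table maps compact fingerprints to titles, so a
# lookup needs no network connection. Real systems fingerprint spectrogram
# features; a hash of raw bytes stands in for that step here.
def fingerprint(audio_bytes: bytes) -> str:
    return hashlib.sha256(audio_bytes).hexdigest()[:16]

local_db = {fingerprint(b"clip-1"): "Song A", fingerprint(b"clip-2"): "Song B"}

def identify(audio_bytes: bytes) -> str:
    return local_db.get(fingerprint(audio_bytes), "unknown")

print(identify(b"clip-2"))  # Song B
print(identify(b"clip-9"))  # unknown
```

The key property is that both the fingerprinting and the lookup table live on the phone, so no audio ever has to leave the device to get an answer.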
Google’s Call Screen feature, which screens and transcribes incoming calls, also happens on-device, as do Live Caption , Smart Reply , and Face Match.
That’s thanks in part to offline language and computer vision models that power, among other things, the Google Assistant experience on smartphones like the Pixel 4 , Pixel 4a and 4a (5G), and Pixel 5.
"
|
4,052 | 2,020 |
"Glassdoor launches employee reviews for diversity and inclusion practices at companies | VentureBeat"
|
"https://venturebeat.com/business/glassdoor-launches-employee-reviews-for-diversity-and-inclusion-practices-at-companies"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Glassdoor launches employee reviews for diversity and inclusion practices at companies Share on Facebook Share on X Share on LinkedIn Employees can now rate companies based on their diversity and inclusion practices.
Glassdoor is now letting employees write diversity and inclusion reviews for companies in a bid to make employers more transparent. Employees will be able to rate and review companies on how they treat employees based on race, ethnicity, gender identity, sexual orientation, and membership in other underrepresented groups.
In a poll , Mill Valley, California-based Glassdoor , which has ratings on more than a million companies, found that job seekers and employees trust the employees already working at a company when it comes to understanding the state of diversity and inclusion there. Glassdoor said that 76% of job seekers and employees today report that a diverse workforce is an important factor when evaluating companies and job offers. The company said these new features are part of its public commitment to leveraging its product and resources to help achieve equity in and out of the workplace.
The Glassdoor survey was conducted by The Harris Poll. It found that job seekers and employees report that disparities still exist within companies with respect to experiences with and perceptions of diversity, equity, and inclusion in the workplace. The company undertook the effort after this year’s racial unrest.
“In recent months, many of Glassdoor’s more than 50 million users have been telling us they want more insight into what the current state of diversity & inclusion is like at a company,” Scott Dobroski, spokesman for Glassdoor, wrote in an email. “Then, after the murder of George Floyd, we saw employee reviews talking about diversity, racial justice, and related topics rise by 63% on Glassdoor.” Because of this, the company knew people wanted more data. “We believe we have a responsibility as a platform and as an employer to help drive equity in society, and we can help to create more equitable workplaces,” Dobroski said.
Job seekers and employees also say they want to work at companies that truly value diversity and inclusion as part of their culture. The survey shows that nearly half of Black (47%) and Latinx (49%) job seekers and employees have quit a job after witnessing or experiencing discrimination at work, significantly higher than white (38%) job seekers and employees.
In addition, 32% of job seekers and employees today say they would not apply to a job at a company where there is a lack of diversity among its workforce. This is significantly higher for Black job seekers and employees (41%) compared to white job seekers and employees (30%).
“Glassdoor publicly committed to leveraging its product, resources and platform to help drive societal change toward equality at scale,” Dobroski said.
Diversity & Inclusion rating
Above: Glassdoor’s D&I ratings are now available.
This rating is Glassdoor’s sixth and newest workplace factor that allows employees to rate how satisfied they are with diversity and inclusion at their current or former company, based on a five-point scale. The rating will appear alongside the five existing workplace factor ratings.
While the product was in stealth mode, employees across 12 companies started to rate their satisfaction with their company’s Diversity & Inclusion (D&I). So far, Salesforce has the highest D&I rating among this group according to its employees, with a 4.6 rating. Other companies currently rated in terms of their employee satisfaction with D&I include: Google: 4.4, Accenture: 4.2, Amazon: 4.1, Apple: 4.0, Deloitte: 4.0, Facebook: 4.2, McDonald’s: 3.7, Starbucks: 4.1, Target: 4.1, Uber: 3.6, and Walmart: 3.7.
More than 4,000 employees have rated their companies so far.
Demographic information
Above: Employees have the option of describing their demographic info on Glassdoor.
Glassdoor now enables U.S.-based employees and job seekers to voluntarily and anonymously share their demographic information to help others determine whether a company is actually delivering on its diversity and inclusion commitments.
Glassdoor users can now also provide information regarding their race and ethnicity, gender identity, sexual orientation, disability status, parental status, and more, all of which can be shared anonymously through their Glassdoor user profile.
Glassdoor will soon display company ratings, workplace factor ratings, salary reports and more, broken out by specific groups at specific companies. This information will equip employers with further data and insights to create and sustain more equitable workplaces. Sharing demographic information with Glassdoor will be optional and displayed anonymously.
Diversity FAQ across companies
Glassdoor is also debuting a new Company FAQ resource, offering a list of the most popular questions job seekers ask about companies, including a section dedicated to diversity and inclusion. Responses to the FAQs are taken from the employee reviews appearing on Glassdoor. The tool provides easier access to relevant reviews about D&I at specific companies.
According to the Glassdoor survey, 63% of job seekers and employees say their employer should be doing more to increase the diversity of its workforce. But significantly more Black and Latinx job seekers and employees feel this way (71% and 72% respectively) than white job seekers and employees (58%).
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
4,053 | 2,020 |
"GitHub launches code scanning to unearth vulnerabilities early | VentureBeat"
|
"https://venturebeat.com/business/github-launches-code-scanning-to-unearth-vulnerabilities-early"
|
"GitHub launches code scanning to unearth vulnerabilities early
A GitHub logo seen displayed on a smartphone.
GitHub is officially launching a new code-scanning tool today, designed to help developers identify vulnerabilities in their code before it’s deployed to the public.
The new feature is the result of an acquisition last year when GitHub snapped up San Francisco-based code analysis platform Semmle ; the Microsoft-owned code-hosting platform revealed at the time that it would make Semmle’s CodeQL analysis engine available natively across all open source and enterprise repositories.
After several months in beta , code scanning is now rolling out to all developers.
Breaches
It’s estimated that some 60% of security breaches involve unpatched vulnerabilities. Moreover, 99% of all software projects are believed to contain at least one open source component, meaning that dodgy code can have a significant knock-on impact for many companies.
Typically, fixing vulnerabilities requires a researcher to first find the vulnerability and disclose it to the repository maintainer, who fixes the issue and alerts the community, who then update their own projects to the fixed version. In a perfect world, this process would take minutes to complete, but in reality it takes much longer than that — it first requires someone to find the vulnerability, either by manually inspecting code or through pentesting , which can take months. And then comes the process of finding and notifying the maintainer and waiting for them to roll out a fix.
GitHub’s new code-scanning functionality is a static application security testing (SAST) tool that works by transforming code into a queryable format, then looking for vulnerability patterns. It automatically identifies vulnerabilities and errors in code changes in real time, flagging them to the developer before the code goes anywhere near production.
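To make the idea concrete, here is a minimal sketch of pattern-based static analysis in the spirit the article describes: parse source code into a queryable form (here, a Python AST) and search it for known-dangerous patterns. The patterns and messages below are illustrative assumptions for this sketch, not GitHub's or CodeQL's actual rule set.

```python
import ast

# Illustrative "dangerous sink" patterns -- not a real SAST rule set.
DANGEROUS_CALLS = {"eval", "exec"}

def scan(source: str) -> list:
    """Return human-readable findings for risky calls in `source`."""
    findings = []
    tree = ast.parse(source)  # transform code into a queryable format
    for node in ast.walk(tree):
        # Look for direct calls to names in the dangerous set.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in DANGEROUS_CALLS:
                findings.append(
                    f"line {node.lineno}: call to {node.func.id}() "
                    "may execute untrusted input"
                )
    return findings

print(scan("x = eval(input())"))  # flags the eval() call
print(scan("print('hi')"))        # clean code yields no findings
```

Real SAST engines like CodeQL go much further — tracking data flow from untrusted sources to dangerous sinks across files — but the core loop is the same: a structured representation of the code, queried for patterns.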
Above: GitHub: Vulnerability found
Fixes
Data suggests that only 15% of vulnerabilities are fixed one week after discovery, a figure that rises to nearly 30% within a month and 45% after three months. According to GitHub, during its beta phase it scanned more than 12,000 repositories more than 1 million times, unearthing 20,000 security issues in the process. Crucially, the company said that developers and maintainers fixed 72% of these code errors within 30 days.
There are other third-party tools out there already designed to help developers find faults in their code. London-based Snyk, which recently raised $200 million at a $2.6 billion valuation, targets developers with an AI-powered platform that helps them identify and fix flaws in their open source code.
This helps to highlight how automation is playing an increasingly big role in not only scaling security operations, but also plugging the cybersecurity skills gap — GitHub’s new code-scanning smarts go some way toward freeing up security researchers to focus on other mission-critical work. Many vulnerabilities share common attributes at their roots, and GitHub now promises to find all variations of these errors automatically, enabling security researchers to hunt for entirely new classes of vulnerabilities. Moreover, it does so as a native toolset baked directly into GitHub.
GitHub’s code scanning hits general availability today, and it is free to use for all public repositories. Private repositories can gain access to the feature through a GitHub Enterprise subscription.
"
|
4,054 | 2,020 |
"Fnatic Anda Seat review: Gaming comfort | VentureBeat"
|
"https://venturebeat.com/business/fnatic-anda-seat-review-gaming-comfort"
|
"Review: Fnatic Anda Seat review: Gaming comfort
Anda Seat Fnatic Edition
If you put the word “gaming” in front of something enough, I’m going to end up reviewing it. This is what I have learned, and this is how I ended up with the Fnatic Anda Seat. This gaming seat looks a lot like the trendy and sporty office chair that streamers have made popular over the last few years. And I’m impressed with the Fnatic Anda’s comfort and features to the point that I can see why they’re so popular.
The Fnatic Anda gaming chair is available now from Anda’s website for approximately $500 (£400). As the name suggests, it features the branding of the Fnatic esports organization. Anda embroidered the “Fnatic” name along the sides of the headrest and on the lumbar-support pillow. But otherwise, this chair is all-business, and it’s good at what it does.
The Fnatic Anda Seat is comfortable and easy to adjust
I won’t pretend that I’m some office furniture expert. I’ve used a well-worn cheap desk chair for five years, and — unsurprisingly — the Anda Seat is better than it. But I’m a human with a body like you, and I’ll try to convey my experience using the Anda.
The first thing I noticed is that the chair’s material was both soft and supportive. I didn’t feel like my body was pressing into metal, which is a sensation I often had with the seat I got from Office Depot. But at the same time, the chair has a firmness that encourages a better posture. I’m having an easier time sitting up straight in this chair, which is something I really notice and appreciate.
I’m also loving the Anda’s leather material. Even during long sitting sessions in shorts, the chair stayed cool. It also never stuck to my skin.
The pleasant experience also extends to the ease of adjustment. You’ll find a number of levers and dials on the Anda, and they’re all easy to reach, with the exception of the rocker knob. The recline lever and the height adjuster are simple to grab.
Above: It’s a big seat if you’re only 30 pounds.
It’s also heavy and durable
The other thing I noticed immediately was the Anda Seat’s mass. It is a heavy beast. This is mostly a good thing, but it does mean that you might struggle to move the chair around. Also, if you try to grab the seat by the armrests, they can feel flimsy as you’re more likely to adjust their angle than to actually roll the seat — especially on carpet.
But the heft makes the chair feel stable and durable. Anda uses a lot of steel in its frame, which should ensure it can withstand years of use. But it also makes it so that even when I lay back with my 230-pound 6-foot body, the chair doesn’t feel tipsy. It stays right in place. I can even stick my arms or feet out, and the seat never feels like I’m really challenging its center of gravity.
The result here is that I feel like I don’t have to be careful with the chair. If I want to lean back and take a nap, this chair will ensure I won’t have to worry about tipping over. It’s also easily comfortable enough for falling asleep, which was dangerous during my all-night RTX 3080 review.
Is it actually good for gaming?
I think that the chair does earn that “gaming chair” designation. This is marketing, but I don’t think it’s marketing alone. Gamers are a demanding audience, and they’ll speak up if something doesn’t meet their needs.
With that in mind, I find that the Anda Seat does a fine job of fitting into what I need as a gamer — especially on PC. It’s tall enough that I can sit up at my desk with my arms down at my side. I haven’t had proper posture like this while using a mouse and keyboard maybe ever.
I’m also able to scoot the seat in close to the desk thanks to the very adjustable armrests.
Then, when it is time to grab a controller, the chair reclines into an equally comfortable laid-back position, and I’m enjoying that just as much.
The Fnatic Anda Seat is available now for $500. Anda provided a sample unit to GamesBeat for the purpose of this review.
GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it.
"
|
4,055 | 2,020 |
"ExamSoft's remote bar exam sparks privacy and facial recognition concerns | VentureBeat"
|
"https://venturebeat.com/business/examsofts-remote-bar-exam-sparks-privacy-and-facial-recognition-concerns"
|
"Analysis: ExamSoft’s remote bar exam sparks privacy and facial recognition concerns
Kiana Caton in front of the computer and desk light she plans to use for the California Bar exam October 5-6
Sometimes the light Kiana Caton is forced to use gives her a headache. On top of common concerns that come with taking a state bar exam — like passing the test — Caton has to deal with challenges presented by facial recognition technology. She’s a Black woman, and facial recognition tech has a well-documented history of misidentifying women with melanin. Analysis by the federal government and independent research like the Gender Shades project have proved this repeatedly.
The European Conference on Computer Vision also recently found algorithms don’t work as well on Black women as they do on other people.
Ok @ExamSoft support told me to “sit directly in front of a lighting source such as a lamp.” I’m receiving the same issue preventing me from completing the NY UBE mock exam. Facial recognition technology is racist.
@DiplomaPriv4All do y’all think I have “adequate lighting”? pic.twitter.com/7tFdwfpyHB — Alivardi Khan (@uhreeb) September 11, 2020
To ensure her skin tone doesn’t lead ExamSoft’s remote test monitoring software to raise red flags, Caton will keep a light shining directly on her face throughout the two-day process, a tactic she picked up from fellow law school graduates with dark skin.
“If someone has to shine a light in their face, they’re probably going to get a headache. Or if they have sensitivity to light or are susceptible to migraines or anything like that, it’s going to affect their performance, and that’s something I’m really concerned about,” Caton said.
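Lighting matters to these systems because many face-detection pipelines degrade on underexposed frames, and some run a simple brightness pre-check before attempting detection at all. The sketch below illustrates that idea with a mean-luminance check; the threshold, helper function, and frames are illustrative assumptions for this example, not ExamSoft's actual code or criteria.

```python
def mean_luminance(pixels):
    """Average brightness of an image given as rows of (r, g, b) tuples."""
    total, count = 0.0, 0
    for row in pixels:
        for r, g, b in row:
            # ITU-R BT.601 luma weights for perceived brightness
            total += 0.299 * r + 0.587 * g + 0.114 * b
            count += 1
    return total / count

# Two tiny synthetic 4x4 "webcam frames" for illustration.
dim_frame = [[(30, 25, 20)] * 4 for _ in range(4)]      # underexposed
bright_frame = [[(180, 170, 160)] * 4 for _ in range(4)]  # well lit

# Hypothetical cutoff: frames darker than this might be flagged
# before face detection even runs.
THRESHOLD = 60

for name, frame in [("dim", dim_frame), ("bright", bright_frame)]:
    if mean_luminance(frame) < THRESHOLD:
        print(name, "too dark, may fail detection")
    else:
        print(name, "ok")
```

A check like this is blunt: it treats a dark frame and a dark-skinned face in normal lighting the same way, which is one mechanism by which such systems end up demanding extra illumination from some test takers and not others.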
Next week, law school graduates from 20 states — including Caton, who is in California — will simultaneously take the bar exam from remote locations using ExamSoft. In order to participate, they must first surrender biometric data like an iris scan or facial scan.
To administer the test, ExamSoft will collect and store the biometric data of a generation of legal professionals. More than 30,000 law school graduates will participate, a National Conference of Bar Examiners (NCBE) spokesperson told VentureBeat. This appears to be the largest attempt to remotely administer state bar exams in U.S. history.
Delays caused by COVID-19 mean job offers previously extended to law school graduates may already have passed their intended start dates. For Caton and many others taking the test next week, a job may hang in the balance.
Above: The light Caton will use during the California bar exam next week. Data privacy concerns led her to buy a new laptop for the test, also pictured here.
Security is also in question: On July 27, remote proctoring software company ProctorU was hacked, a data breach that exposed the personally identifiable information of 400,000 people.
A day later, a remote Michigan state bar exam administered by ExamSoft was hit with a distributed-denial-of-service (DDOS) attack. Investigations into the reported ExamSoft attack are ongoing.
The NCBE, which developed state bar exams, requires all remote testing to be conducted via proctoring software. An NCBE spokesperson told VentureBeat that all jurisdictions administering the remote exam will use ExamSoft. But concerns are still swirling around the software.
“I don’t understand how we can possibly be judged by these people in our own competency when it kind of seems like they need to worry about whether they can actually do this exam. It’s less than a week away now, and people are having tons of issues,” Caton said. “So I’m just really concerned about this exam, and I’m wondering if it’s going to go forward like it’s supposed to and whether or not I’m going to be delayed any further in starting my job.”
On top of data privacy and racial bias concerns, Caton and other bar exam applicants have to worry about whether ExamSoft will answer the phone if things go wrong. Caton said her mock exam went off without a hitch, but people who took a test in the state of New York earlier this month reported long delays when they called ExamSoft after encountering issues.
Use of AI-driven remote testing software has gone up during the pandemic, despite continued concerns about surveillance, biometric data collection, and bias. Each state bar association and state supreme court has dealt with pandemic-induced uncertainty in different ways. In Texas, legal professionals took exams in hotel rooms as monitors walked from room to room to check on them. In the state of Washington, the bar association is waiving bar exams altogether. Concerns over racial discrimination, tech issues, and historically disruptive wildfires led deans of 15 major California law schools to request the state supreme court make the bar exam an open book test.
After ExamSoft was chosen to administer the test in California, the ACLU shared its opposition to the use of remote testing software that uses facial recognition.
In a letter to the California Supreme Court, the organization argued that these conditions have the potential to exacerbate historical inequity in the legal profession.
“Given the invasive and discriminatory nature of facial recognition technology, the proposed use of software that collects biometric data for the administration of the bar examination would be antithetical to the State Bar’s mission of protecting the public and increasing access and inclusion in the legal system,” the letter reads. “In an unprecedented moment that requires innovative, equitable pathways to attorney licensure due to the myriad challenges posed by COVID-19 and the ongoing movement for racial justice, the deployment of facial recognition threatens to further entrench racial and economic inequities that have long created barriers to the legal profession.” Facial recognition flags raised while a person is taking a bar exam will not halt a test, an ExamSoft spokesperson told VentureBeat. But because facial recognition tech is less likely to recognize Caton, it may raise red flags that cause human reviewers to assess her exam. Similar issues have been reported with other exam software, like that from Proctorio.
ExamSoft declined to share the name of the company that created the facial recognition technology it uses.
State Bar of California interim executive director Donna Hershkowitz responded to the ACLU letter last week. Espousing a commitment to an inclusive legal profession, she said any facial recognition issues flagged by the software will be analyzed by four human reviewers, part of a series of steps that she believes will eliminate facial recognition bias.
However, in addition to concerns about facial recognition technology, a participant leaving the frame of view at any time or the occurrence of sounds — including the sounds of voices — can also trigger ExamSoft flags that require humans to review a bar exam. Nearby sounds could prove a potentially common problem, particularly as the pandemic has kept many households working and schooling from home. A survey of Maryland bar exam applicants shared with VentureBeat found that more than 40% lack access to a quiet place where they can take the bar exam without interruption.
The Electronic Frontier Foundation (EFF) raised concerns about the perpetuation of inequity in a letter to the California Supreme Court last month. Additionally, the EFF expressed concerns about California Consumer Privacy Protection Act (CCPA) violations and warned that the State Bar of California is making data collected by ExamSoft alluring to hackers.
“It is well known that storing large collections of private or personally identifiable information (PII) creates the risk of a security breach, and ExamSoft’s retention of data is no different,” the EFF letter reads.
As a way to further reduce risk, the State Bar of California requested that ExamSoft delete all biometric data associated with the test after human reviewers have sifted through instances flagged by predictive AI. Exactly when ExamSoft will be required to delete biometric data it gathers while administering the test is unclear, but it could be at least a few months after the test. In a letter sent Friday, California Supreme Court clerk and executive officer Jorge Navarrete said the State Bar of California has 60 days to submit a timetable for when ExamSoft will delete all collected biometric data.
In a separate discrimination matter related to the California bar exam, a number of recent law school graduates have filed lawsuits against the NCBE and State Bar of California alleging state and federal disability law violations. People with disabilities have been told they must take the test in person at designated testing locations. In response to the suit, Hershkowitz said in a statement shared with VentureBeat that appropriate COVID-19 measures will be taken to safeguard in-person administration of the test for people with disabilities and declared that “there is no unlawful discrimination of the October bar examination.”
ExamSoft hack in Michigan
Concerns about privacy and other issues are not without precedent. On July 28, about an hour into the bar exam in the state of Michigan, some test takers experienced login issues. In a statement shared on Twitter later that day , ExamSoft said its login process was targeted by a DDOS attack. In the statement, the company said this marked the first time ExamSoft had experienced such an attack on the network level and that no data was compromised during the attack.
The Michigan Board of Law Examiners and ExamSoft emphasized that all bar exam applicants were able to complete the exam that day and test takers impacted by login delays were allotted extra time.
A day later, Michigan Supreme Court Chief Justice Bridget McCormack ordered an inquiry into the login issues experienced by some Michigan test takers. Results of that investigation are still outstanding, a Michigan Supreme Court spokesperson told VentureBeat. About a week after the incident, ExamSoft asked the Department of Homeland Security and FBI to open investigations. ExamSoft and its network provider have put additional redundancies in place to make sure these kinds of delays don’t happen again, a company spokesperson told VentureBeat in an email.
Despite ExamSoft’s assurances, law school graduates continue to express security concerns. More than 50 state bar applicants taking the test next week in Pennsylvania requested a fraud investigation earlier this month , claiming they experienced an uptick in compromised passwords after ExamSoft downloads.
Under the guidance of the NCBE, ExamSoft will administer all remote October bar exams. But some remote test monitoring software companies were not interested in participating. Speaking with the American Bar Association for a recent article , ExamSoft cofounder and current Extegrity CEO Greg Sarab said his company was one of three that backed out of remote proctoring services for state bar exams. He feels it’s risky to use the technology at this point, as evidenced by inconsistent performance in mock and live tests. Sarab also expressed concern about risks related to reliable internet connections and lack of time for companies like ExamSoft to test their technology.
In response to critics who called remote bar exam testing too risky, an ExamSoft spokesperson said, “We have no way to speak about how other vendors feel about their software or the quality, stability, or resilience of their products,” adding that the company has built trust among thousands of clients over its 22-year history.
Diploma privilege, provisional licensing, and supervised practice
In an attempt to address unprecedented logistical challenges and keep the legal profession moving forward during the pandemic, state bar associations are even beginning to consider ways law school graduates can practice law with temporary licenses or do away with bar exams entirely.
In California, for example, the bar exam passing grade was permanently lowered in July from 1440 to 1390.
Last week, the State Bar of California Board of Trustees approved a provisional license agreement that will allow law school graduates to practice law until 2022 without taking the bar exam. The board also directed its provisional licensing working group to consider whether it would recommend these individuals be admitted to the state bar if they complete a set number of hours of supervised practice as provisionally licensed lawyers.
Caton said she can’t imagine that any job offer with a rate of pay promised to a licensed lawyer will be offered to a provisionally licensed attorney.
“It’s basically a glorified internship or something,” Caton said of provisional licensing. “So I’ve never understood the purpose. I don’t think that my opinion is anything novel or unusual. I think a lot of people have the same question.” The District of Columbia Court of Appeals took yet another approach, adopting a supervised practice program last week that allows graduates of accredited universities to receive a license to practice law if they work under the supervision of a more senior attorney for three years.
Diploma privilege means people can get a license to practice law without taking the bar exam as long as they meet certain requirements, like graduating from an American Bar Association accredited law school. As state bar associations began delaying tests and implementing remote testing options last spring, groups like United for Diploma Privilege and Diploma Privilege for Maryland sprang up to encourage more state bars to adopt diploma privilege, particularly to ensure access for groups like low-income applicants and people with disabilities. This year, states like Washington, Utah, and Louisiana have adopted diploma privilege.
As testing and licensing boards grapple with the need to administer certification tests during the pandemic, the kinds of logistical and privacy hurdles law school graduates encounter are impacting professionals across multiple industries.
In some instances, engineers have had to drive across state lines to complete a certification test. Other individuals have had to choose whether to risk their lives by going to an in-person certification that could lead to higher wages or better opportunities amid an economic recession.
During the pandemic, remote learning has revealed ways students can get left behind as issues like lack of broadband access impact their education. Remote test monitoring software that uses facial recognition and debates around licensing requirements for lawyers reveal additional inequalities, as well as surveillance and data privacy challenges.
Caton said she’s proud of this generation of legal professionals for voicing concerns, but she wonders why more politicians and bar-certified lawyers aren’t speaking up on their behalf. Being a bar exam applicant in the current environment, she said, can give law school graduates the impression that the legal community isn’t interested in protecting them because they’re not quite lawyers and yet can no longer be considered members of the general public.
“I can’t quite wrap my head around how this could possibly be the state of things right now, and it’s a little concerning also that we haven’t had too many attorneys or legislators standing up for us,” she said. “It feels like we’re being treated like we’re expendable, like our rights and our data and privacy are expendable, and so I think that’s where we’re at right now.” Update: This article was cited by California ACLU organizations in a letter sent to the California Supreme Court on October 1, 2020 in opposition to remote proctoring software use for the California bar exam.
"
|
4,056 | 2,020 |
"EdCast and edX Give 100,000+ Small Businesses Free Access to Upskilling and Reskilling Tools to Navigate Economic Volatility | VentureBeat"
|
"https://venturebeat.com/business/edcast-and-edx-give-100000-small-businesses-free-access-to-upskilling-and-reskilling-tools-to-navigate-economic-volatility"
|
In partnership with Adecco, the International Chamber of Commerce and edX, EdCast invests $315 million in workforce education for small and mid-sized businesses

MOUNTAIN VIEW, Calif.–(BUSINESS WIRE)–September 30, 2020– EdCast announced today a commitment to helping small and mid-sized businesses (SMBs) unlock organizational productivity and innovation by waiving access fees to Spark by EdCast that features access to edX courses and programs for one year. This $315M initiative is made possible by EdCast’s SMB partners — Adecco, the International Chamber of Commerce, and edX — in an effort to help SMBs overcome challenges brought on by this year’s pandemic-related economic volatility.
“Small and mid-sized businesses are struggling to adapt to the new normal, facing challenges that range from health and safety to financial to employee management,” said Karl Mehta, CEO and Founder of EdCast. “By granting free access to a suite of tools that enable remote work and collaboration, EdCast and its SMB-oriented partners are pitching in to support the small business community, a critical driver of the nation’s economic success.” Spark, EdCast’s market-leading Learning Experience Platform (LXP) for SMBs, enables remote upskilling, training and learning programs. SMB employees will have access to thousands of in-demand skills and subjects associated with formal badges and certifications they can earn and share on topics ranging from cybersecurity to mindfulness. Available online and as a mobile application, Spark can be accessed on all major digital platforms — Google Search, O365, MS Teams, GSuite, FB Workplace, Salesforce — and has been shown to drive a 20% increase in productivity. Additionally, SMBs can choose to offer their employees the ability to earn innovative credentials through edX, including MicroMasters® program credentials, earned after completing a series of graduate level courses from top universities that deliver deep learning and in-demand skills to employers.
This initiative is part of an ongoing collaboration that EdCast and other corporate partners launched in 2020 called the Future of Work Alliance.
Stemming from conversations with global business leaders at the World Economic Forum in Davos, the Alliance seeks to build new public-private partnerships to coalesce the world’s most innovative companies for change.
EdCast’s partners in this $315 million initiative have made important and significant commitments to SMBs as well, including: Adecco: Small businesses play a critical role in the health of economies around the world. Just in the U.S. alone, SMBs provide jobs for roughly half of the nation’s private workforce. Today, many small business owners have been forced to do more with less, making it essential to get the most out of every dollar, every resource and, most important, every employee. At Adecco, enabling small businesses through our portfolio of companies remains a key priority of our executives.
Marcus Sawyerr, Adecco’s Global Head of Digital Partnerships, said: “Many small businesses are not able to offer the formal training and development programs provided by larger companies to their employees. This is why Adecco partnered with EdCast to bring Spark–EdCast’s market-leading remote work platform for collaboration and lifelong learning–to SMBs and their employees. Spark provides a personalized daily feed that includes videos, courses, articles and more, all tailored toward users’ learning and training needs to ensure they stay competitive.” International Chamber of Commerce: As the institutional representative of more than 45 million companies in over 100 countries, ICC is committed to saving lives and livelihoods by supporting the survival and rapid recovery of small and medium-sized enterprises (SMEs) post-pandemic in line with the global Save Our SMEs campaign. Sharing ICC’s expertise via the Spark platform will enable continued professional development for more workforces and allow struggling SMEs to continue bridging skills gaps and futureproofing workforces to help weather the COVID-19 storm.
John W.H. Denton AO, ICC Secretary General, said: “Delivering our quality, professional education from a trusted source is in line with ICC commitments to provide concrete tools to help save our SMEs. We are delighted to partner with EdCast to make ICC expertise even more accessible to SMEs at this crucial time, when most micro- and small enterprises do not have enough working capital to keep business going, let alone invest in training that can help workforces survive and thrive.” edX: edX is the trusted platform for education and learning. Founded by Harvard and MIT, edX is home to more than 30 million learners, the majority of top-ranked universities in the world and industry-leading companies. As a global nonprofit, edX is transforming traditional education, removing the barriers of cost, location and access. Fulfilling the demand for people to learn on their own terms, edX is reimagining the possibilities of education, providing the highest-quality, stackable learning experiences including the groundbreaking MicroMasters® programs. Supporting learners at every stage, whether entering the job market, changing fields, seeking a promotion or exploring new interests, edX delivers courses for curious minds on topics ranging from data and computer science to leadership and communications. edX is where people go to learn.
Anant Agarwal, edX CEO and MIT Professor, said: “edX is excited to collaborate with EdCast to help SMBs upskill, reskill and train their workforces by providing access to the highest-quality content available from top institutions and in subject areas relevant to their business today and in the future. We are committed to helping all companies come out of this period of economic crisis with workforces ready to tackle the future.” For more details on EdCast’s Spark for SMBs offering, please visit https://spark.edcast.com.
About EdCast EdCast is the AI-Powered Knowledge Cloud solution for unified discovery, personalized learning and knowledge management across enterprises and SMBs, including work teams that are more remote and highly distributed than ever before. Its award-winning platform is used internationally by organizations ranging from large Global 2000 and Fortune 500 companies to small businesses with fewer than 25 employees. EdCast’s offerings include its Learning Experience Platform (LXP), Spark for SMBs, the EdCast Marketplace and MyGuide. For additional information, visit www.edcast.com or follow on Twitter @EdCast.
View source version on businesswire.com: https://www.businesswire.com/news/home/20200930005284/en/ Philip Levinson, Vice President of Marketing, EdCast [email protected]
"
|
4,057 | 2,020 |
"D-Wave's 5,000-qubit quantum computing platform handles 1 million variables | VentureBeat"
|
"https://venturebeat.com/business/d-wave-advantage-quantum-computing-5000-qubits-1-million-variables"
|
D-Wave today launched its next-generation quantum computing platform, Advantage, available via its Leap quantum cloud service.
The company calls Advantage “the first quantum computer built for business.” In that vein, D-Wave today also debuted Launch, a jump-start program for businesses that want to begin building hybrid quantum applications.
“The Advantage quantum computer is the first quantum computer designed and developed from the ground up to support business applications,” D-Wave CEO Alan Baratz told VentureBeat. “We engineered it to be able to deal with large, complex commercial applications and to be able to support the running of those applications in production environments. There is no other quantum computer anywhere in the world that can solve problems at the scale and complexity that this quantum computer can solve problems. It really is the only one that you can run real business applications on. The other quantum computers are primarily prototypes. You can do experimentation, run small proofs of concept, but none of them can support applications at the scale that we can.” Quantum computing leverages qubits (unlike bits that can only be in a state of 0 or 1, qubits can also be in a superposition of the two) to perform computations that would be much more difficult, or simply not feasible, for a classical computer. Based in Burnaby, Canada, D-Wave was the first company to sell commercial quantum computers, which are built to use quantum annealing.
But D-Wave doesn’t sell quantum computers anymore. Advantage and its over 5,000 qubits (up from 2,000 in the company’s 2000Q system) are only available via the cloud. (That means through Leap or a partner like Amazon Braket.)

5,000+ qubits, 15-way qubit connectivity

If you’re confused by the “over 5,000 qubits” part, you’re not alone. More qubits typically means more potential for building commercial quantum applications. But D-Wave isn’t giving a specific qubit count for Advantage because the exact number varies between systems.
“Essentially, D-Wave is guaranteeing the availability of 5,000 qubits to Leap users using Advantage,” a D-Wave spokesperson told VentureBeat. “The actual specific number of qubits varies from chip to chip in each Advantage system. Some of the chips have significantly more than 5,000 qubits, and others are a bit closer to 5,000. But bottom line — anyone using Leap will have full access to at least 5,000 qubits.” Advantage also promises 15-way qubit connectivity, thanks to a new chip topology, Pegasus, which D-Wave detailed back in February 2019.
(Pegasus’ predecessor, Chimera , offered six connected qubits.) Having each qubit connected to 15 other qubits instead of six translates to 2.5 times more connectivity, which in turn enables the embedding of larger and more complex problems with fewer physical qubits.
“The combination of the number of qubits and the connectivity between those qubits determines how large a problem you can solve natively on the quantum computer,” Baratz said. “With the 2,000-qubit processor, we could natively solve problems within 100- to 200-variable range. With the Advantage quantum computer, having twice as many qubits and twice as much connectivity, we can solve problems more in the 600- to 800-variable range. As we’ve looked at different types of problems, and done some rough calculations, it comes out to generally we can solve problems about 2.6 times as large on the Advantage system as what we could have solved on the 2000-qubit processor. But that should not be mistaken with the size problem you can solve using the hybrid solver backed up by the Advantage quantum computer.”

1 million variables, same problem types

D-Wave today also announced its expanded hybrid solver service will be able to handle problems with up to 1 million variables (up from 10,000 variables). It will be generally available in Leap on October 8. The discrete quadratic model (DQM) solver is supposed to let businesses and developers apply hybrid quantum computing to new problem classes. Instead of accepting problems with only binary variables (0 or 1), the DQM solver uses other variable sets (integers from 1 to 500, colors, etc.), expanding the types of problems that can run on Advantage. D-Wave asserts that Advantage and DQM together will let businesses “run performant, real-time, hybrid quantum applications for the first time.” Put another way, 1 million variables means tackling large-scale, business-critical problems. “Now, with the Advantage system and the enhancements to the hybrid solver service, we’ll be able to solve problems with up to 1 million variables,” Baratz said.
“That means truly able to solve production-scale commercial applications.” Depending on the technology they are built on, different quantum computers tend to be better at solving different problems. D-Wave has long said its quantum computers are good at solving optimization problems, “and most business problems are optimization problems,” Baratz argues.
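To give a flavor of the optimization problems in question, here is a purely classical simulated-annealing sketch over a tiny QUBO (quadratic unconstrained binary optimization) objective, the standard problem form for annealing-style machines. It only illustrates the shape of the search; it is not how the quantum hardware works and does not use D-Wave’s SDK.

```python
import math
import random

# Tiny QUBO: minimize sum over Q of coeff * x[i] * x[j] with x binary.
# Diagonal entries act as linear terms. The global minimum here is
# x = [1, 0, 1] with energy -2.
Q = {(0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,
     (0, 1): 2.0, (1, 2): 2.0}

def energy(x):
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

def anneal(num_vars=3, steps=2000, t_start=2.0, t_end=0.05, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(num_vars)]
    e = energy(x)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # cooling schedule
        i = rng.randrange(num_vars)
        x[i] ^= 1                    # propose flipping one bit
        e_new = energy(x)
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / t):
            e = e_new                # accept the move
        else:
            x[i] ^= 1                # reject: undo the flip
    return x, e

x, e = anneal()
print(x, e)
```

The high starting temperature lets the search escape poor local minima early; as the temperature drops, uphill moves become rare and the state settles into a low-energy assignment.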
Advantage isn’t going to be able to solve different types of problems, compared to its 2000Q predecessor. But coupled with DQM and the sheer number of variables, it may still be significantly more useful to businesses.
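To make the discrete-variable idea concrete, here is a stdlib-only toy (not D-Wave’s Ocean SDK or its DQM solver): a small graph-coloring objective where each variable takes one of three color values directly, solved by brute force because the instance is tiny.

```python
from itertools import product

# Toy graph-coloring objective: 4 nodes, 3 colors, penalize edges whose
# endpoints share a color. Each variable is a discrete choice from
# {0, 1, 2} directly -- a binary (QUBO) encoding would instead need
# num_nodes * num_colors 0/1 variables plus one-hot constraints.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
num_nodes, num_colors = 4, 3

def penalty(assignment):
    # One penalty unit per edge whose endpoints got the same color.
    return sum(1 for u, v in edges if assignment[u] == assignment[v])

# Small enough to brute-force: 3**4 = 81 candidate assignments.
best = min(product(range(num_colors), repeat=num_nodes), key=penalty)
print(best, penalty(best))
```

A zero-penalty assignment is a proper coloring; at the scale of a hybrid solver, the same objective would be handed off rather than enumerated.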
“The architecture is the same,” Baratz confirmed. “Both of these quantum computers are annealing quantum computers. And so the class of problems, the types of problems they can solve, are the same. It’s just at a different scale and complexity. The 2000-qubit processor just couldn’t solve these problems at the scale that our customers need to solve them in order for them to impact their business operations.”

D-Wave Launch

In March, D-Wave made its quantum computers available for free to coronavirus researchers and developers.
“Through that process what we learned was that while we have really good software, really good tools, really good training, developers and businesses still need help,” Baratz told VentureBeat. “Help understanding what are the best problems that they can benefit from the quantum computer and how to best formulate those problems to get the most out of the quantum computer.” D-Wave Launch will thus make the company’s application experts and a set of handpicked partner companies available to its customers. Launch aims to help anyone understand how to best leverage D-Wave’s quantum systems to support their business. Fill out a form on D-Wave’s website and you will be triaged to determine who might be best able to offer guidance.
“In order to actually do anything with the quantum processor, you do need to become a Leap customer,” Baratz said. “But you don’t have to first become a Leap customer. We’re perfectly happy to engage with you to help you understand the benefits of the quantum computer and how to use it.” D-Wave will make available “about 10” of its own employees as part of Launch, plus partners.
"
|
4,058 | 2,020 |
"CuriosityStream and Software Acquisition Group to Participate in SPACInsider Webinar on October 5th at 2pm ET | VentureBeat"
|
"https://venturebeat.com/business/curiositystream-and-software-acquisition-group-to-participate-in-spacinsider-webinar-on-october-5th-at-2pm-et"
|
SILVER SPRING, Md.–(BUSINESS WIRE)–September 29, 2020– CuriosityStream Inc. (“CuriosityStream”), a leading global factual entertainment company, which has entered into a definitive merger agreement with Software Acquisition Group, Inc. (NASDAQ: SAQN) (“Software Acquisition Group”), a special purpose acquisition company (SPAC), today announced that the two companies will participate in a webinar hosted by SPACInsider on October 5, 2020 at 2:00 p.m. ET.
This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20200929005706/en/ Learn more and register for the event at: https://zoom.us/webinar/register/9516010379919/WN_8S-Tp88nR2yyJX_mdrYg8Q Participants in the webinar will include: John Hendricks, Founder and Chairman, CuriosityStream Clint Stinchcomb, President and CEO, CuriosityStream Jason Eustace, Chief Financial Officer, CuriosityStream Devin Emery, Chief Product Officer and EVP of Content Strategy, CuriosityStream Jonathan Huberman, Chairman and CEO, Software Acquisition Group Zack Silver – Equity Research Analyst, B. Riley Securities With approximately 13 million paying subscribers in over 175 countries, thousands of original and licensed documentaries, and a history of doubling annual revenue, CuriosityStream is poised to accelerate growth with new cash funding resulting from the merger as it drives adoption across global media markets.
In addition to offering consumers the opportunity to subscribe to CuriosityStream directly or through partner distributors on an a la carte basis, CuriosityStream is also undergoing rapid distribution growth due to a flexible “bundled” partnership plan through which distributors can deliver CuriosityStream’s SVOD service and CuriosityStream’s customized linear channels to a significant customer segment.
Including DTC subscriptions and bundled distribution, CuriosityStream has a scalable and recurring multi-channel revenue stack also bolstered by Fortune 500 CSR and association partnerships, traditional advertising sales and multi-platform brand partnerships, and content licensing to major networks, studios, and distributors.
CuriosityStream features more than 3,000 titles including over 900 exclusive originals and has embarked on an original production and content acquisition plan that it projects will achieve a streaming library of more than 11,000 premium factual titles within five years.
At the closing of the transaction the combined company will be well capitalized with zero debt and an estimated $180 million of cash on the balance sheet (assuming no redemptions of Software Acquisition Group stock). John Hendricks, founder of the Discovery Channel and former Chairman of Discovery Communications, will remain Chairman of the Board as well as the combined company’s largest shareholder. CuriosityStream will continue to operate under the current management team led by Clint Stinchcomb, President and CEO, a media executive with more than 25 years’ experience launching networks and developing and monetizing content.
In connection with signing the merger agreement, Software Acquisition Group secured a $25 million PIPE investment at $10.00 per share to support the business combination. The PIPE investment includes significant commitments from new investors as well as existing investors in CuriosityStream, insiders of Software Acquisition Group and existing Software Acquisition Group investors.
About CuriosityStream Launched by media visionary John Hendricks, CuriosityStream is a leading global independent factual media company. Our documentary series and features cover every topic from space exploration to adventure to the secret life of pets, empowering viewers of all ages to fuel their passions and explore new ones. With thousands of titles, many in Ultra HD 4K, including exclusive originals, CuriosityStream features stunning visuals and unrivaled storytelling to demystify science, nature, history, technology, society, and lifestyle. CuriosityStream programming is available worldwide to watch on TV, desktop, mobile and tablets. Find us on Roku, Apple TV Channels and Apple TV, Xbox One, Amazon Fire TV, Google Chromecast, iOS and Android, as well as Amazon Prime Video Channels, YouTube TV, Sling TV, DISH, Comcast Xfinity on Demand, Cox Communications, Altice USA, Suddenlink, T- Mobile, Frndly TV, Vidgo, Sony, LG, Samsung and VIZIO smart TVs, Liberty Global, Com Hem, MultiChoice, StarHub TV, Totalplay, Millicom, Okko, Gazprom and other global distribution partners and platforms. For more information, visit CuriosityStream.com.
About Software Acquisition Group, Inc. (NASDAQ: SAQN) Software Acquisition Group is a blank check company formed for the purpose of effecting a merger, capital stock exchange, asset acquisition, stock purchase, reorganization or similar business combination with one or more businesses. The Company is led by industry veterans Chairman and Chief Executive Officer, Jonathan Huberman, and Vice President of Acquisitions, Mike Nikzad. In addition to Messrs. Huberman and Nikzad, the Board of Directors includes Andrew Nikou, Stephanie Davis, Peter Diamandis, Steven Guggenheimer and Matt Olton.
About SPACInsider SPACInsider is a trusted intelligence and analysis provider specializing in the Special Purpose Acquisition Corporation (SPAC) asset class. SPACInsider’s mission is to be the best-in-class source for SPAC information benefiting investors, SPAC teams, bankers and service providers. The company provides comprehensive data covering the SPAC transaction universe, along with detailed analysis and coverage of IPO and acquisition events. SPACInsider is led by Kristi Marvin, a career investment banker with over 15 years of experience in the capital markets, who began working on SPACs in 2005.
Additional Information about the Business Combination and Where to Find It This communication is being made in respect of the proposed merger transaction involving Software Acquisition Group and CuriosityStream. Software Acquisition Group has filed a definitive proxy statement on Schedule 14A with the SEC and will file other documents with the SEC regarding the proposed transaction. A copy of the definitive proxy statement will also be sent to the stockholders of Software Acquisition Group seeking any required stockholder approval. Before making any voting or investment decision, investors and security holders of Software Acquisition Group are urged to carefully read the entire proxy statement and any other relevant documents filed with the SEC, as well as any amendments or supplements to these documents, because they will contain important information about the proposed transaction. The documents filed by Software Acquisition Group with the SEC may be obtained free of charge at the SEC’s website at www.sec.gov.
Participants in the Solicitation Software Acquisition Group, CuriosityStream and certain of their respective directors and executive officers may be deemed to be participants in the solicitation of proxies from the stockholders of Software Acquisition Group, with respect to the proposed business combination. Information regarding Software Acquisition Group’s directors and executive officers is contained in Software Acquisition Group’s Annual Report on Form 10-K for the year ended December 31, 2019, its Quarterly Report on Form 10-Q for the quarterly period ended June 30, 2020 and its other documents, which are filed with the SEC. Additional information regarding the interests of those participants and other persons who may be deemed participants in the transaction, including the directors and executive officers of CuriosityStream, may be obtained by reading the proxy statement and other relevant documents filed with the SEC when they become available. Free copies of these documents may be obtained as described under “Additional Information about the Business Combination and Where to Find It.” No Offer or Solicitation This press release is for informational purposes only and does not constitute an offer to sell or the solicitation of an offer to buy any securities, nor will there be any sale of securities in any jurisdiction in which such offer, solicitation or sale would be unlawful prior to registration or qualification under the securities laws of any such jurisdiction. No offering of securities shall be made except by means of a prospectus meeting the requirements of Section 10 of the U.S. Securities Act of 1933, as amended.
Forward-Looking Statements Certain statements in this press release may be considered “forward-looking statements” within the meaning of the Private Securities Litigation Reform Act of 1995 including, but not limited to, Software Acquisition Group’s and CuriosityStream’s expectations or predictions of future financial or business performance or conditions. Forward-looking statements are inherently subject to risks, uncertainties and assumptions. Generally, statements that are not historical facts, including statements concerning possible or assumed future actions, business strategies, events or results of operations, are forward-looking statements. These statements may be preceded by, followed by or include the words “believes,” “estimates,” “expects,” “projects,” “forecasts,” “may,” “will,” “should,” “seeks,” “plans,” “scheduled,” “anticipates,” “predicts” or “intends” or similar expressions. Such forward-looking statements involve risks and uncertainties that may cause actual events, results or performance to differ materially from those indicated by such statements. Certain of these risks are identified and discussed in Software Acquisition Group’s (i) Form 10-K for the year ended December 31, 2019, under “Risk Factors” in Part I, Item 1A, (ii) Form 10-Q for the quarterly period ended June 30, 2020, under “Risk Factors” in Part II, Item 1A, (iii) Definitive Proxy Statement on Schedule 14A filed with the SEC on September 22, 2020, under “Risk Factors”, and (iv)its other SEC filings. These risk factors will be important to consider in determining future results and should be reviewed in their entirety. Forward-looking statements are based on the current belief of the respective management of Software Acquisition Group and CuriosityStream, based on currently available information, as to the outcome and timing of future events, and involve factors, risks, and uncertainties that may cause actual results in future periods to differ materially from such statements. 
However, there can be no assurance that the events, results or trends identified in these forward-looking statements will occur or be achieved. Forward-looking statements speak only as of the date they are made, and neither Software Acquisition Group nor CuriosityStream is under any obligation, and each of them expressly disclaims any obligation, to update, alter or otherwise revise any forward-looking statement, whether as a result of new information, future events or otherwise, except as required by law. Readers should carefully review the statements set forth in the reports that Software Acquisition Group has filed or will file from time to time with the SEC.
In addition to factors previously disclosed in Software Acquisition Group’s reports filed with the SEC and those identified elsewhere in this communication, the following factors, among others, could cause actual results to differ materially from forward-looking statements or historical performance: (i) ability to meet the closing conditions to the merger, including approval by stockholders of Software Acquisition Group on the expected terms and schedule and the risk that any regulatory approvals required for the merger are not obtained or are obtained subject to conditions that are not anticipated; (ii) the occurrence of any event, change or other circumstance that could cause the termination of the merger agreement or a delay in the closing of the merger; (iii) the effect of the announcement or pendency of the proposed merger on CuriosityStream’s business relationships, operating results, and business generally; (iv) failure to realize the benefits expected from the proposed transaction; (v) risks that the proposed merger disrupts CuriosityStream’s current plans and operations and potential difficulties in CuriosityStream’s employee retention as a result of the proposed merger; (vi) the effects of pending and future legislation; (vii) risks related to disruption of management time from ongoing business operations due to the proposed transaction; (viii) risks related to CuriosityStream’s limited operating history; (ix) the amount of the costs, fees, expenses and other charges related to the merger; (x) risks of the internet, online commerce and media industry; (xi) the highly competitive nature of the internet, online commerce and media industry and CuriosityStream’s ability to compete therein; (xii) litigation, complaints, and/or adverse publicity; (xiii) the ability to meet Nasdaq’s listing standards following the consummation of the proposed transaction and (xiv) privacy and data protection laws, privacy or data breaches, or the loss of data. 
This communication is not intended to be all-inclusive or to contain all the information that a person may desire in considering an investment in Software Acquisition Group and is not intended to form the basis of an investment decision in Software Acquisition Group. All subsequent written and oral forward-looking statements concerning Software Acquisition Group and CuriosityStream, the proposed transaction or other matters and attributable to Software Acquisition Group and CuriosityStream or any person acting on their behalf are expressly qualified in their entirety by the cautionary statements above.
This press release and additional information about CuriosityStream can be found at https://press.curiositystream.com/ View source version on businesswire.com: https://www.businesswire.com/news/home/20200929005706/en/ Software Acquisition Group, Inc.
Jonathan Huberman Chief Executive Officer [email protected] CuriosityStream Investor Relations Denise Garcia [email protected] CuriosityStream Media Relations Vanessa Gillon [email protected] VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
4,059 | 2,020 |
"China threatens antitrust investigation of Google | VentureBeat"
|
"https://venturebeat.com/business/china-threatens-antitrust-investigation-of-google"
|
"China threatens antitrust investigation of Google
Google San Francisco office
China is preparing to launch an antitrust probe into Alphabet’s Google, looking into allegations it has leveraged the dominance of its Android mobile operating system to stifle competition, two people familiar with the matter said.
The case was proposed by telecommunications equipment giant Huawei last year and has been submitted by the country’s top market regulator to the State Council’s antitrust committee for review, they added. A decision on whether to proceed with a formal investigation may come as soon as October and could be affected by the state of China’s relationship with the United States, one of the people said.
The potential investigation follows a raft of actions by U.S. President Donald Trump’s administration to hobble Chinese tech companies, citing national security risks. This has included putting Huawei on its trade blacklist, threatening similar action for Semiconductor Manufacturing International and ordering TikTok owner ByteDance to divest the short-form video app.
It also comes as China embarks on a major revamp of its antitrust laws with proposed amendments including a dramatic increase in maximum fines and expanded criteria for judging a company’s control of a market.
A potential probe would also look at accusations that Google’s market position could cause “extreme damage” to Chinese companies like Huawei, as losing the U.S. tech giant’s support for Android-based operating systems would lead to loss of confidence and revenue, a second person said.
The sources were not authorized to speak publicly on the matter and declined to be identified. Google did not provide immediate comment, while Huawei declined to comment. Neither China’s top market regulator, the State Administration for Market Regulation, nor the State Council responded immediately to requests for comment.
Europe’s example
The U.S. trade blacklist bars Google from providing technical support to new Huawei phone models and access to Google Mobile Services, the bundle of developer services upon which most Android apps are based. Google had a temporary license that exempted it from the ban on Huawei, but it expired in August.
It was not immediately clear what Google services the potential probe would focus on. Most Chinese smartphone vendors use an open source version of the Android platform with alternatives to Google services on their domestic phones. Google’s search, email, and other services are blocked in China.
Huawei has said it missed its 2019 revenue target by $12 billion, which company officials have attributed to U.S. actions against it. Seeking to overcome its reliance on Google, the Chinese firm announced plans this month to introduce its proprietary Harmony operating system in smartphones next year.
Chinese regulators will be looking at examples set by their peers in Europe and in India if they proceed with the antitrust investigation, the first source said. “China will also look at what other countries have done, including holding inquiries with Google executives,” said the person. The second source added that one learning point would be how to levy fines based on a firm’s global revenues rather than local revenues.
The European Union fined Google 4.3 billion euros ($5.1 billion) in 2018 over anticompetitive practices, including forcing phone makers to pre-install Google apps on Android devices and blocking them from using rivals to Google’s Android and search engine. That decision prompted Google to give European users more choice over default search tools and allow handset makers more leeway to use competing systems.
Indian authorities are looking into allegations that Google is abusing its market position to unfairly promote its mobile payments app.
"
|
4,060 | 2,020 |
"Bounteous Launches Compelling Product Experiences with Akeneo | VentureBeat"
|
"https://venturebeat.com/business/bounteous-launches-compelling-product-experiences-with-akeneo"
|
"Press Release: Bounteous Launches Compelling Product Experiences with Akeneo
Introducing a holistic approach to create advanced product experiences
CHICAGO–(BUSINESS WIRE)–September 30, 2020– Leading digital transformation agency Bounteous today announced a new joint offering, Compelling Product Experiences, in collaboration with long-standing partner Akeneo, a global leader in Product Experience Management (PXM) solutions. The offering is designed to help businesses successfully transform digital product experiences for their customers and provide a complete product management solution that increases productivity and online revenues.
Following a record-setting year, Bounteous was elevated from an Akeneo Silver Solutions partner to one of the very few Gold Solutions partners in North America and was also awarded the B2C Project of the Year at Akeneo PIM Summit (APS) 2020. Through this new offering, Bounteous and Akeneo aim to extend their strategic approach to any company that manufactures, supplies, distributes, or sells products in North America.
The Compelling Product Experiences solution is Product Experience Management (PXM) for organizations with complex data requirements. PXM is a newer concept for how audiences holistically experience products visually and from an informational perspective. It is not one specific tool or platform, but rather the experience that surrounds a product, often supported by multiple platforms. An effective PXM requires businesses to provide product information in a way that is both contextualized and personalized across the customer journey. Creating PXM depends upon a number of platforms working together.
Bounteous specializes in seamlessly integrating Akeneo PIM with various ERP, eCommerce, CMS, CDP, CRM, print, product syndication, and other platforms to help businesses create complete product experiences out of complex data and environments.
“Bounteous is a key partner for Akeneo in North America and helps organizations create state-of-the-art product experiences for their customers,” said Mike Bender, North America Vice President of Sales at Akeneo. “Their strong expertise across commerce platforms makes even the most challenging PXM solutions a success, returning the results that organizations need.”
With today’s consumers expecting seamless product experiences from beginning to end, and across all channels, Bounteous’ Compelling Product Experiences solution is timely, helping businesses navigate complex product data structures and perform advanced data modeling to implement successful solutions that deliver transformational product experiences.
“Akeneo’s Product Information Management solution and the ability of their team to craft strategic PXM solutions makes their offering one of the first we recommend to businesses in need of a digital product experience transformation,” said Jean Bordelon, Director of Data Management at Bounteous. “We truly believe that Product Experience Management is the foundation of a successful retail digital transformation.”
To learn more about the Compelling Product Experiences solution, visit the Bounteous website.
About Bounteous Founded in 2003 in Chicago, Bounteous creates big-picture digital solutions that help leading companies deliver transformational digital brand experiences. Our expertise includes Strategy, Experience Design, Solutions Engineering, Analytics and Marketing. Bounteous forms problem-solving partnerships with clients to envision, design, and build their digital futures. For more information, please visit www.bounteous.com.
For the most up-to-date news, follow Bounteous on Twitter, LinkedIn, Facebook, and Instagram.
About Akeneo Akeneo is a global leader in Product Experience Management (PXM) solutions that help merchants and brands deliver a compelling customer experience across all sales channels, including eCommerce, mobile, print, and retail points of sale. Akeneo’s open-source enterprise PIM, and product data intelligence solutions, dramatically improve product data quality and accuracy while simplifying and accelerating product catalog management.
Leading global brands, including Midland Scientific, Air Liquide, Fossil, Shop.com, and Auchan trust Akeneo’s solutions to scale and customize their omnichannel and cross-border commerce initiatives. Using Akeneo, brands and retailers can improve customer experience, increase sales, reduce time to market, go global, and boost team productivity.
For more information, please visit https://www.akeneo.com.
View source version on businesswire.com: https://www.businesswire.com/news/home/20200930005264/en/ Bounteous Caroline Habrowski (877) 220-5862 [email protected]
"
|
4,061 | 2,020 |
"Arm unveils new chips for advanced driver assistance systems | VentureBeat"
|
"https://venturebeat.com/business/arm-unveils-new-chips-for-advanced-driver-assistance-systems"
|
"Arm unveils new chips for advanced driver assistance systems
Arm today announced a suite of technologies intended to make it easier for autonomous car developers to bring their designs to market. According to the company, integrating three new processors onto a system-on-chip — the Arm Cortex-A78AE processor, Mali-G78AE graphics processor, and Mali-C71AE image signal processor — provides the power-efficient and safety-enabled processing required to achieve the potential of autonomous decision-making.
While fully autonomous vehicles or driverless cars might be years away from commercial deployment, automation features built into advanced driver assistance systems (ADAS) could help reduce the number of accidents by up to 40%. That’s critical, given that 94% of road traffic accidents occur due to human error, according to the U.S. National Highway Traffic Safety Administration, and it’s perhaps why the global ADAS market is projected to grow from $27 billion in 2020 to $83 billion by 2030. (Arm estimates automation in automotive and industrial sectors will be an $8 billion silicon opportunity in 2030.)
Arm says the Cortex-A78AE, Mali-G78AE, and Mali-C71AE — specialized versions of the existing Cortex-A78, Mali-G78, and Mali-C71 — are engineered to work in combination with supporting software and tools to handle autonomous vehicle workloads. On the software front, Arm offers Arm Fast Models, which can be used to build functionally accurate virtual platforms that enable software development and validation ahead of hardware availability. There’s also Arm Development Studio, which includes the Arm Compiler for Safety qualified by TÜV SÜD, one of the nationally recognized German testing laboratories providing vehicular inspection and product certification services.
Cortex-A78AE
The Cortex-A78AE is the successor to the Cortex-A76AE (which was announced a little less than two years ago), and Arm says the microarchitecture has been revamped on a number of fronts. It features additional fetch bandwidth, improved branch detection, and a memory subsystem with 50% higher bandwidth than the previous generation. But the Cortex-A78AE’s standout feature is perhaps the macro-operation cache, a structure designed to hold decoded instructions that decouples the fetch engines and execution to support dynamic code sequence optimizations.
Arm says these innovations together drive an over 30% performance improvement on the Spec2006 synthetic benchmark suite across both integer and floating-point routines. Moreover, they contribute to the Cortex-A78AE’s power efficiency. The Cortex-A78AE achieves targeted performance at 60% lower power on a 7-nanometer implementation and a 25% performance boost at the same power envelope.
Arm is touting the Cortex-A78AE’s security and privacy features as major platform advances. Pointer Authentication (PAC) ostensibly guards against return-oriented programming — statistically, the most common form of software exploit — by providing a cryptographic check of return addresses before they’re loaded into the program counter. Temporal diversity guards against common cause failures, while line lockout support avoids hitting bad locations in the cache structures. And a hybrid mode allows shared DSU-AE logic to continue operating in a “lock mode” while the processors remain independent, permitting individual processors to be taken offline for testing while the cluster itself remains available for compute.
The Cortex-A78AE can be scaled in processor clusters up to a maximum of four cores and in a variety of cache sizes across L1, L2, and L3. Multiple clusters can be grouped together to offer a many-core implementation (including a Cortex-A78AE and Cortex-A65AE), optionally with accelerators over the chip’s Accelerator Coherence Port.
Mali-G78AE
Complementing the Cortex-A78AE is the new Mali-G78AE, a graphics component Arm says addresses the need for heterogeneous compute in autonomous systems. The Mali-G78AE GPU offers a new approach for resource allocation with a feature called flexible partitioning, which enables graphics resources to be dedicated to different workloads while remaining separate from each other. Basically, the Mali-G78AE can be split to look like multiple GPUs within a system, with up to four dedicated partitions for workload separation that can be individually powered up, powered down, and reset with separate memory interfaces for transactions.
The Mali-G78AE scales from one shader core — the fundamental building block of Mali GPUs — to 24 shader cores. With the new architecture, this means scaling from one slice with one shader core up to eight slices, each with three shader cores. Slices come with independent memory interfaces, job control, and L2 cache to ensure separation for safety and security, and the slices can be grouped together in up to four partitions configurable in software. (The Mali-G78AE can be assembled as one large partition with eight slices and 24 shader cores or four smaller partitions sized according to workload needs.)
The Mali-G78AE also includes dedicated hardware virtualization, meaning that the GPU as a whole, or each individual partition, can be virtualized between multiple virtual machines. Beyond this, it comes with safety features, including lock-step, built-in self-testing, interface parity, isolation checks, and read-only memory protection.
Mali-C71AE
The last of the three chips unveiled today — the Mali-C71AE — leverages hardware safety mechanisms and diagnostic software to prevent and detect faults and ensure “every-pixel reliability.” In fact, Arm says the Mali-C71AE is the first product in the Mali camera series of ISPs with built-in features for functional safety applications.
The Mali-C71AE supports up to four real-time camera inputs or 16 camera streams from memory. Camera inputs can be processed in a range of ways, including in as-received order, in a programmed order, or in various other software-defined patterns. Advanced spatial noise reduction, per-exposure noise profiling, and chromatic aberration correction deliver optimized data for computer vision applications and real-time safety features for ADAS and human-machine interface applications, enabling system-level functional safety compliance with over 400 dedicated fault-detection circuits and built-in self-test. Moreover, with its 24-bit processing of ultra-wide dynamic range, the Mali-C71AE offers independent dynamic range management, region-of-interest crops, and planar histograms for further analysis.
Arm says all of the new hardware is available to partners as of today.
"
|
4,062 | 2,020 |
"Accenture: Tech companies' disregard for inclusion drives women away | VentureBeat"
|
"https://venturebeat.com/business/accenture-tech-companies-disregard-for-inclusion-drives-women-away"
|
"Accenture: Tech companies’ disregard for inclusion drives women away
A joint report from Accenture and Girls Who Code found a massive perception gap between leaders in the tech industry — including C-suite executives and senior human resource officers — and its female-identifying employees. While 77% of leaders think their workplace empowers women, only 54% of these women agree. And while 45% of leaders claim it’s easy for women to thrive in tech-related jobs, only 21% of women overall (and 8% of women of color) feel the same way.
These findings from the report “ Resetting Tech Culture ” are based on online surveys completed by three distinct groups within the United States in 2019: 1,990 tech employees (1,502 of whom identify as women), 500 senior human resources leaders, and 2,700 college students. The researchers then analyzed workplace culture by applying a linear regression model to the survey results, which quantified the impact of different cultural factors on women’s advancement.
According to the report, the disparity is all about culture and opportunity: uncomfortable classroom settings in college, or even high school, combined with less-than-ideal company work environments, lead over 50% of young women in technology roles to drop out of the industry by the age of 35.
Senior human resources leaders are largely responsible for workplace culture. They’re changemakers who determine who is hired, how they work, and what they work on. But according to the survey results, they largely overestimate how safe and welcoming their workplaces are while underestimating how difficult it is for women to build their careers in technology.
This perception gap is key because leadership undervalues inclusion in the workplace and remains focused on hiring women when there’s an existing attrition problem. The report indicates that leaders tend to center their efforts on hiring rather than retaining women. An emphasis on hiring makes it less likely for women to advance in their career within a company; the company then misses out on reduced bias, a more equitable workplace, and an overall improved culture. The report asserts that the corporate world cannot improve at the rate it needs to without the contributions of women.
This report identifies five actionable cultural practices that can curb this trend: strengthening parental leave policies, selecting diverse leaders for senior teams, developing women-specific mentorship programs, rewarding employees for creativity, and scheduling networking events that are open to all team members. It expects that these changes could help ensure up to 3 million early-in-career women will work in technology roles by 2030. That’s almost twice as many as there are right now, according to the report.
Accenture and Girls Who Code say this reset would help to “drive much-needed change: [the] analysis suggests that if every company scored high on measures of an inclusive culture — specifically, if they were on par with those in the top 20% of [the] study — the annual attrition rate of women in tech could drop by up to 70%.” Although the number of women working in technology as a whole has increased, the proportional gender imbalance in technology today is actually greater than it was 35 years ago. This disequilibrium hurts not only women’s earnings and advancement but also the goals of technology companies, because inclusivity and innovation are closely intertwined.
And if technology is the future, these next few years present a golden opportunity to make it work for everyone. Accenture and Girls Who Code believe that this begins with resolving the critical disconnect between tech leaders and their employees through empathy and women-focused policies.
"
|
4,063 | 2,020 |
"Using machine learning to tackle some of the world’s biggest problems (Infographic) | VentureBeat"
|
"https://venturebeat.com/ai/using-machine-learning-to-tackle-some-of-the-worlds-biggest-problems-infographic"
|
"Sponsored: Using machine learning to tackle some of the world’s biggest problems (Infographic)
Presented by AWS Machine Learning
It takes a combination of imagination, innovation, and machine learning to help create change in the world. The promise of machine learning for social good is being realized as the technology evolves to a point where organizations across industries and categories can leverage its unique power. Innovators are tapping into the best practices, in-depth expertise, and powerful solutions of companies like AWS to launch new initiatives and solutions that are improving lives and protecting our planet right now.
With machine learning, organizations are making inroads toward protecting biodiversity, supporting our veterans, finding homes for the homeless, understanding climate change, and more. But this is just the beginning.
Here’s a look at some of the world’s most powerful, promising applications of machine learning to benefit society.
For a more in-depth read, see this recent article on VentureBeat and see more ways machine learning is being used to tackle today’s biggest social, humanitarian, and environmental challenges.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected].
"
|
4,064 | 2,020 |
"Siren's smart socks remotely monitor foot health for diabetics | VentureBeat"
|
"https://venturebeat.com/ai/sirens-smart-socks-remotely-monitor-foot-health-for-diabetics"
|
"Siren’s smart socks remotely monitor foot health for diabetics
Siren , a startup developing washable smart socks designed to help remotely monitor diabetes patients, today raised $9 million. According to CEO Ran Ma, the extension to Siren’s series B round will be used to grow the team and meet demand for its product and services.
In response to the pandemic, companies like Current Health and Twistle have teamed up with health care providers to pilot at-home wellness-tracking platforms. ABI Research predicts that by 2025, spending on AI in health care and pharmaceuticals will increase by $1.5 billion as a result of the novel coronavirus.
Ma studied biomedical engineering at Northwestern University’s Feinberg School of Medicine and worked with Siren’s founding team to develop the socks and foot monitoring systems that track issues related to inflammation, particularly in people with diabetes. Siren monitors foot temperature continuously at six key points, notifying wearers and their doctors via an app and text when it detects signs of inflammation.
Siren points to U.S. National Institutes of Health recommendations that people with neuropathy check their feet daily for signs of injury that can lead to ulcers, skin infections, and more. A 2007 study published in the American Journal of Medicine found that temperature monitoring, in contrast to visual checks alone, can improve ulcer-related outcomes by 87%.
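To make the monitoring idea concrete, here is a minimal sketch in Python of the kind of rule such a system could apply. This is not Siren's actual algorithm: the six point names are hypothetical, and the 2.2 °C (roughly 4 °F) left-right difference threshold is the gap commonly used in the temperature-monitoring literature, not a figure from Siren.

```python
# Illustrative only: flag possible foot inflammation from left/right
# temperature asymmetry. Point names and thresholding are hypothetical.
POINTS = ["hallux", "first_met", "third_met", "fifth_met", "arch", "heel"]
THRESHOLD_C = 2.2  # ~4 °F, the gap used in temperature-monitoring studies


def inflammation_alerts(left, right, threshold=THRESHOLD_C):
    """Return the points where the left/right temperature gap exceeds the threshold."""
    return [p for p in POINTS if abs(left[p] - right[p]) > threshold]


left = {p: 30.0 for p in POINTS}
right = dict(left, heel=33.0)  # right heel runs 3.0 °C hotter than the left
print(inflammation_alerts(left, right))  # -> ['heel']
```

A real system would also smooth readings over time before alerting, since a single hot reading (after a bath, say) is not evidence of inflammation.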
To ensure accuracy, Siren ships replacement socks every six months. The company says its system adheres to Medicare and Medicaid guidance for remote patient monitoring services, with support for clinical population management and decisioning tools, as well as 24-hour support for clinicians and patients.
Ma says demand for Siren’s products has risen sharply during the pandemic. Monthly subscriptions in 2020 are up 19 times and the number of patients with signed contracts rose 340%, exceeding expectations for the year. Meanwhile, the number of ordering clinics more than tripled (up 216%), spurring Siren to expand its workforce.
The company secured $11.8 million earlier this year, putting its total series B funding at around $21 million. Previous investors include Khosla Ventures, Founders Fund, DCM Ventures, Gaingels, and Anathem Ventures.
Siren occupies a crowded market of connected and “smart” textiles that could top 10 million in sales this year, according to Tractica.
In 2013, Owlet launched a sock for young children that gave information on heart rate, blood oxygen levels, sleep quality, skin temperature, sleep position, and other vitals. And Sensoria sells socks made of running-friendly fabric infused with sensors that track step count, speed, and calories burned through an app.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
"
|
4,065 | 2,020 |
"62 Schnucks Supermarkets will deploy Simbe inventory-tracking robots | VentureBeat"
|
"https://venturebeat.com/ai/simbe-inks-deal-with-schnuck-to-deploy-robots-in-over-100-supermarkets"
|
62 Schnucks Supermarkets will deploy Simbe inventory-tracking robots
Simbe Robotics , a developer of grocery store inventory robots, today announced it has inked a deal with Schnucks Markets to roll out robots to 62 supermarkets across the U.S. In 15 Missouri and Illinois pilots during the pandemic, Simbe claims its robots have sped up inventory and replenishment by 14 times at Schnucks locations while minimizing the number of workers in the aisles, reducing out-of-stock incidents by 20%.
Robots are coming for the grocery aisle, promising to save storeowners time by inventorying stock quickly and accurately. Research and Markets anticipates that the global brick-and-mortar automation market will be worth close to $18.9 million by 2023, which some analysts say could cut down on the billions of dollars in lost revenue traced to misplaced and erroneously priced items. Robots also have the benefit of promoting contactless, physically distant shopping and work environments. A July survey by researchers at Pompeu Fabra University found that over 195 different kinds of robots have been piloted in hospitals, health centers, airports, office buildings, and other public spaces since the start of the pandemic.
This opportunity motivated Brad Bogolea, formerly at Silver Spring Networks, to found Simbe Robotics in 2015. Together with Willow Garage veterans Jeff Gee and Mirza Shah, Bogolea sought to transform the retail industry with robots capable of keeping real-time tabs on inventory. Only a few years later, Simbe’s Tally robot has navigated more than 25,000 miles in stores owned by Giant Eagle, Decathlon Sporting Goods, and Groupe Casino.
When installing the 6-foot, 85-pound, 12-camera Tally, Simbe drives the robot around to create a store map and “teach” it the location of its charging dock. (Simbe claims customers see Tally achieve baseline accuracy in about a week.) Once configured, Tally operates on its own, only requiring remote or on-site servicing if something goes wrong.
Tally taps computer vision to determine which products are missing from a shelf and which (if any) lack facings and RFID, allowing it to take precise inventory counts. A single robot can scan 15,000 to 30,000 products per hour, or around 80,000 SKUs in roughly two hours. That’s compared with the average employee, who spends 20 to 30 hours a week scanning 10,000 to 20,000 products.
In a typical setup, Tally performs rounds three times per day: once in the morning to check the previous night’s restock, once in the afternoon to fill holes with backstock, and again in the evening to provide recommendations to restockers. It’s designed for grocery stores, drug stores, value stores, clothing stores, and consumer electronics stores larger than 5,000 square feet and captures all items excepting bakery, deli, and produce.
There’s a benefit for customers, as well. Via a mobile app, Tally provides product location information at select stores, down to the aisle and section number.
Simbe doesn’t sell Tally units. Rather, the company provides retailers with the hardware for a monthly service fee, which varies depending on factors like deployment size, the number of SKUs scanned, and whether computer vision and/or RFID scanning are actively used.
Simbe competes with a number of companies in the retail automation space, including San Francisco-based Bossa Nova, which last year raised $29 million for a robot that scans store shelves for missing inventory.
Pensa Systems eschews wheeled robots for autonomous quadcopters that track store inventory from above. U.K. supermarket chain Ocado — one of the world’s largest online-only grocery retailers — has engineered a packing system that uses computer vision to transfer goods. And Takeoff Technologies’ platform, which works out of pharmacies, convenience stores, and quick-service restaurants, doubles as a pick-up station, complete with lockers for easy access.
But Simbe has the benefit of an agreement with SoftBank that will enable it to build 1,000 additional robots within the next few months. With a $26 million funding round that closed last September, the company looks to be well-positioned for growth.
"
|
4,066 | 2,020 |
"SendinBlue raises $160 million to automate repetitive marketing tasks | VentureBeat"
|
"https://venturebeat.com/ai/sendinblue-raises-160-million-to-automate-repetitive-marketing-tasks"
|
SendinBlue raises $160 million to automate repetitive marketing tasks
Marketing automation startup SendinBlue today announced a $160 million funding round. A company spokesperson says the money will be put toward accelerating go-to-market efforts as it adapts to growth during the pandemic.
Lockdowns and shelter-in-place orders aimed at beating back the novel coronavirus have forced marketers to fully embrace digital. According to a report published by The CMO Survey, some 60.8% of respondents indicated they’ve “shifted resources to building customer-facing digital interfaces” and 56.2% transformed their businesses to focus on digital opportunities. Moreover, marketers reported customers’ increased openness to digital offerings introduced during the pandemic and said they were likely to see greater value in digital experiences.
SendinBlue, which was cofounded by Polytechnique graduate Armand Thiberge in 2012, competes with companies like Mailchimp and offers solutions aimed at expediting common marketing tasks. Initially focused on email, the company pivoted to address businesses’ increased demand for online acquisition and loyalty tools. Using its pipelines, clients can start by sending newsletters before diving deeper with templates and chat tools that tie into their websites. SendinBlue says these products were designed to be easy for marketers to use and are aimed at companies in industries like hospitality, construction, ecommerce, and manufacturing.
SendinBlue’s platform provides a range of email, SMS, and chat messaging tools, as well as integrations with existing customer relationship management systems. Via transactional email and segmentation, customers can set up the design, engagement, and discoverability of messages and send messages in a more targeted way. With landing pages, signup forms, and retargeting, those customers can create more targeted visitor experiences and grow their email contact list while showing ads to website visitors as they browse other websites.
But SendinBlue’s real differentiator lies in automation. Leveraging AI and machine learning, the company’s MailClark email bot extracts relevant content from emails, prequalifies them, and handles specific actions to optimize response time. Customers can use MailClark within the platform or integrate it with third-party apps via an API.
SendinBlue claims it achieved 60% year-over-year growth even before the pandemic. But between March and June, the company saw a 50% uptick in business and reached more than 180,000 customers across over 160 countries. With 70% of SendinBlue’s revenue coming from abroad, Thiberge says he now plans to focus on international expansion. The startup recently opened its first office in Toronto, bringing its total number of offices to five and its headcount to over 400.
SendinBlue raised $33 million in September 2017. This latest round, which was led by Bridgepoint Development Capital and BPI, brings the startup’s total raised to nearly $200 million.
"
|
4,067 | 2,020 |
"Pixel 5 fails to live up to Google's AI showcase device | VentureBeat"
|
"https://venturebeat.com/ai/pixel-5-fails-to-live-up-to-googles-ai-showcase-device"
|
Pixel 5 fails to live up to Google’s AI showcase device
As widely predicted, Google announced two smartphones during its Launch Night In event today: The Pixel 5 and Pixel 4a (5G). The Pixel 5 is the follow-up to last year’s Pixel 4 , while the Pixel 4a (5G) is a 5G-compatible version of the Pixel 4a that launched in August.
Neither phone appears to introduce many AI-powered features that aren’t already available on existing Pixel devices. (Pixel hardware has historically been a showcase for Google’s AI innovations.) Instead, they seem aimed at nudging the lineup toward the midrange. Affordability is the focus rather than cutting-edge technology, along with the recognition that neither phone is likely to make a splash in a highly saturated market.
Reportedly, Google plans to produce fewer than 1 million Pixel 5 smartphones this year; production could be as low as around 800,000 units for the 5G-capable Pixel 5.
The Pixel 5 might be a successor in name, but it’s arguably a downgrade from the Pixel 4 in that it swaps the Qualcomm Snapdragon 855 processor for the less-powerful Snapdragon 765G. The RAM capacity has been bumped from 6GB to 8GB, which could make tasks like app-switching faster. The Pixel 5 also has a 4,080mAh battery — the largest in any Pixel to date. Google claims it lasts up to 48 hours on a charge with Extreme Battery Saver, a mode that lets users choose which apps remain awake.
Speaking of the battery, the Pixel 5 introduces Battery Share, a reverse charging feature that can be used to wirelessly recharge Google’s Pixel Buds and other Qi-compatible devices. It’s akin to the Qi reverse wireless charging features found in Samsung’s Galaxy S10 and S20 series.
Above: The Pixel 5.
The Pixel 5 retains the 90Hz-refresh-rate, 6-inch, 2,340×1,080 OLED display (19.5:9 aspect ratio) introduced with the Pixel 4, as well as the Pixel 4’s rear-facing 12.2-megapixel and 16-megapixel cameras. (The 16-megapixel camera might have an ultra-wide lens, rather than the Pixel 4’s telephoto lens.) As for the front-facing camera, it’s a single 8-megapixel wide-angle affair. There’s a fingerprint sensor on the rear of Pixel 5, harking back to the Pixel 3, and Google has ditched the Pixel 4’s gesture-sensing Soli radar in favor of a streamlined design.
Other Pixel 5 highlights include IP68-rated water- and dust-resistant casing, sub-6GHz 5G compatibility, and 18W USB-C charging and wireless charging. There’s also Hold for Me , a Google Assistant-powered feature that waits on hold for you and lets you know when someone’s on the line. (Currently, Hold for Me is only available in the U.S. in English for toll-free numbers, Google says.) Google’s night shooting mode, Night Sight, now works in portrait mode; Portrait Light illuminates portraits even when they’re backlit; and Cinematic Pan creates a “sweeping” video effect by stabilizing and slowing down motion.
The Pixel 4a (5G) is a tad less exciting, but it sports a larger display than the Pixel 4 (6.2 inches versus 5.8 inches). It also shares the Pixel 5’s 2,340×1,080 resolution, processor, and cameras alongside a headphone jack, but at the expense of other components. The Pixel 4a (5G) makes do with a 60Hz screen refresh rate, 6GB of RAM, a 3,885mAh battery, and Gorilla Glass 3 instead of the Pixel 5’s Gorilla Glass 6, with no IP rating for water or dust resistance.
The Pixel 4a (5G) will cost $499, according to Google — a $150 premium over the $349 Pixel 4a. It’s available in the U.S., Canada, U.K., Ireland, France, Germany, Japan, Taiwan, and Australia. The Pixel 5 costs around $699 in the U.S., U.K., Canada, Ireland, France, Germany, Japan, Taiwan, and Australia, which makes it far cheaper than the $799-and-up Pixel 4.
"
|
4,068 | 2,020 |
"NIST is crowdsourcing differential privacy techniques for public safety datasets | VentureBeat"
|
"https://venturebeat.com/ai/nist-is-crowdsourcing-differential-privacy-techniques-for-public-safety-datasets"
|
NIST is crowdsourcing differential privacy techniques for public safety datasets
The National Institute of Standards and Technology (NIST) is launching the Differential Privacy Temporal Map Challenge.
It’s a set of contests, with cash prizes attached, intended to crowdsource new ways of handling personally identifiable information (PII) in public safety datasets.
The problem is that although rich, detailed data is valuable for researchers and building AI models — in this case, in the areas of emergency planning and epidemiology — using it raises serious and potentially dangerous data privacy and rights issues.
Even if datasets are kept under a proverbial lock and key, malicious actors can, based on just a few data points, re-identify individuals and infer sensitive information about them.
The solution is to de-identify the data such that it remains useful without compromising individuals’ privacy. NIST already has a clear standard for what that means. In part, it says “De-identification removes identifying information from a dataset so that individual data cannot be linked with specific individuals.” The purpose of the challenge is to find better ways to do that using a technique called differential privacy, which essentially introduces enough noise into datasets to ensure privacy.
Differential privacy is widely used in products from companies like Google, Apple, and Nvidia , and lawmakers are leaning on it to inform data privacy policy.
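To make the noise-adding idea concrete, here is a minimal sketch (not tied to any particular NIST entry) of the Laplace mechanism, the textbook way of calibrating noise to a query: a counting query has sensitivity 1, so Laplace noise with scale 1/ε yields ε-differential privacy. The 911-call framing is just an illustrative example.

```python
import math
import random


def dp_count(true_count, epsilon):
    """Release a count with epsilon-differential privacy via the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so noise drawn from
    Laplace(0, 1/epsilon) suffices.
    """
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    # Inverse-CDF sampling: X = -b * sgn(u) * ln(1 - 2|u|), with b = 1/epsilon
    noise = -(1.0 / epsilon) * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)
    return true_count + noise


# e.g. a private count of 911 calls from one map cell in one hour
print(dp_count(42, epsilon=0.5))
```

Smaller ε means more noise and stronger privacy; the challenge is precisely about spending that noise budget wisely across many map cells and time windows so the released data stays useful.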
Specifically, the challenge focuses on temporal map data, which contains temporal and spatial information. The call for the NIST contest states, “Public safety agencies collect extensive data containing time, geographic, and potentially personally identifiable information.” For example, a 911 call would reveal a person’s name, age, gender, address, symptoms or situation, and more. The NIST announcement notes that “Temporal map data is of particular interest to the public safety community.” The Differential Privacy Temporal Map Challenge stands on the shoulders of previous NIST differential privacy challenges — one centered on synthetic data and one aimed at developing the technique more generally.
NIST is offering a total of $276,000 in prize money across three categories. The Better Meter Stick will award a total of $29,000 to entries that measure the quality of differentially private algorithms. A total of $147,000 is available for those who come up with the best balance of data utility and privacy preservation. And the category that rewards usable, open source code has a $100,000 pot.
The challenge is accepting submissions now through January 5, 2021. Non-federal agency partners include DrivenData, HeroX, and Knexus Research. Winners will be announced February 4, 2021.
"
|
4,069 | 2,020 |
"Launch Night In Google: How to watch, and what to expect | VentureBeat"
|
"https://venturebeat.com/ai/launch-night-in-google-how-to-watch-and-what-to-expect"
|
Launch Night In Google: How to watch, and what to expect
During its Launch Night In event, which kicks off today at 11 a.m. Pacific (2 p.m. Eastern), Google is expected to launch new hardware across its product families. Leaks and premature sales spoiled some surprises — eagle-eyed buyers managed to snag Google’s new Chromecast from Home Depot , while Walmart’s mobile app leaked the specs of the Nest Audio smart speaker. Still, there’s a chance Google has an ace or two up its sleeve.
Here’s what we expect to see during this afternoon’s livestream, which can be found here.
Pixel 5 and Pixel 4a 5G Pixel 5 It’s all but certain Google will announce two smartphones today: The Pixel 5 and Pixel 4a 5G. The Pixel 5 is the follow-up to last year’s Pixel 4 , while the Pixel 4a 5G is a 5G-compatible version of the Pixel 4a that launched in August.
While the Pixel 5 might be a successor in name, it’s a potential downgrade from the Pixel 4 in that it reportedly swaps the Qualcomm Snapdragon 855 processor for the less-powerful Snapdragon 765G. Leaks suggest the RAM capacity has been bumped from 6GB to 8GB, which could make tasks like app-switching faster. The Pixel 5 is also rumored to have a 4,080mAh battery, which would be the largest in any Pixel to date.
We expect the Pixel 5 to retain the 90Hz-refresh-rate, 6-inch, 2340×1080 OLED display (19.5:9 aspect ratio) introduced with the Pixel 4, as well as the Pixel 4’s rear-facing 12.2-megapixel and 16-megapixel cameras. (The 16-megapixel camera might have an ultra-wide lens, rather than the Pixel 4’s telephoto lens.) As for the front-facing camera, it’s rumored to be a single 8-megapixel wide-angle affair. Some outlets report that there’s a fingerprint sensor on the rear of Pixel 5, harkening back to the Pixel 3, and Google has apparently ditched the Pixel 4’s gesture-sensing Soli radar in favor of a streamlined design.
Other reported Pixel 5 highlights include IP68-rated water- and dust-resistant casing, sub-6GHz 5G compatibility, and 18W USB-C charging and wireless charging. The Pixel 5 is anticipated to cost around $699 in the U.S., U.K., Canada, Ireland, France, Germany, Japan, Taiwan, and Australia, which would make it far cheaper than the $799-and-up Pixel 4.
Pixel 4a 5G The Pixel 4a 5G is a tad less exciting, but rumors imply it will sport a larger display than the Pixel 4 (potentially 6.2 inches versus 5.8 inches). It might also share the Pixel 5’s 2340×1080 resolution, processor, and cameras alongside a headphone jack, but supposedly at the expense of other components. The Pixel 4a 5G is rumored to make do with a 60Hz screen refresh rate, 6GB of RAM, a 3,885mAh battery, and Gorilla Glass 3 instead of the Pixel 5’s Gorilla Glass 6, with no IP rating for water or dust resistance.
The Pixel 4a 5G will cost $499, according to Google — a $150 premium over the $349 Pixel 4a. It will be available in the U.S., Canada, the U.K., Ireland, France, Germany, Japan, Taiwan, and Australia when it goes on sale, likely later today.
Chromecast with Google TV and Nest Audio Chromecast with Google TV Google’s new Chromecast dongle runs Google TV, a rebrand of Android TV, Google’s TV-centric operating system. Unlike previous Chromecast devices, it ships with its own remote control featuring a directional pad with buttons for Google Assistant, YouTube, and Netflix.
The new Chromecast supports 4K, HDR, and multiple Google accounts, as well as Bluetooth devices and USB-to-Ethernet adapters. But it doesn’t appear to tightly integrate with Google’s Stadia gaming service — at least not out of the box. The Verge’s Chris Welch, who managed to get his hands on a Chromecast unit early this week, reports that he sideloaded the Stadia app without issue and streamed a few titles with an Xbox controller.
The new Chromecast costs $50, or $20 less than the Chromecast Ultra.
Nest Audio Details about Nest Audio leaked more or less in full on Monday (courtesy of Walmart). The new speaker, which aligns with the design of the Nest Mini and Nest Hub, is covered in a mesh fabric made with 70% recycled materials and features four status LEDs and Bluetooth connectivity. It stands vertically and is substantially louder than the original Google Home speaker, with Google claiming it provides 75% louder audio and 50% stronger bass. Like the Google-made smart speakers before it, Nest Audio works with other Nest speakers and displays for multiroom audio and leverages Google Assistant for voice-controlled music, podcasts, and audiobooks from Spotify, YouTube Music, and more.
It’s also likely that Nest Audio will pack a dedicated AI chip for workloads like natural language understanding, speech recognition, and text synthesis. Google introduced such a chip with the Nest Mini and Google Wifi last year, claiming at the time that it could deliver up to a teraflop of processing power.
Nest Audio is expected to come in several colors and cost around $100.
GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
4,070 | 2,020 |
"How bots threaten to derail the 2020 U.S. elections | VentureBeat"
|
"https://venturebeat.com/ai/how-bots-threaten-to-influence-conversations-ahead-of-the-2020-u-s-elections"
|
"How bots threaten to derail the 2020 U.S. elections
[Image: 3D rendering, robots speaking no evil, hearing no evil, seeing no evil]
In a paper published last year in the journal Nature Communications , researchers at the City College of New York investigated the spread of fake news on Twitter in the months leading up to the 2016 U.S. elections. Drawing from a dataset of 171 million tweets sent by 11 million unique accounts, the team attempted to quantify the importance of tweets and top “news” spreaders, concluding that bots tweeted links to websites containing fake news at rates higher than any other group.
This insight wasn’t new — the role bots play in disseminating false and misleading information was already well-established.
Research by Indiana University scientists found that over a 10-month period between 2016 and 2017, bots targeted influential users through replies and mentions in order to surface untrue stories before they went viral. During the 2017 Catalan referendum for independence in Spain, bots generated and promoted violent content aimed at users calling for independence.
But as the U.S. 2020 elections approach, experts are concerned that bots will evade even fairly sophisticated filters to amplify misleading information, disrupt get-out-the-vote efforts, and sow confusion in the election’s aftermath. While people in academia and private industry continue to pursue new techniques to identify and disable bots, the extent to which bots can be stopped remains unclear.
Campaigns Bots are now used around the world to plant seeds of unrest, either through the spread of misinformation or the elevation of controversial points of view. An Oxford Internet Institute report published in 2019 found evidence of bots spreading propaganda in 50 countries, including Cuba, Egypt, India, Iran, Italy, South Korea, and Vietnam. Between June 5 and 12 — ahead of the U.K.’s referendum on whether to leave the EU (Brexit) — researchers estimate half a million tweets on the topic came from bots.
U.S. social media continues to be plagued by bots, most recently targeting the coronavirus pandemic and the Black Lives Matter movement. A team at Carnegie Mellon found that bots may account for up to 60% of accounts discussing COVID-19 on Twitter and tend to advocate false medical advice, conspiracy theories about the virus, and a push to end lockdowns. Bot Sentinel, which tracks bot activity on social networks, in early July observed new accounts promoting Black Lives Matter disinformation campaigns, including false assertions that billionaire George Soros is funding the protests and that the killing of George Floyd was a hoax.
But the activity perhaps most relevant to the upcoming elections occurred last November, when “cyborg” bots spread misinformation during local Kentucky elections. (Cyborg accounts attempt to evade Twitter’s spam detection tools by transmitting some tweets from a human operator.) VineSight, a company that tracks social media misinformation, uncovered small networks of bots retweeting and liking messages that cast doubt on the gubernatorial results before and after polls closed.
A separate Indiana University study sheds light on the way social bots like those identified by VineSight actually work. The bots pounce on fake news and conspiracy theories in the seconds after they’re published and retweet them broadly, encouraging human users to do the subsequent retweeting. Then the bots mention influential users in an effort to spur them to reshare (and in the process legitimize and amplify) tweets. The coauthors spotlight a single bot that mentioned @realDonaldTrump (President Trump’s Twitter handle) 19 times, linking to the false claim that millions of votes were cast by illegal immigrants in the 2016 presidential election.
Why are humans so susceptible to bot-driven content? The Indiana University study speculates that novelty plays a role. Novel content, which attracts attention because it’s often surprising and emotional, can confer social status on the sharer, who is seen as someone “in the know.” False news also tends to inspire more surprise and disgust than truthful news does, motivating people to engage in reckless sharing behaviors.
Recognizing this psychological component, Twitter recently conducted an experiment that encouraged users to read the full content of articles before retweeting them. Beginning in May, the social network prevented a subset of users from retweeting any tweet containing a link before clicking to open that link. After several months, Twitter concluded that the experiment was a success: Users opened articles before sharing them 40% more often than they did without the prompt.
Not all bots are created equal In anticipation of campaigns targeting the 2020 U.S. elections, Twitter and Facebook claim to have made strides in detecting and removing bots that promote false and harmful content. Twitter head of integrity Yoel Roth says the company’s “proactive work” has led to “significant gains” in tackling manipulation across the network, with the number of suspected bot accounts declining 9% this summer versus the previous reporting period. In March, Facebook revealed that one of its AI tools has helped identify and disable over 6.6 billion fake accounts since 2018.
But while some bots are easy to spot and take down, others are not. In a keynote at this year’s Black Hat security conference, Stanford Internet Observatory research manager Renee DiResta noted that China-orchestrated bot campaigns tend to be less effective than Russian efforts. That’s partly because it remains difficult for China to apply certain tactics to Western platforms banned in China. DiResta pointed out that Chinese bots often have blocks of related usernames, stock profile photos, and primitive biographies.
But while Russia’s high-profile social media efforts have smaller audiences on the whole (e.g., RT’s 6.4 million Facebook followers versus China Daily’s 99 million), they see engagement “an order of magnitude higher” because of their reliance on memes and other “snackable” content. “Russia is, at the moment, kind of best in class at information operations,” DiResta said. “They spend a fraction of the budget that China does.” As early as March, Twitter and Facebook revealed evidence suggesting Russian bots were becoming more sophisticated and harder to detect. Facebook said it took down a network of cyborg accounts — posting about topics ranging from Black history to gossip and fashion — that were operated by people in Ghana and Nigeria on behalf of agents in Russia. And some of the accounts on Facebook attempted to impersonate legitimate nongovernmental organizations (NGOs). Meanwhile, Twitter said it removed bots emphasizing fake news about race and civil rights.
The twin reports followed a University of Wisconsin-Madison study that found Russia-linked social media accounts were posting about the same flashpoint topics — race relations, gun laws, and immigration — as they did in 2016. “For normal users, it is too subtle to discern the differences,” Kim told the Wisconsin State Journal in an interview earlier this year. “By mimicking domestic actors, with similar logos (and) similar names, they are trying to avoid verification.” Mixed findings Despite the acknowledged proliferation of bots ahead of the 2020 U.S. elections, their reach is a subject of debate.
First is the challenge of defining “bots.” Some define them as strictly automated accounts (like news aggregators), while others include software like Hootsuite and cyborg bots. These differences of opinion manifest in bot-analyzing tools like SparkToro’s Fake Followers Audit tool, Botcheck.me, Bot Sentinel , and NortonLifeLock’s BotSight , which each rely on different detection criteria.
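Because these tools weight different public signals, their verdicts can disagree. A toy illustration of the general approach (every threshold, weight, and field name below is invented for this sketch; real detectors such as these combine many more features with trained classifiers):

```python
# Toy bot-scoring heuristic built from a few public account signals.
# All thresholds and weights are hypothetical, for illustration only.
def bot_score(account):
    score = 0
    # Very high posting rates are a classic automation signal.
    if account["tweets_per_day"] > 100:
        score += 40
    # Default avatars and digit-heavy handles are weaker signals.
    if account["default_avatar"]:
        score += 20
    if sum(c.isdigit() for c in account["handle"]) >= 4:
        score += 20
    # Brand-new accounts following thousands of users are suspicious.
    if account["age_days"] < 30 and account["following"] > 1000:
        score += 20
    return min(score, 100)  # 0-100, higher means more bot-like

suspect = {"tweets_per_day": 250, "default_avatar": True,
           "handle": "news88214", "age_days": 5, "following": 4000}
print(bot_score(suspect))  # 100
```

Disagreements between tools come down to exactly these choices: which signals count, how much each is worth, and where the cutoff sits.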
In a statement provided to Wired about a bot-identifying service developed by University of Iowa researchers, Twitter disputed the notion that third-party services without access to its internal datasets can accurately detect bot activity. “Research based solely on publicly available information about accounts and tweets on Twitter often cannot paint an accurate or complete picture of the steps we take to enforce our developer policies,” a spokesperson said.
USC study lead author Emilio Ferrara generally agrees with the University of Iowa researchers’ findings. “As social media companies put more efforts to mitigate abuse and stifle automated accounts, bots evolve to mimic human strategies. Advancements in AI enable bots producing more human-like content,” he said. “We need to devote more efforts to understand how bots evolve and how more sophisticated ones can be detected. With the upcoming 2020 U.S. elections, the integrity of social media discourse is of paramount importance to allow a democratic process free of external influences.” To be clear, there’s no silver bullet. As soon as Twitter and Facebook purge bots from their rolls, new ones crop up to take their place. And even if fewer make it through automated filters than before, false news tends to spread quickly without much help from the bots sharing it. Vigilance — and more experiments in the vein of Twitter’s sharing prompt — could be partial antidotes. But with the clock ticking down to Election Day, it’s safe to suggest that any efforts will be wildly insufficient.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
"
|
4,071 | 2,020 |
"Google's Smart Cleanup taps AI to streamline data entry | VentureBeat"
|
"https://venturebeat.com/ai/googles-smart-cleanup-taps-ai-to-streamline-data-entry"
|
"Google’s Smart Cleanup taps AI to streamline data entry
In June, Google unveiled Smart Cleanup , a Google Sheets feature that taps AI to learn patterns and autocomplete data while surfacing formatting suggestions. Now, following a months-long beta, Smart Cleanup is today launching into general availability for all G Suite users.
Smart Cleanup comes as Google looks to inject G Suite with more AI-powered functionality. Recently, the company added a feature that lets users ask natural language questions about data in spreadsheets, like “Which person has the top score?” and “What’s the sum of price by salesperson?” Google Meet earlier this year gained adaptive noise cancellation.
And two years ago, Google rolled out Quick Access, a machine learning-powered tool that suggests files relevant to documents users are editing, to Sheets, Docs, and Slides.
As G Suite product manager Ryan Weber explained in an interview with VentureBeat, Smart Cleanup was created in an attempt to unify and improve the discoverability of Sheets’ existing AI-powered auto-formatting features. “What we find is that just because the functionality is there doesn’t always mean that users know it and know how to use it,” he said. Weber gave the example of white-space-trimming and data-deduplication tools that launched over a year ago. “The problem is that no one knows these features exist — they don’t know what to look for in the menus.” Smart Cleanup is proactive in the sense that it surfaces suggestions in Sheets’ side panel. It helps identify and fix duplicate rows and number-formatting issues, showing column stats that provide a snapshot of data, including the distribution of values and the most frequent value in a column. At the same time, Smart Cleanup evaluates whether common cleanup actions like removing duplicates are relevant for a given sheet and spotlights the most appropriate suggestions to aid users in streamlining data prior to analysis.
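The column stats described above (value distribution and most frequent value) can be illustrated in a few lines of Python. This is a hypothetical sketch, not Sheets’ implementation:

```python
from collections import Counter

# Sketch of the kind of per-column summary Smart Cleanup surfaces:
# distinct values, the most frequent value, and its share of rows.
def column_stats(values):
    counts = Counter(values)
    most_common, freq = counts.most_common(1)[0]
    return {"distinct": len(counts),
            "most_frequent": most_common,
            "frequency": freq / len(values)}

print(column_stats(["US", "US", "UK", "DE", "US"]))
# {'distinct': 3, 'most_frequent': 'US', 'frequency': 0.6}
```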
“Let’s say you’re ready to import some data. You want to upload a .txt file or paste in a big table of data. Once you do that, Smart Cleanup will use AI to detect this and do things like trim whitespace and apply number, currency, and date formatting,” Weber said.
One of Smart Cleanup’s more powerful features is semantic duplicate detection. If there’s a column in a document labeled “Country” and within that column entities like “USA” and “United States of America,” Smart Cleanup will recognize that those entities refer to the same thing: United States. Reflecting this, it will suggest replacing differently named entities with a standard nomenclature (say, “United States”) to eliminate duplicates.
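The normalization step can be sketched as an alias-to-canonical mapping; here a hand-coded table stands in for the Knowledge Graph lookups Sheets actually uses:

```python
# Hypothetical sketch of semantic duplicate detection: map alias
# spellings to one canonical entity so duplicates collapse. The
# alias table below is hard-coded for illustration.
CANONICAL = {
    "usa": "United States",
    "u.s.": "United States",
    "united states": "United States",
    "united states of america": "United States",
}

def normalize(column):
    return [CANONICAL.get(v.strip().lower(), v) for v in column]

col = ["USA", "United States of America", "France"]
print(normalize(col))  # ['United States', 'United States', 'France']
```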
Weber says that the AI models underpinning Smart Cleanup were trained on large data sets from Sheets containing anonymized and aggregated information, and that they continue to improve over time as people interact with Smart Cleanup and either accept or reject changes. These models, which were developed using Google’s TensorFlow machine learning framework, only trigger suggestions when they reach a certain confidence threshold. That’s to prevent unwelcome or erroneous recommendations from popping up in users’ feeds.
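The gating behavior Weber describes reduces to a simple filter over scored candidates; the threshold value and suggestion strings here are invented, since Google hasn’t published them:

```python
# Minimal sketch of confidence-threshold gating: only surface a
# suggestion when the model's score clears the bar, so low-quality
# recommendations never reach the user.
THRESHOLD = 0.9  # hypothetical value

def suggestions_to_show(candidates):
    return [text for text, confidence in candidates
            if confidence >= THRESHOLD]

candidates = [("remove 12 duplicate rows", 0.97),
              ("trim whitespace in column B", 0.95),
              ("reformat column C as dates", 0.55)]
print(suggestions_to_show(candidates))
# ['remove 12 duplicate rows', 'trim whitespace in column B']
```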
“We try to err on the side of accuracy,” Weber said. “We look at things like the rate of acceptance to make sure that the acceptance rate of these features is high. If that drops below a baseline value, that means people aren’t finding value — that these things aren’t correct. And so we try to make sure that we’re giving high-quality suggestions … Much of our time spent is optimizing for when to show things and, just as importantly, when not to show things because we don’t want to slow users down more to make them frustrated.” Smart Cleanup’s models also draw on the Google Knowledge Graph, the knowledge base Google uses to enhance its services with information gathered from a range of web sources. Its data is retrieved from the CIA World Factbook, Wikidata, and Wikipedia, among other sources, and it spans over 500 billion facts on more than 5 billion entities.
“Smart Cleanup uses the Knowledge Graph … for semantic duplicates so it can figure out when people are typing, for example, different abbreviations for a state, country, or company. The data sets allow it to figure out that these are often the same thing and suggest replacing them with a consistent piece of text,” Weber said.
Weber was coy when asked what the future might hold for Smart Cleanup and Google Sheets broadly, but he asserted that spreadsheets are becoming more capable than they used to be thanks in part to AI. “Today, many people use spreadsheets, but they only use a very small percentage of the true power behind the spreadsheets … So I think there’s a huge opportunity for us to think about how we expose that power to beginner users and how we democratize data analysis so we don’t have users feeling like they have to read a book on how to become a spreadsheet expert … There’s a whole host of things we’re thinking about investing in to make sure that anyone regardless of skill set can get a ton of value out of sheets,” Weber said.
"
|
4,072 | 2,020 |
"Google's Cloud TPUs now better support PyTorch | VentureBeat"
|
"https://venturebeat.com/ai/googles-cloud-tpus-now-better-support-pytorch-via-pytorch-xla"
|
"Google’s Cloud TPUs now better support PyTorch
[Image: Tensor processing units (TPUs) in one of Google's data centers.]
In 2018, Google introduced accelerated linear algebra (XLA), an optimizing compiler that speeds up machine learning models’ operations by combining what used to be multiple kernels into one. (In this context, “kernels” are the compiled routines that execute individual operations on an accelerator.) While XLA supports processor and graphics card hardware, it also runs on Google’s proprietary tensor processing units (TPUs) and was instrumental in bringing TPU support to Facebook’s PyTorch AI and machine learning framework. As of today, PyTorch/XLA support for Cloud TPUs — Google’s managed TPU service — is now generally available, enabling PyTorch users to take advantage of TPUs using first-party integrations.
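The fusion idea can be illustrated in plain Python (a hand-written analogy, not XLA output): two elementwise kernels that each traverse the data become a single pass with no intermediate buffer:

```python
# Unfused: two "kernels", each making a full pass over the data,
# with a temporary array written and read back in between.
def unfused(xs):
    tmp = [max(v, 0.0) for v in xs]   # kernel 1: ReLU
    return [v * 2.0 for v in tmp]     # kernel 2: scale

# Fused: one pass applies both operations per element, which is
# the kind of transformation an optimizing compiler performs.
def fused(xs):
    return [max(v, 0.0) * 2.0 for v in xs]

x = [-1.0, 0.5, 3.0]
print(fused(x))  # [0.0, 1.0, 6.0]
assert unfused(x) == fused(x)
```

The payoff on real hardware is fewer memory round-trips and kernel launches, not a change in the computed result.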
Google’s TPUs are application-specific integrated circuits (ASICs) developed specifically to accelerate AI. They’re liquid-cooled and designed to slot into server racks; deliver up to 100 petaflops of compute; and power Google products like Google Search, Google Photos, Google Translate, Google Assistant, Gmail, and Google Cloud AI APIs. Google announced the third generation at its annual I/O developer conference in 2018 and in July took the wraps off its successor, which is in the research stage.
Google and Facebook say PyTorch/XLA — a Python package that uses XLA to connect PyTorch and TPUs — represents two years of work. According to the companies, PyTorch/XLA runs most standard PyTorch programs with minimal modifications, falling back to processors to execute operations unsupported by TPUs. With the help of the reports PyTorch/XLA generates, PyTorch developers can find bottlenecks and adapt programs to run on Cloud TPUs.
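The fallback-and-report behavior can be sketched as a dispatch table plus a counter. This is hypothetical: PyTorch/XLA’s real dispatcher is far more involved, and its actual metrics report lives in `torch_xla.debug.metrics`:

```python
from collections import Counter

# Ops with a TPU lowering run there; anything else falls back to
# the processor, and a counter records it so developers can spot
# bottlenecks. The op names below are illustrative.
TPU_LOWERED = {"matmul", "add", "relu"}
fallback_counts = Counter()

def run_op(name):
    if name in TPU_LOWERED:
        return f"{name}: TPU"
    fallback_counts[name] += 1  # processor fallback, worth reporting
    return f"{name}: CPU fallback"

for op in ["matmul", "nonzero", "relu", "nonzero"]:
    run_op(op)
print(fallback_counts)  # Counter({'nonzero': 2})
```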
Google says the Allen Institute for AI recently used PyTorch/XLA on Cloud TPUs across several projects, including one exploring how to add a visual component to language models to improve their understanding capabilities.
Alongside PyTorch/XLA, Google and Facebook today debuted tools to facilitate continuous AI model testing, which they say they’ve helped the PyTorch Lightning and Hugging Face teams use with Cloud TPUs. Google and Facebook also released a new image — Deep Learning VM — that has PyTorch/XLA preinstalled, along with PyTorch 1.6.
PyTorch, which Facebook publicly released in October 2016, is an open source library based on Torch, a scientific computing framework and script language that is in turn based on the Lua programming language. While TensorFlow has been around slightly longer (since November 2015), PyTorch continues to see rapid uptake in the data science and developer communities. Facebook recently revealed that in 2019 the number of contributors to the platform grew more than 50% year-over-year to nearly 1,200.
Analysis conducted by the Gradient found that every major AI conference in 2019 had a majority of papers implemented in PyTorch. And O’Reilly noted that PyTorch citations in papers grew by more than 194% in the first half of 2019 alone.
Unsurprisingly, a number of leading machine learning software projects are built on top of PyTorch, including Uber’s Pyro and HuggingFace’s Transformers. Software developer Preferred Networks joined the ranks recently with a pledge to move from AI framework Chainer to PyTorch in the near future.
"
|
4,073 | 2,020 |
"Coralogix raises $25 million to parse software logs with AI | VentureBeat"
|
"https://venturebeat.com/ai/coralogix-raises-25-million-to-parse-software-logs-with-ai"
|
"Coralogix raises $25 million to parse software logs with AI
Coralogix , which analyzes software logs with AI, today announced $25 million in new funding and launched a real-time analytics solution that allows customers to pay according to data priority instead of volume. This allows them to get queries, alerts, and machine learning capabilities without using storage.
About 50% of logging statements don’t include any information about critical things like variable state at the time of an error, according to GitHub and OverOps surveys.
That may be why developers spend an estimated half of their time on troubleshooting and bug-fixing.
Founded in 2014, San Francisco-based Coralogix provides AI analytics solutions for a host of software development challenges. Its suite automatically clusters log records back to their patterns and identifies connections among those patterns, forming baseline flows for comparison and future study. Scaling from hundreds to millions of logs — with integrations for popular languages and platforms like Docker, Python, Heroku, .NET, Kubernetes, and Java — Coralogix spotlights anomalies and affords developers access to a suite of identification, visualization, and remediation tools.
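The clustering step resembles log templating. A rough sketch (Coralogix’s actual algorithm is proprietary) masks variable tokens, then groups records by the resulting pattern:

```python
import re
from collections import defaultdict

# Mask variable tokens so records collapse back to their patterns.
def template(line):
    line = re.sub(r"\b\d+\b", "<NUM>", line)           # numbers
    line = re.sub(r"\b[0-9a-f]{8,}\b", "<HEX>", line)  # ids/hashes
    return line

logs = [
    "user 101 logged in",
    "user 202 logged in",
    "payment 17 failed",
]
clusters = defaultdict(list)
for line in logs:
    clusters[template(line)].append(line)
print(dict(clusters))
# {'user <NUM> logged in': ['user 101 logged in', 'user 202 logged in'],
#  'payment <NUM> failed': ['payment 17 failed']}
```

Once records are grouped this way, a baseline of pattern frequencies can be learned, and deviations from that baseline flagged as anomalies.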
Coralogix is hosted as an Amazon Web Services app, and it hooks into Jenkins and other popular continuous integration/continuous delivery tools to ingest updates pushed to production systems. A query in Coralogix pulls up grouped results — highlighting when and where something occurred, any associated parameters, and the total percentage of those occurrences within the logs.
It also creates “component-level” insights from log data, in part by applying machine learning to software releases to spot quality issues. The service can enrich weblogs with IP blacklists to identify suspicious activity while issuing alerts when errors or critical log entries occur. In addition, Coralogix has an integrated security information and event management and intrusion detection system that taps machine learning to pinpoint anomalies within network packets, server events, and audit logs.
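Blacklist enrichment of weblogs reduces to a set-membership check per record. A minimal sketch, with invented field names and documentation-range IP addresses:

```python
# Hypothetical blacklist; real feeds are large and updated often.
BLACKLIST = {"203.0.113.7", "198.51.100.9"}

def enrich(records):
    # Tag each request whose source IP appears on the blacklist.
    for rec in records:
        rec["suspicious"] = rec["ip"] in BLACKLIST
    return records

logs = [{"ip": "203.0.113.7", "path": "/login"},
        {"ip": "192.0.2.1", "path": "/home"}]
print(enrich(logs))
```

Downstream alerting can then trigger on the enriched flag instead of re-checking raw addresses at query time.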
Coralogix claims to analyze over 1 million events per second in real time. Moreover, the company says it has automatically benchmarked over 100,000 software versions to date.
The log management market is expected to reach $1.2 billion by 2022, according to Research and Markets, and Coralogix isn’t the only one leveraging AI to surface abnormalities. Mountain View-based LogDNA raised $25 million last December to further develop its AI-powered tools that surface data to mitigate outages. And Anodot, which is based in Israel, claims it analyzed over 5.2 billion data points per day over six months to launch its log-monitoring platform.
But Coralogix — which has 60 employees and expects to reach 100 by the end of the year — has already managed to grow its client base to over 1,800 brands, among them Lufthansa, BioCatch, Fiverr, KFC, and Caesars Entertainment. (500 customers came onboard in the last six months alone.) In August, it expanded to India with an onsite team to offer customers based in the country regional server and data storage capabilities.
This latest investment brings Coralogix’s total raised to $41.2 million. New investors Red Dot Capital partners and O.G. Tech Partners co-led the round, with participation from Aleph, StageOne Ventures, Janvest Capital Partners, and 2B Angels.
"
|
4,074 | 2,020 |
"Baidu's smart home group seeks to raise capital at a $2.9 billion valuation | VentureBeat"
|
"https://venturebeat.com/ai/baidus-smart-home-group-seeks-to-raise-capital-at-a-2-9-billion-valuation"
|
"Baidu’s smart home group seeks to raise capital at a $2.9 billion valuation
Baidu today announced that it will seek to raise funding for its Smart Living Business (SLG), the internal group that maintains Baidu’s DuerOS voice platform and DuerOS-powered smart devices. Baidu expects a series A funding round valuing SLG at approximately RMB 20 billion ($2.9 billion) to close in Q4 2020, with definitive agreements from CPE, Baidu Capital, and IDG Capital. Baidu will hold super-voting rights in SLG and will continue to consolidate SLG’s financial results as a majority shareholder.
The pandemic has supercharged voice platform usage, which was already on an upswing.
According to a study by NPR and Edison Research, the percentage of voice-enabled device owners who use commands at least once a day rose between the beginning of 2020 and the start of April. Just over a third of smart speaker owners say they listen to more music, entertainment, and news from their devices than they did before, and owners report requesting an average of 10.8 tasks per week from their assistant this year compared with 9.4 different tasks in 2019.
DuerOS, a suite of tools developers can use to plug Baidu’s voice platform into speakers, refrigerators, washing machines, infotainment systems, and set-top boxes, hasn’t quite reached the storied heights of rivals like Amazon’s Alexa Voice Service.
But Baidu, which claims DuerOS now has over 4,000 third-party apps, continues to work with heavy hitters like Huawei, Vivo, and Oppo to build the framework into future flagship devices. The company has partnerships with automakers like BMW, Daimler, Ford, Hyundai, and Kia, as well as with hotel chains such as InterContinental. Just this summer, Baidu inked a strategic agreement with Midea — China’s largest smart device manufacturer with 70 million smart home appliances on the market — to sell bundles of appliances with Baidu-powered products.
During its Baidu World 2020 conference earlier this month, Baidu said over 60 automakers and more than 40,000 developers are working to integrate DuerOS with their products. As of March, DuerOS was handling 6.5 billion voice queries per month and 3.3 billion from Baidu’s Xiaodu smart speakers and displays alone. As of July 2019, the platform was on 400 million devices, in 500 vehicle models (and over a million vehicles), and in over 100,000 hotel rooms.
Baidu recently detailed DuerOS 6.0, which brings support for low-power voice processing chips and neural beamforming, a technique that employs algorithms to amplify the sound recorded by microphone arrays. Voice recognition error rates are now 46% lower compared with DuerOS 5.0, Baidu claims, and DuerOS 6.0 incorporates a more efficient text-to-speech model (WaveRNN).
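Baidu hasn’t published the internals of its neural beamformer, but the classical delay-and-sum technique underlying most beamformers is easy to sketch: each microphone channel is shifted to undo its arrival delay, then the channels are averaged so the target source adds coherently while uncorrelated noise partially cancels. A minimal NumPy sketch (integer-sample delays and a made-up test tone; this illustrates the general technique, not Baidu's implementation):

```python
import numpy as np

def delay_and_sum(signals: np.ndarray, delays_samples: np.ndarray) -> np.ndarray:
    """Delay-and-sum beamformer for an n-mic array.

    signals: array of shape (n_mics, n_samples), one row per microphone.
    delays_samples: per-mic arrival delay, in whole samples, to undo.
    Note: np.roll wraps around at the edges, which is fine for this toy example.
    """
    aligned = np.stack(
        [np.roll(sig, -int(d)) for sig, d in zip(signals, delays_samples)]
    )
    return aligned.mean(axis=0)

# Two mics hear the same tone, the second delayed by 3 samples.
tone = np.sin(np.linspace(0, 4 * np.pi, 200))
mics = np.stack([tone, np.roll(tone, 3)])
beam = delay_and_sum(mics, np.array([0, 3]))  # realigns and averages back to the tone
```

A neural variant like Baidu's replaces the fixed delays with weights learned by a network, but the align-then-combine structure is the same.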
Baidu also recently unveiled Xiaodu Pods, dual-microphone earbuds that last up to 28 hours on a single charge. Xiaodu Pods have a noise-canceling feature and can translate between several languages. Via voice command, they relay timely information like the weather, turn-by-turn directions, answers to math problems, and local news. A special Wandering Earth mode available in English and Chinese allows two users wearing one earbud each to translate conversations into their preferred language in real time.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
4,075 | 2,020 |
"As AI chips improve, is TOPS the best way to measure their power? | VentureBeat"
|
"https://venturebeat.com/ai/as-ai-chips-improve-is-tops-the-best-way-to-measure-their-power"
|
"As AI chips improve, is TOPS the best way to measure their power? "AI engines" are now being marketed as standout features of smartphones, laptops, and tablets, with performance measured by TOPS.
Once in a while, a young company will claim it has more experience than would be logical — a just-opened law firm might tout 60 years of legal experience, but actually consist of three people who have each practiced law for 20 years. The number “60” catches your eye and summarizes something, yet might leave you wondering whether three lawyers with 20 years apiece are really equivalent to one lawyer with 60 years of experience. There’s actually no universally correct answer; your choice should be based on the type of services you’re looking for. A single lawyer might be superb at certain tasks and not great at others, while three lawyers with solid experience could canvass a wider collection of subjects.
If you understand that example, you also understand the challenge of evaluating AI chip performance using “TOPS,” a metric that means trillions of operations per second, or “tera operations per second.” Over the past few years, mobile and laptop chips have grown to include dedicated AI processors, typically measured by TOPS as an abstract measure of capability.
Apple’s A14 Bionic brings 11 TOPS of “machine learning performance” to the new iPad Air tablet, while Qualcomm’s smartphone-ready Snapdragon 865 claims a faster AI processing speed of 15 TOPS.
But whether you’re an executive considering the purchase of new AI-capable computers for an enterprise or an end user hoping to understand just how much power your next phone will have, you’re probably wondering what these TOPS numbers really mean. To demystify the concept and put it in some perspective, let’s take a high-level look at the concept of TOPS, as well as some examples of how companies are marketing chips using this metric.
TOPS, explained Though some people dislike the use of abstract performance metrics when evaluating computing capabilities, customers tend to prefer simple, seemingly understandable distillations to the alternative, and perhaps rightfully so. TOPS is a classic example of a simplifying metric: It tells you in a single number how many computing operations an AI chip can handle in one second — in other words, how many basic math problems a chip can solve in that very short period of time. While TOPS doesn’t differentiate between the types or quality of operations a chip can process, if one AI chip offers 5 TOPS and another offers 10 TOPS, you might correctly assume that the second is twice as fast as the first.
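To make the metric concrete: peak TOPS figures are usually derived from the count of multiply-accumulate (MAC) units and the clock speed, with each MAC counted as two operations (a multiply plus an add). A back-of-the-envelope calculator; the 4,096-MAC, 1 GHz NPU in the example is hypothetical, and real vendors may count operations differently:

```python
def peak_tops(mac_units: int, clock_hz: float, ops_per_mac: int = 2) -> float:
    """Estimate peak TOPS: each MAC unit counts as 2 ops (multiply + add) per cycle."""
    return mac_units * ops_per_mac * clock_hz / 1e12

# A hypothetical NPU with 4,096 MAC units clocked at 1 GHz:
# 4096 * 2 * 1e9 / 1e12 = 8.192 TOPS
print(peak_tops(4096, 1e9))  # → 8.192
```

The calculation also shows why the number is so easy to inflate: doubling the MAC count or the clock doubles TOPS on paper, regardless of whether real workloads can keep those units fed.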
Yes, holding all else equal, a chip that does twice as much in one second as last year’s version could be a big leap forward. As AI chips blossom and mature, the year-to-year AI processing improvement may even be as much as nine times, not just two. But from chip to chip, there may be multiple processing cores tackling AI tasks, as well as differences in the types of operations and tasks certain chips specialize in. One company’s solution might be optimized for common computer vision tasks, or able to compress deep learning models, giving it an edge over less purpose-specific rivals; another may just be solid across the board, regardless of what’s thrown at it. Just like the law firm example above, distilling everything down to one number removes the nuance of how that number was arrived at, potentially distracting customers from specializations that make a big difference to developers.
Simple measures like TOPS have their appeal, but over time, they tend to lose whatever meaning and marketing appeal they might initially have had. Video game consoles were once measured by “bits” until the Atari Jaguar arrived as the first “64-bit” console, demonstrating the foolishness of focusing on a single metric when total system performance was more important.
Sony’s “32-bit” PlayStation ultimately outsold the Jaguar by a 400:1 ratio, and Nintendo’s 64-bit console by a 3:1 ratio, all but ending reliance on bits as a proxy for capability. Megahertz and gigahertz, the classic measures of CPU speeds, have similarly become less relevant in determining overall computer performance in recent years.
Apple on TOPS Apple has tried to reduce its use of abstract numeric performance metrics over the years: Try as you might, you won’t find references on Apple’s website to the gigahertz speeds of its A13 Bionic or A14 Bionic chips, nor the specific capacities of its iPhone batteries — at most, it will describe the A14’s processing performance as “mind-blowing,” and offer examples of the number of hours one can expect from various battery usage scenarios. But as interest in AI-powered applications has grown, Apple has atypically called attention to how many trillion operations its latest AI chips can process in a second, even if you have to hunt a little to find the details.
Apple’s just-introduced A14 Bionic chip will power the 2020 iPad Air, as well as multiple iPhone 12 models slated for announcement next month. At this point, Apple hasn’t said a lot about the A14 Bionic’s performance, beyond noting that it enables the iPad Air to be faster than its predecessor and has more transistors inside. But it offered several details about the A14’s “next-generation 16-core Neural Engine,” a dedicated AI chip with 11 TOPS of processing performance — a “2x increase in machine learning performance” over the A13 Bionic, which has an 8-core Neural Engine with 5 TOPS.
Previously, Apple noted that the A13’s Neural Engine was dedicated to machine learning, assisted by two machine learning accelerators on the CPU, plus a Machine Learning Controller to automatically balance efficiency and performance. Depending on the task and current system-wide allocation of resources, the Controller can dynamically assign machine learning operations to the CPU, GPU, or Neural Engine, so AI tasks get done as quickly as possible by whatever processor and cores are available.
Some confusion comes in when you notice that Apple is also claiming a 10x improvement in calculation speeds between the A14 and A12. That appears to be referring specifically to the machine learning accelerators on the CPU, which might be the primary processor of unspecified tasks or the secondary processor when the Neural Engine or GPU are otherwise occupied. Apple doesn’t break down exactly how the A14 routes specific AI/ML tasks, presumably because it doesn’t think most users care to know the details.
Qualcomm on TOPS Apple’s “tell them only a little more than they need to know” approach contrasts mightily with Qualcomm’s, which generally requires both engineering expertise and an atypically long attention span to digest. When Qualcomm talks about a new flagship-class Snapdragon chipset, it’s open about the fact that it distributes various AI tasks to multiple specialized processors, but provides a TOPS figure as a simple summary metric. For the smartphone-focused Snapdragon 865 , that AI number is 15 TOPS, while its new second-generation Snapdragon 8cx laptop chip promises 9 TOPS of AI performance.
The confusion comes in when you try to figure out how exactly Qualcomm comes up with those numbers. Like prior Snapdragon chips, the 865 includes a “Qualcomm AI Engine” that aggregates AI performance across multiple processors ranging from the Kryo CPU and Adreno GPU to a Hexagon digital signal processor (DSP). Qualcomm’s latest AI Engine is “fifth-generation,” including an Adreno 650 GPU promising 2x higher TOPS for AI than the prior generation, plus new AI mixed precision instructions, and a Hexagon 698 DSP claiming 4x higher TOPS and a compression feature that reduces the bandwidth required by deep learning models. It appears that Qualcomm is adding the separate chips’ numbers together to arrive at its 15 TOPS total; you can decide whether you prefer getting multiple diamonds with a large total carat weight or one diamond with a similar but slightly lower weight.
If those details weren’t enough to get your head spinning, Qualcomm also notes that the Hexagon 698 includes AI-boosting features such as tensor, scalar, and vector acceleration, as well as the Sensing Hub, an always-on processor that draws minimal power while awaiting either camera or voice activation. These AI features aren’t necessarily exclusive to Snapdragons, but the company tends to spotlight them in ways Apple does not, and its software partners — including Google and Microsoft — aren’t afraid to use the hardware to push the edge of what AI-powered mobile devices can do. While Microsoft might want to use AI features to improve a laptop’s or tablet’s user authentication, Google might rely on an AI-powered camera to let a phone self-detect whether it’s in a car, office, or movie theater and adjust its behaviors accordingly.
Though the new Snapdragon 8cx has fewer TOPS than the 865 — 9 TOPS, compared with the less expensive Snapdragon 8c (6 TOPS) and 7c (5 TOPS) — note that Qualcomm is ahead of the curve just by including dedicated AI processing functionality in a laptop chipset, one benefit of building laptop platforms upwards from a mobile foundation. This gives the Snapdragon laptop chips baked-in advantages over Intel processors for AI applications, and we can reasonably expect to see Apple use the same strategy to differentiate Macs when they start moving to “Apple Silicon” later this year. It wouldn’t be surprising to see Apple’s first Mac chips stomp Snapdragons in both overall and AI performance, but we’ll probably have to wait until November to hear the details.
Huawei, Mediatek, and Samsung on TOPS There are options beyond Apple’s and Qualcomm’s AI chips. China’s Huawei, Taiwan’s Mediatek, and South Korea’s Samsung all make their own mobile processors with AI capabilities.
Huawei’s HiSilicon division made flagship chips called the Kirin 990 and Kirin 990 5G, which differentiate their Da Vinci neural processing units with either two- or three-core designs. Both Da Vinci NPUs include one “tiny core,” but the 5G version jumps from one to two “big cores,” giving the higher-end chip extra power. The company says the tiny core can deliver up to 24 times the efficiency of a big core for AI facial recognition, while the big core handles larger AI tasks. It doesn’t disclose the number of TOPS for either Kirin 990 variant. They’ve apparently both been discontinued due to a ban by the U.S. government.
Mediatek’s current flagship , the Dimensity 1000+, includes an AI processing unit called the APU 3.0. Alternately described as a hexa-core processor or a six AI processor solution, the APU 3.0 promises “up to 4.5 TOPS performance” for use with AI camera, AI assistant, in-app, and OS-level AI needs. Since Mediatek chips are typically destined for midrange smartphones and affordable smart devices such as speakers and TVs, it’s simultaneously unsurprising that it’s not leading the pack in performance and interesting to think of how much AI capability will soon be considered table stakes for inexpensive “smart” products.
Last but not least, Samsung’s Exynos 990 has a “dual-core neural processing unit” paired with a DSP, promising “approximately 15 TOPS.” The company says its AI features enable smartphones to include “intelligent camera, virtual assistant and extended reality” features, including camera scene recognition for improved image optimization. Samsung notably uses Qualcomm’s Snapdragon 865 as an alternative to the Exynos 990 in many markets, which many observers have taken as a sign that Exynos chips just can’t match Snapdragons, even when Samsung has full control over its own manufacturing and pricing.
Top of the TOPS Mobile processors have become popular and critically important, but they’re not the only chips with dedicated AI hardware in the marketplace, nor are they the most powerful. Designed for datacenters, Qualcomm’s Cloud AI 100 inference accelerator promises up to 400 TOPS of AI performance with 75 watts of power, though the company uses another metric — ResNet-50 deep neural network processing — to favorably compare its inference performance to rival solutions such as Intel’s 100-watt Habana Goya ASIC (~4x faster) and Nvidia’s 70-watt Tesla T4 (~10x faster). Many high-end AI chipsets are offered at multiple speed levels based on the power supplied by various server-class form factors, any of which will be considerably more than a smartphone or tablet can offer with a small rechargeable battery pack.
Another key factor to consider is the comparative role of an AI processor in an overall hardware package. Whereas an Nvidia or Qualcomm inference accelerator might well have been designed to handle machine learning tasks all day, every day, the AI processors in smartphones, tablets, and computers are typically not the star features of their respective devices. In years past, no one even considered devoting a chip full time to AI functionality, but as AI becomes an increasingly compelling selling point for all sorts of devices, efforts to engineer and market more performant solutions will continue.
Just as was the case in the console and computer performance wars of years past, relying on TOPS as a singular data point in assessing the AI processing potential of any solution probably isn’t wise, and if you’re reading this as an AI expert or developer, you probably already knew as much before looking at this article. While end users considering the purchase of AI-powered devices should look past simple numbers in favor of solutions that perform tasks that matter to them, businesses should consider TOPS alongside other metrics and features — such as the presence or absence of specific accelerators — to make investments in AI hardware that will be worth keeping around for years to come.
"
|
4,076 | 2,020 |
"Applying machine learning to keep employees safe and save lives | VentureBeat"
|
"https://venturebeat.com/ai/applying-machine-learning-to-keep-employees-safe-and-save-lives"
|
"Sponsored Applying machine learning to keep employees safe and save lives Presented by AWS Machine Learning Whether on factory floors, construction sites, or warehouses, accidents have been an ongoing, and sometimes deadly, factor across industries. Add in the pandemic — and an increasing rate and intensity of natural disasters — and the safety of employees and citizens becomes more complicated.
Australian-based Bigmate, a computer vision company focused on enhancing workplace safety, is using machine learning to reduce workplace accidents, help companies detect potentially ill employees as they arrive on site, and aid organizations in the operational management of natural disasters.
Bigmate’s risk management and computer vision expertise combined with their long-term experience in asset management are all supported by their in-depth knowledge of advanced AWS Services to maximize operational turnaround.
“Organizations are deeply concerned about safety, and are looking to what AI and ML can bring to the table, not for the sake of technology but to help improve safety in the workplace through targeted applications with clear benefits,” says Brett Orr, General Manager at Bigmate. “Our engineers’ superpower is using computer vision to identify unsafe situations, and pairing that information with existing and new sensors, to understand where things are, what they’re doing, and if they’re working as they should.” Reducing accidents with a high degree of accuracy Work-related injuries cost the economy $61.8 billion, with the costs borne by both the organization and the worker, making this extremely challenging for both parties. Many of those injuries happen on the factory floor, which is over-represented as a proportion of all work-related injuries, Orr says.
“70% of workplace injuries or deaths happen because of unwanted interactions between heavy vehicles and people,” he says. “Of those 70%, more than 30% are unwanted interactions between forklifts and people.” Organizations have stepped up their Occupational Health & Safety activities to make employees more aware of workplace dangers and improved safety measures, which does help, but accidents can still occur.
Bigmate developed Warny to enhance safety in the workplace and reduce these kinds of accidents. The Warny ecosystem is comprised of three core applications: vehicle collision avoidance, safety zone alerting, and thermal analysis of people and industrial systems on the factory floor.
Developed over a number of years, the solution is built in-house and uses both edge hardware for local performance and privacy as well as AWS services in the cloud. Warny leverages key AWS services such as AWS IoT, AWS IoT Greengrass, and Amazon SageMaker.
Warny uses sophisticated computer vision algorithms to protect people working around dangerous machines — such as forklifts, trucks, or manufacturing machinery. It can detect instances of spontaneous combustion of materials, overheating of equipment, and fires in the workplace, as well as analyze, report on, and alert machine operators in real time about unexpected events, such as a person being in an unsafe area even when not in line of sight of the operator.
Early into installation, one factory in Singapore saw a 22% drop in incidents, an impressive precursor of what was to come once the system was completely installed. Across the board, on average, Bigmate is seeing an 80% drop in incidents in their clients’ work environment.
Using edge-based software and cloud-based services, data is streamed from IoT sensors to the platform, which analyses images and object detection data. They’ve created a neural net that can recognize equipment and people with near-100% accuracy, says Orr.
A significant portion of the solution is the ability to create depth of field on a standard CCTV camera. With that depth information, they can determine position, distance between objects, and speed to begin to audit the environment, understanding the movements of people and forklifts.
Using trajectory data calculated from depth, position, time, and distance, machine learning algorithms then predict an object’s path in near real time. If a collision is imminent, the platform sends out an alert to the employee’s wearable, with only milliseconds of latency thanks to AWS IoT Greengrass. The data is then collected in dashboards to allow organizations to analyse their workplace safety practices.
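Bigmate hasn’t published its prediction models, but the core idea — extrapolate each tracked object’s path and alert when two paths come too close — can be sketched with a deliberately simple constant-velocity model. Every position, speed, and threshold below is an invented illustration, not Bigmate’s actual logic:

```python
import math

def predict_position(pos, vel, dt):
    """Extrapolate a 2D position assuming constant velocity (a deliberately simple model)."""
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)

def collision_alert(p1, v1, p2, v2, horizon=2.0, step=0.1, min_dist=1.5):
    """Return True if two tracked objects are predicted to come within
    min_dist metres of each other at any time over the horizon (seconds)."""
    t = 0.0
    while t <= horizon:
        a = predict_position(p1, v1, t)
        b = predict_position(p2, v2, t)
        if math.dist(a, b) < min_dist:
            return True
        t += step
    return False

# A forklift 5 m from a stationary worker, closing at 3 m/s, triggers an alert:
print(collision_alert((0, 0), (3, 0), (5, 0), (0, 0)))  # → True
```

A production system would use learned motion models and measured uncertainty rather than straight-line extrapolation, but the predict-then-check structure is the same.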
Using AWS Services, Bigmate was able to develop their platform in a little over nine months with a small core development team, and a 15-20% reduction in development resources overall.
“For us, the partnership with AWS is not just important. It’s critical to what we do,” says Orr. “AWS has allowed us to scale infinitely and quickly, allowing us the flexibility to turn applications on a dime, and the ability to work with industry best practices inherent in the products that AWS sends out.” Making it safe to go back to work In a pandemic, the choice between returning to work and staying safe has been particularly challenging. Now, safety measures such as mask and social distancing policies, as well as temperature screening, are allowing businesses to reopen their doors and keep their employees healthy.
Manual temperature screening, however, can be a blow to productivity, causing bottlenecks at entrances and exits, and it doesn’t catch employees who begin to get sick during the day.
Bigmate’s pre-screening solution, Thermy, tackles those issues, using thermal imaging that can immediately detect elevated temperatures of people in real time, at scale, scanning 30 people a second, 500-600 people a minute, plus run 8.3 scans per second to validate its readings.
It can be deployed in any location in an organization — at the entrance of the building or the factory floor, cafeterias, breakrooms, washrooms, and anywhere else employees move throughout their day.
The solution, which is based on the Warny platform and technologies, uses thermal cameras and advanced analytics with machine learning, providing real-time information through dashboards hosted on AWS for remote viewing and trend analysis.
Other thermal solutions only capture skin temperature, which doesn’t accurately diagnose core body temperature. Bigmate’s platform calculates a true representation of a person’s core temperature. It first uses computer vision technology and the data from a thermal camera and an optical camera to isolate the subject’s head, to capture skin temperature even when the subject has a beard, glasses, a hard hat, or other features. A machine learning algorithm can then calculate a representation of the core body temperature, to determine whether the subject has a temperature.
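Bigmate’s core-temperature model is proprietary; one common way to build this kind of correction is to fit a regression from skin and ambient temperature to reference core readings. The least-squares sketch below uses fabricated calibration numbers purely for illustration:

```python
import numpy as np

# Hypothetical calibration data: (skin temp °C, ambient °C) -> reference core temp °C.
skin = np.array([34.1, 34.8, 35.5, 36.2, 36.9])
ambient = np.array([18.0, 20.0, 22.0, 24.0, 26.0])
core = np.array([36.5, 36.7, 36.9, 37.1, 37.3])

# Fit core ≈ a*skin + b*ambient + c by ordinary least squares.
X = np.column_stack([skin, ambient, np.ones_like(skin)])
coef, *_ = np.linalg.lstsq(X, core, rcond=None)

def estimate_core(skin_temp: float, ambient_temp: float) -> float:
    """Map a skin-temperature reading to an estimated core temperature."""
    return float(coef @ np.array([skin_temp, ambient_temp, 1.0]))
```

A real deployment would need far richer calibration data and likely a nonlinear model, but this shows why skin temperature alone is not enough: the ambient term corrects for the environment the camera is reading in.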
The product was originally designed to help stop the spread of flu and other highly contagious diseases being spread throughout an organization. When the pandemic struck, Bigmate was able to deploy the solution immediately to organizations concerned with keeping their workers safe from COVID-19.
“In organizations that have had 50% or 60% positive rates, it’s helping to reduce the spread and hot spots before they happen,” Orr says. “It means businesses can continue to run.” Mitigating natural disasters with ML Bigmate also leverages its computer vision and thermal detection technologies to help state and federal government organizations detect natural disasters, from floods to tsunamis to forest fires, in real time.
They use imaging taken by high-tech cameras sitting on fixed-wing aircraft and helicopters to capture data like the latitude, longitude, time, and the distance a disaster is from landmarks. The technology also calculates information about the chopper’s height, speed, and the focus point of the cameras to understand exactly where the incident is, geographically.
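Turning two latitude/longitude fixes into a distance from a landmark is a standard great-circle calculation; the haversine formula is one common way to do it (the coordinates in the example are just illustrative):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# e.g. distance from an incident near Sydney to a landmark in Melbourne:
d = haversine_km(-33.87, 151.21, -37.81, 144.96)  # roughly 710 km
```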
They send the real-time metadata to the Bureau of Meteorology in order to factor in weather data, such as wind shear, wind direction, rain, and so on. That data is merged with population and location data.
This gives safety and environmental organizations that respond to disasters the information they need to project where resources should be deployed to save lives and property safely and effectively — whether that be to quell a fire, clear out a campground, prepare for worsening conditions, and more.
The social promise of ML and computer vision The promise of machine learning goes beyond business challenges — solutions like these are a demonstration of the impact technology can have on society, tackling health, environment, and safety issues that were previously difficult without the help of AI, machine learning, and new innovation.
And at Bigmate, the work they do makes a difference every day, Orr says. They’ve delivered some important milestones, from the number of accidents their machine learning algorithms have been able to prevent, to the number of outbreaks their computer vision technology has helped to reduce, to the lives they’ve helped emergency services save.
“A lot of the time you can’t point back and see where the work that you’ve done has had a direct impact on people and their lives and families, but we’re able to bring our experience and technology to bear to getting people home safely,” Orr says. “That’s a big one.” Dig deeper : See more ways machine learning is being used to tackle today’s biggest social, humanitarian, and environmental challenges.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected].
"
|
4,077 | 2,020 |
"Why the CyberLEAP act must pass | VentureBeat"
|
"https://venturebeat.com/security/why-the-cyberleap-act-must-pass"
|
"Guest Why the CyberLEAP act must pass
Gameplay and game theory are some of the most valuable tools to teach information security.
Game theory is a branch of mathematics that allows us to reason through cyberattack/defense scenarios without spinning in philosophical circles. It allows you to model probabilities on how someone else will take action and what you should do to counter that action.
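To see the flavor of this reasoning, consider a toy zero-sum game in which a defender chooses which system to harden while an attacker chooses a target. The payoff numbers below are invented purely for illustration; a pure-strategy minimax picks the defense whose worst-case loss is smallest:

```python
# Hypothetical zero-sum game: entries are the defender's loss (= attacker's gain).
payoff = {
    ("patch_web", "attack_web"): 1, ("patch_web", "attack_db"): 8,
    ("patch_db", "attack_web"): 6,  ("patch_db", "attack_db"): 2,
}

def minimax_defense(payoff):
    """Pick the defense that minimizes the worst-case loss over attacker moves."""
    defenses = {d for d, _ in payoff}
    return min(
        defenses,
        key=lambda d: max(v for (dd, _), v in payoff.items() if dd == d),
    )

print(minimax_defense(payoff))  # → "patch_db" (worst case 6, vs. patch_web's 8)
```

A fuller treatment would compute mixed strategies — randomizing between defenses — which is exactly where modeling probabilities about the opponent's moves enters.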
And it’s a critical part of an effective cybersecurity strategy, which is why the U.S. military has run a number of game theory training programs to date.
The All-Army Cyberstakes is a 10-day-long, cybersecurity-based capture-the-flag competition. All members of the military and U.S. government are invited to play, with training as the goal. Other similar but shorter programs have been run, too, featuring attack-and-defend scenarios.
Perhaps the grandest example was the Defense Advanced Research Projects Agency (DARPA) Cyber Grand Challenge in 2016, in which seven teams constructed autonomous systems designed to play an attack and defend-style capture-the-flag without any human intervention.
My team was one of the finalists in that challenge.
The Cybersecurity Competitions to Yield Better Efforts to Research the Latest Exceptionally Advanced Problems (CYBER LEAP) Act of 2020 builds on these existing programs. Sponsored by Senators Roger Wicker, R-Miss., Jacky Rosen, D-Nev., and Cory Gardner, R-Colo., CyberLEAP would instruct the Commerce Secretary to establish national challenges to “achieve high-priority breakthroughs in cybersecurity by 2028” in five areas: the economics of a cyberattack, cyber training, emerging technology, reimagining digital identity, and federal agency resilience.
It would establish a coherent policy toward finding the best cyber talent within the U.S. government. Senator Rosen, a former computer programmer, told NextGov, “Investing in our cybersecurity workforce is vital for our national security and our economic future.”

Unfortunately, the legislation, which passed a committee vote in May, has now stalled on the U.S. Senate floor. It needs to be passed. At a time when there are legitimate security concerns around the upcoming presidential election, our financial institutions, and even our drive to find an effective vaccine for COVID-19, we need a commitment to educating our government employees and officials on best practices for cybersecurity. And what better way to learn than through gamification?

Results from the CyberStakes program have already been beneficial. Former DARPA project manager Frank Pound said that before the military competitions started in 2014, it was hard to find somebody in military leadership who actually knew the low-level details of software exploitation and why it mattered. Or what’s happening in a computer’s memory with buffer overflows. Or how the memory of a program can be manipulated from the outside by an adversary. He said that unless you understand those nuanced problems, it is hard to make good military strategy decisions about how to defend against them.
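Pound's point about buffer overflows can be made concrete with a deliberately simplified simulation. The dictionary "stack frame" below is my own teaching toy, not real machine code: it just shows how an unchecked copy that runs past an 8-byte buffer lands in the adjacent slot where a saved return address would live.

```python
# Toy model of a stack frame: an 8-byte buffer followed by a saved
# return address. This only simulates the memory layout; it is an
# illustration of why unchecked copies are dangerous, not an exploit.

BUF_SIZE = 8

def make_frame(return_addr):
    # Buffer bytes initialized to 0, plus the saved return-address slot.
    return {"buffer": [0] * BUF_SIZE, "return_addr": return_addr}

def unchecked_copy(frame, data):
    """Copy 'data' into the buffer with no bounds check (the bug)."""
    for i, byte in enumerate(data):
        if i < BUF_SIZE:
            frame["buffer"][i] = byte
        else:
            # Bytes past the buffer spill into the adjacent slot --
            # in a real exploit this is where the attacker's address lands.
            frame["return_addr"] = byte

frame = make_frame(return_addr=0x4010)
payload = [0x41] * BUF_SIZE + [0xBAD]   # 8 filler bytes + attacker value
unchecked_copy(frame, payload)
print(hex(frame["return_addr"]))  # 0xbad -- control flow hijacked
```

In a real program, those spilled bytes would redirect execution when the function returns; memory-unsafe languages like C make this possible because nothing checks the copy length.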
So game theory can influence policy decisions. It can highlight where we can place incentives that may not be obvious and whether those incentives actually change the game we (think) we’re playing.
In cyber, you don’t have certainty about what exploits your adversary knows, whether they are using an exploit they already disclosed, or whether your zero-day is really a zero-day (again, no visibility). So it’s critical that our military gains experience in navigating attack and defense on the cyber front through effective training.
It’s critical that the Senate move the CyberLEAP bill forward to ensure we have the cybersecurity skills we need to keep the country protected.
David Brumley is CEO and co-founder of ForAllSecure and a CMU professor (currently on leave).
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
4,078 | 2,020 |
"Wasteland 3 review-in-progress -- Colder, but more intimate | VentureBeat"
|
"https://venturebeat.com/pc-gaming/wasteland-3-review"
|
"Wasteland 3 review-in-progress — Colder, but more intimate
Wasteland 3's battlefields have cover that you can destroy -- and take advantage of.
I think someone at InXile Entertainment has watched Honey, I Shrunk the Kids a few too many times.
Wasteland 3 has grenades that shrink your foes. Enemies have them, too. The studio’s previous role-playing game, The Bard’s Tale IV: Barrows Deep , has spells that turn the baddies into pipsqueaks. And in that one, they have tiny voices and look ridiculous in combat. I shrunk my foes in so many battles.
This time, the enemy shrunk me.
I was storming an apartment, where the landlord thought someone was brewing bombs. Turns out they were creating clones — and these copies were insane. So when I blew up the door and stormed in, they responded with stark-raving mad shrieks and by lobbing shrink grenades.
This encounter is just one example of the wild, wooly West that awaits you in the postapocalyptic Colorado of Wasteland 3, InXile’s latest entry in the series. It’s on PC, PlayStation 4, and Xbox One, and it’s the final RPG from the studio’s independent days of funding projects on Kickstarter, before Microsoft acquired it in 2018. Deep Silver’s the publisher.
It picks up where Wasteland 2’s story leaves off. The Arizona Rangers are in shambles after defeating the Cochise AI and setting off a nuke. But a powerful person reaches out to offer you succor — the Patriarch, the leader of Colorado. He needs your help with his kids, all of whom want to take over his rule of Colorado Springs and the surrounding area. And all of them, to paraphrase Hank Hill, just ain’t right. From there, you clean up the mess, build up your forces, and learn just what’s happening in the Rockies these days.
Here’s what I think after more than 10 hours with Wasteland 3.
Cleaner tactics
My everlasting takeaway from Wasteland 2 is that despite it being a fantastic RPG, the turn-based combat is too challenging at lower levels. The foes come across as too powerful to me.
Wasteland 3 feels better-tuned at earlier levels. While I did lose some troops a few times, I didn’t feel like I was a group of grunts storming no-man’s land getting mowed down by machine guns. Cover plays into this. InXile has thrown cover in strategic points around battlefields in a manner that both makes sense and is exploitable — I’ve been able to find some fantastic killzones thanks to the layout of some encounters. You can also destroy a great deal of cover. Chewing away at a wall with your machine gun makes an enemy more vulnerable without forcing you to expose your adventurers.
I dig one encounter in the remains of a mall parking lot. Your party starts in a chokepoint, which widens as you make progress. The baddies are on a higher level, with two ramps on either side of the map leading to their perch. Thankfully, you have a great deal of cover (concrete barriers, empty oil drums, and more) to help you advance as the villains fire away. You also face some foes on the ground below, but a few well-placed rockets, blowing up an oil drum, or other strategies pay off. By the time I had chewed my way through the henchmen, I got to the main baddie with three characters still alive, and the dead one happened only because I left her exposed to melee attacks.
What’s nice about this setup is that I was able to take advantage of cover, lay down fire to protect my close-in fighters, and take out the enemy on the top level, all while feeling like I had the tools to do so. I don’t remember any of the battles in Wasteland 2 feeling this tactical.
It’s hilarious
Above: This feels like a Jonathan Coulton song come to life.
InXile peppers Wasteland 3 with exploding pigs, bad accents, silly puns, and weird foes with bad outfits (along with the shrink grenades). This is the sort of humor I live for in RPGs (and in books, TV, and movies as well. I’m a cheesy fella). I’m long past the “the world is ending, we have to save it” stories that emphasize drama over laughs.
The Bizarre stands out as a bastion of this silliness. You end up clearing out the lower levels of a mall for this area’s capo, Flab the Inhaler. He’s a grotesque, obese play on a vampire lord. He speaks in a horrible accent out of the worst bloodsucker flicks. It turns out this place is under siege by a gang that dresses up as clowns and turns pigs into bombs. As you go down into the tunnels underneath the mall, you find not just a silly band of baddies but also a good dungeon to crawl through. And that’s what I love about the humor; it’s important, but it doesn’t detract from the game, its mechanics, and its design.
My favorite laugh so far might be MacTavish. He affects a Scottish persona, with an accent cheesier than Scotty’s and an outfit to match. You end up finding a cassette that shows it’s all a fake — the recording has him practicing the accent, and when he fails, you hear him complaining in an even worse Texan drawl. What’s more, after you figure this out and capture him, he sticks to it in jail, even after you both know the accent is a fake.
Different stakes
In Wasteland 2, you end up dealing with an evil AI and another nuke. So far, your adventures in Colorado feel more homey. You’re taking care of bad people and making it safer for those who just want to live in peace. You’re fighting The Patriarch’s wild kids. Yes, some of it is a bit over-the-top and gross, and I do have a great deal more of the story to dig into. Yet rebuilding the Rangers and making Colorado safer just feels comforting.
This doesn’t mean Wasteland 3 lacks moral choices. I’ve already had to decide if I want to side with corrupt cops or gangsters that appear to have a good heart. Another choice involves killing a man — and those who helped him — after you learn he let a bloody gang into Colorado Springs. As a kicker, you also deal with whether you want to help one of your companions take vengeance on them for that gang slaughtering her family. Sure, we’re not dealing with nukes, but the stakes are still there. They’re more personal this time around.
Above: This might be the first time in my decades of playing RPGs that I’ve found cat litter loot.
Now, I may find out that one of these power-mad kids is going to blow it all up with a nuke. That certainly would change the stakes. But so far, I like just how much of Wasteland 3 is about dealing with the locals and their problems.
Final thoughts
Wasteland 3 feels different from both its predecessor and other RPGs coming out right now, such as Pathfinder: Kingmaker’s definitive edition. It still has the big scope we’ve come to expect from InXile, but it feels more intimate as well. It’s more welcoming than other games.
And so far, it doesn’t have that freight train momentum you sometimes feel from other RPGs, where the story gains so much steam you feel like you need to blitz through it, missing out on the sidequests and other tidbits that give games character. Wasteland 3 isn’t asking me to hurry up; it wants me to stay awhile, crack open a beer, and take it on at my pace. And that might be its greatest strength so far.
And nothing can shrink that.
Score: Pending
Wasteland 3 is out now for PC, PlayStation 4, and Xbox One. It’s also a part of Xbox Game Pass. Deep Silver sent GamesBeat a Steam code for the purposes of this review.
GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it.
Join the GamesBeat community! Enjoy access to special events, private newsletters and more.
"
|
4,079 | 2,020 |
"Fall Guys Season 2 gets new medieval games and cosmetics | VentureBeat"
|
"https://venturebeat.com/pc-gaming/fall-guys-gonna-go-medieval-on-your-buns"
|
"Fall Guys Season 2 gets new medieval games and cosmetics
Fall Guys is getting all-new content as part of its upcoming Season 2.
Developer Mediatonic revealed the medieval-themed update as part of Gamescom’s Opening Night Live event today.
The studio didn’t reveal a release date, but it said it’s putting the “finishing touches” on the cosmetics and all-new minigames. So it should launch relatively soon.
In the trailer for Fall Guys Season 2, Mediatonic showed off some of the new stuff players can look forward to. This includes medieval-themed games featuring drawbridges and castles. In one of the games, players must work together to move wooden platforms into position to climb up to the top of a massive structure. In another, players move ramps around to jump through hoops to score points. As for the cosmetics, players can expect to unlock more themed costumes. Mediatonic showed off knights, vikings, dragons, and wizards in its trailer.
Mediatonic didn’t explain how it will release this content. The new stages are likely free updates, but the costumes could come as part of a battle pass or some other form of microtransaction. But it’s clear that the studio knows it needs to keep things feeling fresh for Fall Guys to maintain its huge momentum. It’s still one of the most popular games in the world right now, and new levels and cosmetics to unlock could ensure that players keep coming back for weeks and then months.
"
|
4,080 | 2,020 |
"Crusader Kings III review -- King me | VentureBeat"
|
"https://venturebeat.com/pc-gaming/crusader-kings-iii-review"
|
"Crusader Kings III review — King me
My wife, my strength.
As I took my final breaths, I wondered why my son hated me so. I provided the best education, sending him to the imperial court for tutelage. I found him a spouse who opened doors for both of us. I doubled the size of the realm he would inherit. And yet, he killed me in his thirst for power.
Crusader Kings III isn’t a grand strategy game. It’s really a role-playing game in which you play the sovereign of a realm. But where its genius lies isn’t in just playing one ruler — when one dies, you take the mantle of your heir. This doesn’t just keep the game going; it allows you to continue role-playing your realm, but from a different perspective.
So when the son deposed my first ruler, I started playing as him … until my uncle then led a revolt of the realm’s most powerful vassals against my new character. And this continues until you either play yourself into a corner and quit, hit the end date (you start in 1066 AD and play until 1453), or start a new game. It comes to PC on September 1 from Paradox Interactive and Paradox Development Studio.
And like most Paradox games , I expect Crusader Kings III to get even better with age.
But it’s damn good already. I started by telling you about how I felt after 43.3 hours over a few weeks, and after another 10 hours, I’ve decided that this is the best game Paradox has ever made.
It’s an RPG, not a strategy game
For a long time, I struggled with Crusader Kings II. I would sit there and start plotting out my strategy, approaching it as I would a Civilization or Europa Universalis (another Paradox staple). But a few years ago, I stumbled upon a realization many already knew: You’re playing as the rulers, not the realm. It’s as much about them as it is about growing a duchy or kingdom. Once I realized this, I found Crusader Kings not only easier to play but also more enjoyable.
For my current ruler, the King of Brittany, I started out with two goals: to become a learned, well-liked man and to set up my own kingdom. I focused on the skills in the diplomacy and learning trees. I got lucky — he assumed rulership in his late teens, and he’s still on the throne at 75. He’s found numerous tomes, some mystical in nature, and he’s won renown for his knowledge of religion and many other topics. Whenever I ask another ruler if I can teach one of their children as a ward, they almost always accept.
Above: My book-learning leads to some deep thoughts.
But somehow, he also became an infamous reveler. He got invited to many feasts, and hosted numerous fetes as well. And he became fat. Very fat, gaining the obese trait in his late-60s.
I found it funny when I decided to lose weight after gaining the obese trait. Shortly after making this decision, I got ill, then was on death’s door. At 71, I ordered my court physician to do no more than the minimum to keep me alive, and then … I survived. After my recovery, I put an end to the losing weight nonsense and hosted a feast — the next day, I woke up better rested and healthier than ever before.
Obviously, the lesson here is to continue to be a partier and a glutton in my waning years.
My favorite part is playing matchmaker, marrying off my children, grandchildren, courtiers, and vassals to other important folk in nearby realms. I do so with an eye on alliances — will this marriage result in an ally for an upcoming war? I look not just at how it benefits my line but also my military. It’s a hoot to have a half-dozen allies, all with armies stronger than yours, and then to start a war with a neighbor with a stronger military but weaker allies (or no friends!), overwhelming them and winning your claim despite putting fewer of your levies and knights at-risk.
So instead of thinking about how you want your realm to grow, think more about how you want to rule. Think about your ruler as your character; once you do so, the complex web of relationships among family, vassals, courtiers, your liege, and other rulers makes more sense.
Better tips, but it needs more
Crusader Kings II came out in 2012, and Paradox has supported it with patches, updates, and 15 pieces of downloadable content (expansions, packs, etc.). It remains quite playable today.
I expect Crusader Kings III to get the same level of support, if not more. And at launch, it does a passable job of explaining its complex mechanics: ruling vassals, choosing marriages, making claims on titles and lands, and so on. I wish it did more to show you how diplomacy works — how to build alliances and make friends without digging through multiple menus to figure it out yourself.
Above: The King and I.
And I crave a small tutorial scenario on how to set yourself up as a king, winning your independence from your liege.
But Paradox could end up adding more tutorials down the line, or it could add mechanics that make Crusader Kings III’s more obscure workings more transparent. And it certainly will post dev diaries and work with streamers to show folks the ropes. As always with this publisher’s works, I’m excited to see how this game grows in the coming years.
The long view
It takes some time to grok that while some of your decisions have immediate or short-term consequences, a number of choices result in things that either happen years later or last for a decade or two. Let’s take executions. I learned a vassal (an uncle) was plotting my murder, so I gave him the axe. This hurt my standing with my court and my family for over a decade. They were afraid of me, but they also worried I could become a tyrant. Up to that time, I hadn’t done anything tyrannical, other than keeping a prisoner locked up for three decades (another plotting vassal, and they deserved it! I wanted them to rot until they died under house arrest).
And this is important: Every decision you make should have some long-term goal to it. Offering a ward to a powerful ruler, prince, or duke? Make sure that improvement in opinion you get from that person can help you with an alliance, a marriage, or another transaction down the road. Expanding your kingdom? Make sure that new county or duchy pays off with levies for your army, taxes for your coffers, and titles that make you matter more to your friends and rivals.
Early on, I made friends with the prince of England. He was in line to inherit the realm. And he did. We married our firstborn to each other. And it proved to be a fruitful alliance. He helped me take over several counties, and his support was key to my early expansion. Meanwhile, I helped him take over his western turf and bite off a piece of Scotland. But setting this up took almost 20 years.
And it was worth it.
Where do I go from here?
Above: Crusader Kings III’s first name suggestion for my daughter was Rum. I couldn’t resist.
I’ve played two ruling dynasties so far. I ran the first into the ground, as coups and murders tore the realm apart. In my current run, I’m on my second ruler, a wise man who’s taken Brittany from a tiny realm in the corner of France, expanded it threefold, and waged a successful war for independence, all while becoming a learned, trusted person among his peers.
I do have one minor complaint about running your realm: I wish the powerful vassals demanding council positions had better stats. Every time one nagged me about it, they always had lower scores in diplomacy, stewardship, and so on, than those on my council. I had to decide if it was worth hurting the realm to make them happy. Maybe that run just had some bad luck. But really, if all your stats are 10 or less, vassal, just shut up, collect your taxes, and enjoy your title. Leave the ruling of the realm to the adults!

At this point, I could score this game … but Crusader Kings III, like all of Paradox’s releases, is like my Brittany. It takes time to figure out its web of mechanics and systems, just as it takes time to make and tune them. The development team learns, and Crusader Kings grows, becoming a different (and oftentimes better) game in the process.
In my review-in-progress, I said if I were to score it now, I’d give it a 5/5. More than a month later, I’m ready to do so. This is the best Crusader Kings release yet — and it’s Paradox’s best game release ever.
Score: 5/5
Crusader Kings III launches September 1 for PC. Paradox Interactive provided GamesBeat with a Steam code for the purposes of this review.
"
|
4,081 | 2,020 |
"AVerMedia Live Gamer Duo review -- The right capture card for livestreamers | VentureBeat"
|
"https://venturebeat.com/pc-gaming/avermedia-live-gamer-duo-review-the-right-capture-card-for-livestreamers"
|
"AVerMedia Live Gamer Duo review — The right capture card for livestreamers
AVerMedia's Live Gamer Duo capture card.
AVerMedia’s Live Gamer Duo makes so much sense, so why did it take so long to turn into reality? The key feature is that it has two HDMI inputs. One is an HDMI 2.0 port for your console or gaming PC. The other is an HDMI 1.4 port that is ideal for something like a mirrorless camera. This enables you to have one piece of equipment for capturing all of your imaging devices, which should help simplify the livestreaming process.
The AVerMedia Live Gamer Duo is available for $250.
It captures 1080p60 footage in HDR through its HDMI 1 port. And it can pass through a 2160p60, 1440p144, or 1080p240 image (all in HDR) to your display. The HDMI 2 port, meanwhile, captures 1080p60. So this isn’t a card for recording 4K lossless video from a PlayStation 5 or the upcoming RTX 3080. That is a good thing, however, because of the price.
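Some back-of-the-envelope arithmetic (my own numbers, ignoring HDMI blanking intervals, audio, and chroma subsampling) shows how much raw video data these modes represent:

```python
# Rough uncompressed video data rates for the modes the card handles,
# assuming 24 bits per pixel and ignoring blanking/protocol overhead.

def raw_gbps(width, height, fps, bits_per_pixel=24):
    return width * height * fps * bits_per_pixel / 1e9

capture_1080p60 = raw_gbps(1920, 1080, 60)    # what the card records
passthru_2160p60 = raw_gbps(3840, 2160, 60)   # what it passes to the display
passthru_1080p240 = raw_gbps(1920, 1080, 240)

print(round(capture_1080p60, 2))    # 2.99 (Gbps)
print(round(passthru_2160p60, 2))   # 11.94 (Gbps)
print(round(passthru_1080p240, 2))  # 11.94 (Gbps)
```

Note that 2160p60 and 1080p240 carry the same raw pixel rate, about four times the 1080p60 stream the card actually encodes, which is why the pass-through path and the capture path have separate limits.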
At $250, this is a well-positioned card for anyone who needs to capture both a gaming feed and a camera.
Simple yet powerful
The biggest benefit of a capture card like this is that it eliminates the need for multiple individual capture cards. AVerMedia’s Live Gamer Duo does the job of an Elgato Gaming HD60S+ and its Camlink all by itself, all while maintaining that attractive price.
But in simplifying the hardware, AVerMedia doesn’t skimp on the capabilities. You can easily bring the game feed and camera feed into a tool like OBS or Xsplit. But it also gives you the option of sending each HDMI port to different software. This is important if you want to capture full 1080p60 video from both feeds and then edit them together later.
This makes the Duo an easy recommendation because it does everything that almost any streamer would ever need. Even if you’re not planning to use a fancy, expensive DSLR or mirrorless camera to start, this gives you the option to upgrade in the future. And at just $50 more than the Elgato HD60 S+, that seems like a smart move.
AVerMedia’s Live Gamer Duo makes some necessary compromises
You can tell that one of AVerMedia’s goals with the Live Gamer Duo was to keep the price down. It succeeded, and it did so without any major sacrifices. But that doesn’t mean the Duo is without shortcomings.
A big frustration for me is the lack of HDMI 2.1. I get that support for HDMI 2.1 is limited now, but that’s going to change in a matter of months. PlayStation 5, Xbox Series X, and the next-gen GPUs from Nvidia and AMD will all use HDMI 2.1. And that means we won’t get 2160p120 video pass-through on the Duo. Even worse, we don’t get adaptive sync, which enables the framerate to sync up across devices to eliminate tearing and jittering. Support for adaptive sync would make the Duo a lot better at handling a second PC, but since it processes video at 60Hz, it can actually produce more tearing in your footage if you play at a higher refresh rate.
Other issues include the lack of 2160p30 support on the camera port. This card should be able to handle that — although I get that AVerMedia likely doesn’t want to imply that this is a 4K capture device.
The last issue is that the card itself has a somewhat bulky profile. This can make it difficult to fit into your case if you have a lot of other PCIe devices.
You should get it

When I think about who this capture card is for, I get excited. The AVerMedia Live Gamer Duo is great for anyone who wants to focus on livestreaming. That audience won’t even notice its issues. All they’ll know is that it provides a streamlined solution for getting high-quality video from both a source and a camera.
And it makes sense to recommend at $250 because it’ll save you money, space, and time in the future.
GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it.
Discover our Briefings.
Join the GamesBeat community! Enjoy access to special events, private newsletters and more.
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
4,082 | 2,020 |
"Best CBD capsules and pills: Top picks and buyer’s guide | VentureBeat"
|
"https://venturebeat.com/offbeat/best-cbd-capsules-and-pills-top-picks-and-buyers-guide"
|
"Sponsored: Best CBD capsules and pills: Top picks and buyer’s guide

Presented by The Weed News Company

CBD capsules are an excellent way of using CBD. They offer all the same benefits as CBD oil, but with a few advantages of their own.
People use CBD capsules to ease pain, reduce inflammation, alleviate stress and anxiety, improve sleep, and more.
Choosing the right capsule can be challenging — there are many different companies offering similar products — each claiming to have the best CBD capsules on the market.
The truth is that most capsules aren’t worth the money. It’s easy for unethical companies to sell underpowered CBD capsules at premium prices. The only way to spot high-value, premium CBD capsules is to do some digging.
We’ve done this research for you. We assessed thousands of customer reviews, looked through dozens of independent lab reports, and tested over 26 CBD capsules. Here are three brands that stood out for offering the best CBD capsules in 2020.
Best CBD capsules to buy in 2020

1. Royal CBD — Award Winning, Best CBD capsules overall
2. Gold Bee — Runner Up, Quality product at a good price
3. Hemp Bombs — Honorable Mention, high-potency CBD capsules

1.
Royal CBD Capsules — Best CBD capsules overall

Product Details: Total CBD: 750 mg | CBD per Capsule: 25 mg

Royal CBD is one of our most frequently recommended brands. We’ve already reviewed the company’s CBD oil and given it two thumbs up.
The reason these capsules are also at the top of our list is because they’re made from the same award-winning oil the company is already known for.
These capsules begin as a full-spectrum CBD oil. The company sources organic hemp plants from California and Colorado — which are then extracted using supercritical CO2. This is considered the gold standard for CBD extraction because it’s clean, safe, and efficient.
Once the extract is complete, it’s dissolved into a premium MCT oil base.
From here, the CBD oil is injected into high-grade, rapidly dissolving soft gel capsules.
Each CBD capsule is standardized to contain 25 milligrams of CBD, along with a variety of other cannabinoids, terpenes, sterols, and other hemp-derived compounds.
The best part about these capsules is that they don’t cost anything extra. You can expect to pay roughly the same amount for these capsules as you would anywhere else.
All CBD products sold by Royal CBD have been independently verified to confirm the CBD levels listed on the bottle.
Royal CBD has been named the number one CBD brand by SF Examiner, Metro Times, Riverfront Times, Observer, Orlando Weekly, SA Current, Cleveland Scene, City Beat, CL Tampa, and more.
Pros:
- Made from an award-winning CBD oil base
- Each capsule contains high-potency 25 mg CBD
- Third-party tested for quality assurance
- 30-day satisfaction guarantee

Cons:
- Only available online (not sold in-store)
- No low-potency capsules currently available

2.
Gold Bee CBD Capsules — Runner-Up

Product Details: Total CBD: 750 mg | CBD per Capsule: 25 mg

Gold Bee is a much smaller (and newer) company than Royal CBD.
The company started out selling organic superfoods but recently stepped into the CBD space with a series of CBD products such as capsules, oils, topicals, and gummies.
Gold Bee CBD capsules contain 25 milligrams of CBD made from organically-grown hemp.
These capsules only became available outside the brand’s home state of California earlier this year. Gold Bee completed construction on two new extraction facilities and an online shop so they could keep up with demand.
Even with the new facilities, Gold Bee products are often out of stock.
These capsules have already won several awards. They’ve also been mentioned in several online publications as one of the best CBD capsules on the market. You’ll find mentions of Gold Bee on websites including LA Weekly, We Be High, CFAH, Daily CBD, SF Weekly, Weed News, and more.
What makes these capsules so unique is the extraction technique the company employs. The Gold Bee team extracts its hemp in very small batches. This ensures the team can maintain full control over the entire extraction process — dramatically reducing the loss of terpenes in the final product.
As a result of their efforts, Gold Bee CBD capsules contain an exceptionally high level of active terpenes — including myrcene, humulene, pinene, limonene, and bisabolol — each of which offers therapeutic benefits of their own.
Gold Bee capsules are priced competitively. Despite the brand’s popularity, the founder of Gold Bee recently stated in an interview he had no interest in raising the prices. He wants to keep CBD products affordable to anybody who needs them.
Pros:
- Made from high-grade, terpene-rich organic hemp extract
- Excellent cost to potency ratio
- Third-party tested
- Available online throughout the United States

Cons:
- Small production runs and high demand sometimes result in this product going temporarily out of stock

3. Hemp Bombs High-Potency CBD Capsules — Honorable Mention

Product Details: Total CBD: 75 – 1800 mg | CBD per Capsule: 15 – 30 mg

Hemp Bombs is a sub-brand of one of the largest CBD companies in the world. They’re part of a larger conglomerate of CBD companies known as Global Widget LLC.
The benefit of buying from a large and dominant company is that its products tend to be a little cheaper thanks to mass bulk buying and large extraction facilities.
Unfortunately, the benefits end here.
The main issue with large companies in this space is the quality of the products. Yes, they’re cheaper on average, but the quality also tends to be a little lower as well.
We’ve included Hemp Bombs as our honorable mention because despite the large size, and lack of a clear source of hemp, Hemp Bombs CBD capsules are great value — especially the higher-potency option.
Hemp Bombs CBD capsules come in two potencies (15 mg and 30 mg), and three different package sizes (5, 30, and 60 counts).
The best value, by far, is the higher-potency capsules and the larger bottle counts.
The smaller bottles and lower strength capsules seem cheap at first glance, but when you assess the cost per milligram (more info on how to do this below), it’s clear the value for these products is much lower.
Pros:
- Available in multiple different potencies and bottle-counts
- One of the cheapest sources of CBD products available
- Third-party tested for quality assurance

Cons:
- Hemp source not listed (questionable quality)
- Low-potency and smaller bottle size capsules don’t provide good value

Pro tips: How to not waste money on CBD capsules

If you want to try another brand or live outside the U.S. or U.K., here are some tips to follow when shopping for CBD capsules to help you avoid wasting your money on ineffective or scam CBD capsules.
A) Check the company’s hemp source

The source of hemp used to make CBD capsules matters. Companies will often try to increase profit margins by using cheap imported hemp. The problem is that if hemp is grown in contaminated air or soil, it will hyper-accumulate these toxic compounds.
These contaminants ultimately end up in your CBD capsules and eventually into your body.
The best way around this is to look for companies that clearly mention that their hemp is grown in the United States or Europe, where laws surrounding hemp cultivation are strict.
The best CBD capsules are made from organically-grown hemp.
B) Look for third-party testing

Third-party testing is the only way you can trust the claims made by a CBD brand. It’s become the gold standard for corporate transparency within the CBD industry.
Here’s how it works.
A CBD company sends a sample from every batch of fresh CBD capsules it produces to an independent lab. The sample is tested to determine the cannabinoid content and terpene profiles. Other tests can also be done to check the sample for any known contaminants — including heavy metals, pesticides, solvents, or microbial byproducts (such as mycotoxins or endotoxins).
Independent testing is important because it’s done by a professional lab with no monetary affiliation to the CBD company. If they find something that doesn’t look right, they’ll publish it anyway.
If a CBD capsule doesn’t come with these tests, you can reach out to the company to ask for them. If you can’t track down an up-to-date lab report, move on to another brand.
Low-quality CBD capsules usually won’t come with third-party testing done because the company knows it’s likely to fail.
C) Assess the price in terms of the “Cost per mg of CBD”

Looking at the overall cost of CBD capsules isn’t helpful for determining value. A lot of companies will use tricks to make it seem like their capsules are cheaper than everybody else’s.
Comparing the overall cost also makes it difficult to compare different products if the CBD content or capsule counts are different.
The easiest way to compare CBD capsules and determine the value is to find the cost per milligram of CBD.
To do this, simply divide the overall cost of the capsules by the amount of CBD in the container.
The average CBD capsules should cost around $0.15 – $0.20 per milligram of CBD. Premium capsules usually go for $0.18 – $0.25 per milligram. Anything higher than this isn’t worth the money.
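The division described above can be sketched in a few lines. This is just an illustration of the comparison method; the prices and capsule counts in the example are hypothetical, not tied to any brand mentioned here:

```python
def cost_per_mg(total_price: float, total_mg_cbd: float) -> float:
    """Divide the overall cost of the bottle by the total milligrams of CBD it contains."""
    return total_price / total_mg_cbd

# Hypothetical example: a $70.00 bottle of 30 capsules at 25 mg CBD each
total_mg = 30 * 25  # 750 mg of CBD in the container
print(f"${cost_per_mg(70.00, total_mg):.3f} per mg")  # $0.093 per mg
```

Running the same calculation on two bottles with different counts and potencies makes them directly comparable, which the sticker price alone does not.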
CBD capsule dosage: How much should I take?

The best thing about using CBD capsules is their ability to provide consistent and repeatable doses. The hardest part is determining exactly what dose of CBD you should take to begin with. But it’s not as hard as you may think.
Although everybody responds to CBD differently, the best starting dose comes down to your weight and how strong you want the effects to be.
A lighter dosage is a good starting point and is usually enough for most people to get the effects they’re looking for. We’ve outlined some general guidelines for a starting dosage in the table below.
Some people need a more potent dose — so we’ve also included a strong dosage range for each weight group.
Starting dosage for CBD capsules according to weight:

Weight Group | Low Dose (Light Effects) | High Dose (Moderate to Strong Effects)
Less than 100 pounds | 5 – 10 mg CBD | 20 – 30 mg CBD
100 – 150 pounds | 10 – 20 mg CBD | 40 – 50 mg CBD
150 – 200 pounds | 20 – 30 mg CBD | 50 – 60 mg CBD
Over 200 pounds | 30 – 40 mg CBD | 60 – 70 mg CBD

Final thoughts: Best CBD capsules

CBD capsules are a great choice for a daily CBD supplement. They make dosing simple, offer excellent value, and can easily fit in your pillbox with the rest of your supplements.
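The weight-group guidelines above amount to a simple lookup. The sketch below encodes the table’s figures directly; the function name and the choice of inclusive upper bounds are my own, not part of the guidelines:

```python
def suggested_dose(weight_lb: float, strong: bool = False) -> tuple:
    """Return a (low, high) starting-dose range in mg of CBD for a body weight,
    following the weight-group table above. strong=True selects the high-dose column."""
    if weight_lb < 100:
        low, high = (5, 10), (20, 30)
    elif weight_lb <= 150:
        low, high = (10, 20), (40, 50)
    elif weight_lb <= 200:
        low, high = (20, 30), (50, 60)
    else:
        low, high = (30, 40), (60, 70)
    return high if strong else low

print(suggested_dose(160))               # (20, 30) mg — light effects
print(suggested_dose(160, strong=True))  # (50, 60) mg — stronger effects
```

As the article notes, the light-effects column is the sensible starting point; the high column is there for people who find the starting dose insufficient.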
CBD capsules are rapidly absorbed and offer all the same benefits as CBD oil for easing anxiety, pain, and inflammation, improving sleep, and protecting the digestive tract.
If you’re looking for a quick recommendation, we suggest the high-potency capsules by Royal CBD. They’re made from a premium, award-winning CBD oil base and deliver excellent value for the money.
Our runner up is from Gold Bee — a smaller CBD brand that’s been gaining a lot of attention lately in the CBD space. If you can get your hands on Gold Bee CBD capsules, they’re well worth the effort.
The product recommendations in this article are made solely by the sponsor and are not recommendations made by VentureBeat. Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected].
"
|
4,083 | 2,020 |
"The RetroBeat: -- Ratchet & Clank Future: A Crack in Time is the franchise at its best | VentureBeat"
|
"https://venturebeat.com/games/the-retrobeat-ratchet-clank-future-a-crack-in-time-is-the-franchise-at-its-best"
|
"The RetroBeat: Ratchet & Clank Future: A Crack in Time is the franchise at its best

Ratchet & Clank Future: A Crack in Time.
Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Ratchet & Clank: Rift Apart is my most anticipated next-generation game so far. And it has me thinking about one of my favorite PlayStation 3 games.
I’m not the world’s biggest Ratchet & Clank fan. Don’t get me wrong, I’ve enjoyed all of the games in the series that I’ve played. But I also have not gone to great lengths to play through all of them. Of those I’ve played, Ratchet & Clank Future: A Crack in Time is easily my favorite.
A Crack in Time released for the PS3 in 2009. Before I start praising it, let me mention something trivial. I love using the word “future” as a tag in a title. Be it in Jet Set Radio Future, Steven Universe Future, or whatever. Sorry, “adventure,” “forever,” and “returns” — nothing beats “future.”

Time for fun

Now, let’s get more serious. Like every other game in the series, A Crack in Time features a fun blend of 3D platforming and shooting action. To me, Ratchet & Clank is the 3D Mega Man series we never got (don’t get me started on Mega Man Legends … that’s for another time). And just like with Mega Man, each Ratchet & Clank game features an arsenal of creative weapons.
What makes A Crack in Time special? First off, it has these incredible puzzle segments. For most of the game, Ratchet and Clank aren’t together. While the Ratchet sections play as you’d expect, the Clank levels focus on creating copies of yourself and using time-bending abilities to solve puzzles. It’s easier to understand if you see it.
I love these sections. Puzzle-based gameplay like this is tricky to pull off. It’s hard to offer something challenging without making it too frustrating. But these Clank levels are creative and wild while being logical. I never tired of them.
But Ratchet’s levels also see some surprises. Along with the typical worlds you adventure in as you go through the story, you can explore the galaxy in a spaceship, finding small planets and moons that offer special rewards and interesting challenges. It offers a level of exploration beyond what the franchise usually delivers.
Ratcheted up

Even the story is memorable, which is a rare accomplishment for 3D platformers. Both Ratchet and Clank confront mysteries about their past, and Dr. Nefarious serves as a humorous yet threatening villain. He also delivers a memorable final boss fight that features some of the best rail-grinding I’ve seen in a 3D platformer (sorry, Sonic Adventure 2). Dr. Nefarious is returning in Rift Apart, which is another reason why I’m looking forward to it.
While I’m not a Ratchet & Clank expert, A Crack in Time is the series’ high mark for me. I’m hoping that Rift Apart can rise above it, but it’ll need more than fast loading times and insane graphical fidelity to do that. It’ll have to match the creativity and charm of one of the best 3D platformers ever made.
The RetroBeat is a weekly column that looks at gaming’s past, diving into classics, new retro titles, or looking at how old favorites — and their design techniques — inspire today’s market and experiences. If you have any retro-themed projects or scoops you’d like to send my way, please contact me.
"
|
4,084 | 2,020 |
"Ratchet & Clank: Rift Apart will release in PS5's 'launch window' | VentureBeat"
|
"https://venturebeat.com/games/ratchet-clank-rift-apart-will-launch-in-ps5s-launch-window"
|
"Ratchet & Clank: Rift Apart will release in PS5’s ‘launch window’

Ratchet & Clank: Rift Apart is going to define the PS5 generation.
We got to see more of Ratchet & Clank: Rift Apart for PlayStation 5 during Gamescom ‘s Opening Night Live event.
Developer Insomniac revealed that it is coming out during the PlayStation 5’s launch window. That usually means at some point during the console’s first few months of release.
Rift Apart is the latest in Insomniac’s 3D action-platformer series. The franchise mixes traditional platforming puzzles with shooter aspects and wacky guns. The first Ratchet & Clank came out for PlayStation 2 in 2002 and has had over a dozen sequels. The last one came out for PlayStation 4 in 2016. It was also called Ratchet & Clank, and it was both a remake of the original and a tie-in with the Ratchet & Clank movie.
Of all the next-gen games we’ve seen so far, Rift Apart is among the most impressive. It looks like a CG movie come to life, and its titular heroes travel through portals to other worlds in the middle of intense action sequences, showing off the loading prowess of the PlayStation 5’s SSD.
The new gameplay demo shows regular villain Dr. Nefarious. Ratchet pulls rifts toward him, helping him cross a destroyed bridge. He also uses a weapon that looks like a lawn sprinkler that freezes enemies. Overall, it’s an extended, uninterrupted version of what we saw before. You can watch the demo below.
In an interview after the trailer, Insomniac said that the game will have no load screens whatsoever. It will also use the PS5 DualSense controller’s haptic vibrations and adaptive triggers to make each weapon feel unique. For a shotgun-style weapon, the trigger will offer resistance about halfway through, representing the use of a single shell. Pulling the trigger all the way down will fire two.
Insomniac also confirmed that this game does chronologically follow Into the Nexus, which came out for PlayStation 3 in 2013.
"
|
4,085 | 2,020 |
"Jurassic World: Evolution stomps onto Switch on November 3 | VentureBeat"
|
"https://venturebeat.com/games/jurassic-world-evolution-stomps-onto-switch-on-november-3"
|
"Jurassic World: Evolution stomps onto Switch on November 3

Everybody walk the dinosaur.
Jurassic World: Evolution is getting a Switch port on November 3, the Frontier Developments game studio announced today.
The simulation game has players creating and operating their own dinosaur park. This is the Complete Edition, so it includes all of the downloadable content released for the original game, which came out for PC, PlayStation 4, and Xbox One in 2018.
Licensed games for console and PC are becoming more of a rarity these days (they thrive on mobile’s free-to-play model), but Jurassic World: Evolution became a hit. It sold over 2 million copies seven months after its launch. Coming to Switch can open the game up to an even larger audience.
The game also includes dialogue, with actors reprising their roles from the original Jurassic Park film, including Jeff Goldblum, Laura Dern, and Sam Neill. It also features Bryce Dallas Howard reprising her role as Claire Dearing from the Jurassic World films.
"
|
4,086 | 2,020 |
"TRON and Band Protocol Form Strategic Partnership; Scalable Oracle Technology and Ecosystem Integrations Underway | VentureBeat"
|
"https://venturebeat.com/business/tron-and-band-protocol-form-strategic-partnership-scalable-oracle-technology-and-ecosystem-integrations-underway"
|
"Press Release: TRON and Band Protocol Form Strategic Partnership; Scalable Oracle Technology and Ecosystem Integrations Underway

SAN FRANCISCO–(BUSINESS WIRE)–August 31, 2020– TRON, one of the largest blockchain-based operating systems in the world, has formed a strategic partnership and completed integrations with Band Protocol to bring secure and verified decentralized oracles to power its rapidly growing DeFi and decentralized application space. The announcement comes as TRON Founder and CEO of BitTorrent Justin Sun prepares to conduct an AMA with Band Protocol CEO and Co-Founder Soravis Srinawakoon on Wednesday, September 2nd, 9:00 PM Pacific Time (PT).
This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20200831005363/en/ (Graphic: Business Wire) Band and TRON are solving the blockchain scalability issue and have joined forces to bring high-throughput, customizable, and decentralized oracles for all TRON developers and various industry-leading DApps in the TRON ecosystem. Additionally, a TRC20 BAND token is being explored to be utilized in TRON DeFi DApps as a collateral asset, store of value, medium of exchange, and more.
As part of the strategic partnership, the TRON Foundation is already working closely with Band Protocol to integrate and secure all TRON DApps covering DeFi, games, RNG, enterprise applications, and other applications processing millions of dollars in value and hundreds of thousands of users. The first Band Protocol DApp integration is with JUST, the leading stablecoin protocol on TRON, to secure over $30M in collateral.
The bridge implementation of the BandChain decentralized oracle network has been completed with the assistance of TRON core developers who have also officially added Band Protocol into the TRON developer documentation. The documentation provided developers a comprehensive guide on how to leverage the tools to query the data already available on the BandChain bridge contract or create a custom decentralized oracle with fine-tuned parameters. With this integration, TRON developers are empowered to build highly secure decentralized applications that are truly scalable without the limitations of excessive data costs or congestion issues common in native blockchain-based oracles.
“We are ecstatic to enhance TRON with the most valuable oracle product on the market,” said Justin Sun, CEO of BitTorrent and Founder of TRON. “This integration marks a new era of high-quality partners, protocols, and services migrating to TRON’s blockchain. The future of blockchain technology is bright as we kick off a series of TRON DeFi partnerships to come. TRON’s ecosystem is growing faster than ever and having Band Protocol secure our top applications helps us speed up the adoption of DeFi and DApps worldwide.”

TRON is considered to be one of the most secure and operationally efficient public chain systems in the blockchain industry and trusted by an extensive ecosystem of major partners such as Samsung, Opera, enterprise, and government departments. The decentralized network is operated by 925+ unique and active nodes, and the creation and storage of data does not rely on any particular individual or organization.
“Band Protocol is thrilled to be the first oracle solution integrated into the TRON public blockchain, a platform for scalable blockchain technology and operating system for almost 1000 decentralized applications,” said Soravis Srinawakoon, CEO & Co-Founder of Band Protocol. “Working closely with the TRON team to support and bring secure, customizable, and decentralized oracle technology will power the next generation of scalable applications that fuel the next wave of blockchain adoption.” This strategic partnership with TRON is an on-going process and deep collaboration that will be extended into the long-term. Both teams will be playing a pivotal role in creating a secure standard for the operation of oracles and data in decentralized applications to ensure the highest level of security and ease-of-use to prepare the blockchain industry for mass adoption.
About TRON TRON is dedicated to accelerating the decentralization of the internet through blockchain technology and decentralized applications. Founded in September 2017 by Justin Sun, the company has delivered a series of achievements, including MainNet launch in May 2018, network independence in June 2018, and TRON Virtual Machine release in August 2018. July 2018 also marked the acquisition of BitTorrent, a pioneer in decentralized services boasting approximately 100M monthly active users.
About Band Protocol Band Protocol is a cross-chain data oracle platform that aggregates and connects real-world data and APIs to smart contracts. Blockchains are great at immutable storage and deterministic, verifiable computations – however, they cannot securely access data available outside the blockchain networks. Band Protocol enables smart contract applications such as DeFi, prediction markets, and games to be built on-chain without relying on the single point of failure of a centralized oracle. Band Protocol is backed by a strong network of stakeholders including Sequoia Capital, one of the top venture capital firms in the world, and the leading cryptocurrency exchange, Binance.
View source version on businesswire.com: https://www.businesswire.com/news/home/20200831005363/en/ Ryan E. Dennis [email protected] Kevin Lu [email protected] VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
4,087 | 2,020 |
"The end of Privacy Shield: Why it matters and what businesses can do about it | VentureBeat"
|
"https://venturebeat.com/business/the-end-of-privacy-shield-why-it-matters-and-what-businesses-can-do-about-it"
|
Guest The end of Privacy Shield: Why it matters and what businesses can do about it
The rules that facilitate much of the digital commerce between the EU and US have been thrown into a state of flux in recent weeks. Last month, the Court of Justice of the European Union (CJEU) passed a landmark judgement to invalidate the Privacy Shield , a framework governing the flow of EU citizens’ personal data into US companies. Then, just last week, Austrian privacy advocate Max Schrems, who brought the initial case to the CJEU, filed fresh complaints against 101 companies that he alleges are failing to provide adequate protection to the data of EU citizens, in spite of the CJEU’s landmark judgement.
What does all this mean in practice? The Privacy Shield allowed US companies to self-certify that they would adhere to loftier data principles than those required of them at home, allowing for the transfer of personal data from the EU to the US. More than 5,000 organizations relied on the arrangement, and the freedom to move data between markets that it gave them has been critical to businesses’ ability to sell physical and digital goods and services to customers in Europe: activities that make up a large part of the $7 trillion in transatlantic trade conducted annually. The CJEU’s initial decision left businesses in the US and the EU in a precarious position and cast doubt over their ability to trade seamlessly.
A turning point? The CJEU’s move to invalidate the Privacy Shield has not, yet, meant that businesses are prohibited from moving EU data to the US. For the moment at least, businesses can rely on what are known as the Standard Contractual Clauses (SCCs) as a valid means of transfer (and in some instances, Binding Corporate Rules, although these are less common). These are a special set of terms designed to guarantee data privacy standards. SCCs are common, so many businesses have been able to continue as they had before.
However, the complaints that Schrems filed last week seek to remove this option for businesses. The complaints against 101 companies, including the likes of Airbnb and the Huffington Post, argue that SCCs do not provide adequate protection for EU personal data because US companies fall under US surveillance laws.
The 2013 Snowden leaks illustrated the extent to which US security agencies had been making use of personal data stored by companies. The ECJ determined that the Privacy Shield was an inadequate mechanism to protect data on EU residents from US surveillance programs — and Schrems argues that SCCs are no better.
With significant reform to US surveillance law unlikely in the near future, companies are being left in an awkward predicament. It is suddenly becoming less viable to rely on SCCs to move data, and businesses are supposed to carry out a comprehensive analysis of local laws and, if necessary, use supplementary measures to protect personal information. We await further guidance from the key regulatory and political stakeholders in this regard.
A patchwork agreement for a Privacy Shield replacement could follow, but there is a real possibility that we could reach a point where data can no longer move freely from the EU to the US. This could lead to a requirement that all data on EU citizens is stored within the EU. This could dramatically limit US providers’ ability to access and process this data and the range of digital services available to EU citizens.
A key issue in Brexit negotiations The ECJ’s decision on the Privacy Shield may also have a big impact on Brexit, with just a few months remaining for the UK and EU to ratify the terms of a post-Brexit trade deal. Sadly, the issues of data rights and privacy frameworks have not been a major talking point in negotiations thus far, with hot button political issues such as fishing rights seemingly taking priority — despite the huge economic impact that a failure to reach an agreement on data flows would bring. Whatever the outcome, the EU will need to make a decision on the UK’s “data adequacy,” meaning the extent to which UK law protects personal data in comparison with the EU’s own General Data Protection Regulation ( GDPR ).
The ECJ’s decision on the Privacy Shield was an indication of the level of scrutiny the EU will employ in assessing the UK. In the meantime, the UK needs to decide whether to align itself more with the EU or the US. Will it make it more difficult for companies to export data from the UK, as the EU has? Or will it favor a closer relationship with the US and risk facing the same kind of regulatory uncertainty that the US is now experiencing? This decision will have a huge impact on the way British businesses operate internationally and how international businesses operate in the UK. If a data adequacy agreement is not reached, the system that allows the free flow of personal data between the EU and the UK could be uprooted. And if one is reached, it could have an impact on a possible free trade deal between the UK and US.
Reacting in the face of uncertainty So, whether you’re a UK business facing the unpredictability of the Brexit negotiations, or a US company worrying about the future of data flows from the EU, what can you do now to prepare for the changes that are coming? As always, it starts by getting the basics in place. Here are four steps any organization can take to ensure they can adapt quickly and effectively to any regulatory outcome: Understand how you use data: If they are to react quickly, businesses have to know exactly what data they are using, where it came from, and how it is moving through their organization. This should be a continual undertaking, but right now too many companies don’t have a clear understanding of these issues.
Think long-term: With so much uncertainty, businesses must factor in potential data compliance requirements into their growth strategies. The privacy regime operating in each region must be a key consideration for any business planning to expand into new markets. Carefully evaluate data regulations when considering where to invest for growth and budget accordingly so you know that you’ll be able to comply with all local regulations.
Stay agile: Wherever they are headquartered, it is critical that startups and digital businesses are monitoring developments in the EU-US and the EU-UK negotiations. Progress won’t be steady: nothing could change for a while, and then it will all move very quickly. Make sure someone in the organization is responsible for keeping a close eye on the latest news and flagging anything important.
Communicate! Consumers are increasingly aware of how their data is being handled by businesses. Transparency is therefore crucial to building and maintaining trusted relationships. Be proactive about keeping customers informed about your policies and day-to-day operations. You should consider publishing your law enforcement guidelines and transparency reports to make it clear how your organization interacts with data requests from government agencies.
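The first step above, knowing exactly what data you hold, where it came from, and where it moves, can be made operational as a simple record of processing activities. The sketch below is a minimal, hypothetical example (the dataset names, fields, and helper are mine, not from any regulation or the article): it flags EU-origin data stored outside the EU whose only legal basis was the now-invalid Privacy Shield or no mechanism at all.

```python
from dataclasses import dataclass

@dataclass
class DataFlow:
    """One entry in a record of processing activities."""
    dataset: str
    source_region: str        # where the data subjects are
    storage_region: str       # where the data physically lives
    transfer_mechanism: str   # e.g. "SCCs", "Privacy Shield", "none"

def flows_at_risk(inventory):
    """Flag EU-origin data held outside the EU without a currently
    valid transfer mechanism, the flows most exposed by the CJEU ruling."""
    return [f for f in inventory
            if f.source_region == "EU"
            and f.storage_region != "EU"
            and f.transfer_mechanism in ("none", "Privacy Shield")]

inventory = [
    DataFlow("crm_contacts", "EU", "US", "SCCs"),
    DataFlow("web_analytics", "EU", "US", "Privacy Shield"),
    DataFlow("payroll", "US", "US", "none"),
]
print([f.dataset for f in flows_at_risk(inventory)])  # ['web_analytics']
```

An inventory like this is what lets a business "react quickly": when a mechanism is struck down, the affected flows can be listed in one query instead of a months-long audit.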
Mark Kahn is General Counsel at customer data platform Segment.
"
|
4,088 | 2,020 |
"The DeanBeat: The Call of Duty League delivers a throne and a $4.6 million prize pool this weekend | VentureBeat"
|
"https://venturebeat.com/business/the-deanbeat-call-of-duty-league-delivers-a-throne-and-a-4-6-million-prize-pool-this-weekend"
|
The DeanBeat: The Call of Duty League delivers a throne and a $4.6 million prize pool this weekend
This year's London weekend of the Call of Duty League took place before the pandemic.
This weekend, the Call of Duty League Championship will take place as a virtual esports tournament, and the winner will take the throne. In this case, it’s an actual throne, designed for the winning team to take home and sit in. The event will cap a grueling season for virtual esports and a big week for the Call of Duty franchise, which announced its new Call of Duty: Black Ops Cold War this week.
Four teams will compete for the throne: the Atlanta FaZe, Dallas Empire, Chicago Huntsmen, and London Royal Ravens. The winner will take home $1.5 million, and the total purse is around $4.6 million. For the first time, the league has been organized along a city-based franchise model, like the Overwatch League that preceded it and professional sports franchises. Disrupted by the pandemic, this year’s physical events were canceled and everything moved online in April. I am frankly surprised that, given all of the events that have happened this year, this league is pushing toward a finish line with viewership that shows significant growth. Somehow, in a year full of Warzone and George Floyd and #MeToo, spectators found enough reason to watch a bunch of nerds play video games.
When the lockdown happened in March, the league took a pause.
Physical esports was the biggest revenue generator for the industry, but it vaporized, hurting everything from ticket sales to merchandise stores. Without crowds to cheer on the esports pros in big stadiums, it looked like online-only esports events might turn out to be duds. But it has turned out OK.
“The season has been a wild roller coaster of a journey for all of us, but it’s been an awesome ride,” said Johanna Faries, the commissioner of the Call of Duty League, in an interview with GamesBeat. “If you think about how many pivots we’ve had to make, we’re still seeing great growth.” The most recent playoffs hit a peak of 156,000 viewers on YouTube, which Activision Blizzard named as the exclusive esports channel for the league in January for a reported $160 million over three years. The league’s YouTube channel recently surpassed a million subscribers. Big sponsors include companies like Intel, Mountain Dew, PlayStation, the U.S. Air Force, the U.S. Army, Twitter, and others. Faries expects this coming weekend’s event to be the biggest for fan engagement yet.
“The competitive live match play broadcasts continue to see momentum,” Faries said.
Above: Johanna Faries had to roll with the punches this season as commissioner of the Call of Duty League.
Cultural visionary Sheron Barber — who has designed for stars like Drake, Billie Eilish, Rihanna, Post Malone, and others — is making the elevated gaming throne as the trophy prize for the esports team that is crowned the champion. The throne will be made from formica with a titanium veneer finish, giving it a brushed metal and natural stone look.
The throne will be engraved with the championship date, names, and player stats from the season. That sort of touch will help stir some excitement among fans who can’t be together with their favorite teams, Faries said.
The switch from physical to online-only hasn’t been easy in one respect. Hackers can attack the online-only athletes in their various locations, and cheating or lag problems are always a risk when players are spread out, rather than playing together in a single room. Hackers have reportedly tried DDoS attacks to derail the matches. While that’s not confirmed, some players did get dropped from matches. That can be very disruptive, as lots of folks are betting in Las Vegas on how the matches are going to turn out.
Above: The CDL had a nailbiter match in its last playoff battle.
“Everybody’s really been thoughtful about it and really rigorous about how to take the right precautions across the board,” Faries said. “We continue to take every precaution possible to ensure that competitive integrity is preserved. Hopefully our matches are smooth and the broadcast experience will be as well.” Another thing that disrupted the league this year was the death of George Floyd in Minneapolis, where riots broke out. In early June, the CDL had to postpone a series of online events that were to be hosted by the Minneapolis team, further disrupting a league schedule that had been already thrown off course by the coronavirus.
“We had statements come through from individual players and some of our team organizations and caster talent,” said Faries. “We are never blind to what’s going out in the world, and we do speak culture in many ways in that regard. It’s why we always want to be more than just an esport experience. We want to certainly be clued in. In light of George Floyd and all the things that have been happening in Minneapolis, we paid homage to all the great work going on not only locally to advance social justice work in and around that community, and to support Minneapolis on the ground. But all of our teams and players really effectively did their part. And it was a great collaboration. And I felt the league was really on the same page. We’ll always keep our eyes open to support our community and our players.” Does Warzone help? Above: The winner of the Call of Duty League will get $1.5 million and a throne.
The popularity of Call of Duty: Warzone, the battle royale mode that debuted in March amid the lockdown, has been amazing, with more than 75 million people downloading the free-to-play game. But Warzone is far different from the 5-vs.-5 multiplayer gameplay of the Call of Duty League, which uses Call of Duty: Modern Warfare maps and modes for its tournament play. In Warzone, as many as 150 players go at each other until no one is left standing. In some ways, it feels like Warzone is the new pastime and Modern Warfare is the old one.
But Activision has been clear that Warzone has served as a gateway to Modern Warfare, as players who joined Warzone for free are upgrading to the full $60 game. And the Call of Duty League is benefiting from the cross-promotion within Warzone, Faries said, as Activision is advertising the esports event inside Warzone. And in another smart marketing stroke, the Call of Duty League announced yesterday that on Sunday during the peak of the event, Activision will randomly distribute 10,000 codes for the upcoming beta for Call of Duty: Black Ops Cold War. Fans will likely go crazy for a chance to be in the multiplayer beta for the game, which isn’t coming out until November 13.
Above: Gone are the days of physical events for the CDL.
“There are many ways the CDL can continue to expand and to be relevant to different parts of our player base,” Faries said. “We try to jump pretty quickly on the groundswell of popularity. We were seeing back in March and April around Warzone’s release that it was popular. So we infused Warzone weekends into our pregame time slot, enabling our athletes to play before the live matches.” That resulted in a lot of engagement with fans, who could see the best players fight seriously in Modern Warfare esports while relaxing and enjoying themselves with personality-driven play in Warzone, where players can do silly things like chop each other up with helicopter blades.
“Fans can see [athletes] in solos and duos and trios programming, where the stakes are much lower,” Faries said. “It was a chance for our pros to laugh at themselves and at each other. And with the codes for Cold War, we are really linking arms with the franchise team and thinking creatively about how we can set up rewards for the hottest new intellectual property coming.” The league has seen some hiccups, as Activision decided to change the gun sight on a key weapon in the tournament game, the MP5, in the middle of the season.
Above: The Call of Duty League switched to a city-based league this year.
How everything turns out this weekend will be important. After it’s done, Activision will turn its eye to launching Call of Duty: Black Ops Cold War. But since this is just the first season of the city-based franchise model, Activision has to woo other teams to join and expand the league so that it grows, not only in the U.S. but overseas as well. Teams can pay a lot of money for the franchises, as we saw with the Overwatch League, where the upfront fee for joining rose to tens of millions of dollars. Faries said she is hopeful about expanding the league in places like Europe, where London and Paris are currently the lone franchises.
“When you have a city-based and geo-targeted model, you’re able to really galvanize fans not just on the sheer prowess of how good these guys are, but also because you have some regional pride on the line that you want to see play out,” Faries said.
The viewership this weekend will depend on the personalities of the players, and plenty of grudge matches will take place between veterans who want to hang on to their prominence and newcomer teams challenging them.
“We have human interest stories that we’re airing, and we have a personality-driven spirit in our league, and that’s something we consistently celebrate,” Faries said. “We love seeing them do the healthy smack talk.” GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it.
"
|
4,089 | 2,020 |
"Terahertz tech is the next big thing in wireless -- but what is it? | VentureBeat"
|
"https://venturebeat.com/business/terahertz-tech-is-the-next-big-thing-in-wireless-but-what-is-it"
|
Terahertz tech is the next big thing in wireless — but what is it?
Although there’s no rigid cadence to engineering innovations — new things are discovered intentionally and accidentally all the time — cellular wireless technology has moved toward a generation-per-decade model, such that 4G devices dominated the 2010s, 5G is ready for the 2020s, and 6G is expected to define the 2030s. The key change underlying these generational shifts is an expansion of usable wireless spectrum. Miniaturized radios inside wireless devices can now broadcast on a larger number of frequencies than before, use multiple frequencies simultaneously, and fill wider channels with increasingly massive amounts of data.
I’ve previously explained the 4G-5G network difference as akin to widening an existing highway and adding new high-speed, extra-wide lanes. For wireless engineers, the challenge has been finding the space to build these highways. Throughout the 4G and 5G eras, governments have ( slowly and with plenty of drama ) reallocated military or otherwise reserved radio frequencies for consumer and industrial use. As 6G looms, engineers and governments are already planning to make use of “terahertz” spectrum, a block of radio frequencies so high that all-new testing equipment, chips, antennas, and other innovations are needed for commercialization.
What is terahertz technology, really? Here’s a primer that will help you understand the next decade of announcements.
Terahertz in context While most people think of wireless technologies as almost magical — check out this crowd’s reaction to Apple’s July 1999 demo of Wi-Fi — the underlying science is radio engineering, which has been around for over a century but advanced significantly in the past two decades. Just as home and car radios received audio broadcasts from giant outdoor towers, similar radios later shrank to fit inside computers, phones, watches, and earbuds, receiving data from smaller wireless base stations (and other devices).
Radio waves are commonly measured in multiples of “hertz” (Hz), the international unit of frequency representing the number of cycles in one second. Kilohertz (kHz) is a thousand Hz, megahertz (MHz) is a million Hz, gigahertz (GHz) is a billion Hz, and terahertz is a trillion Hz. To broadly generalize, as the frequencies go up, more bandwidth tends to be available for data, but the radio waves travel shorter distances and are easier to accidentally impede.
Car radios pick up low-quality AM audio signals anywhere in a city, your home Wi-Fi works only within your house, and a millimeter wave 5G phone may drop signal just by moving to the wrong side of a pane of glass.
To get a little more technical: AM radio technology used roughly 10kHz blocks to transmit monaural audio signals between 540kHz and 1.6MHz frequencies. FM radio then enabled stereo, superior-sounding audio by using larger, roughly 200kHz blocks of spectrum between 88.1MHz and 108.1MHz frequencies. Broadcast TV used varying amounts of bandwidth on 54-88MHz, 174-216MHz, and 470-806MHz frequencies to deliver combined video and audio signals, after which Wi-Fi (2.4GHz/5GHz) and cellular began gobbling up blocks of higher-frequency spectrum for data.
In retrospect, it’s almost funny that those TV frequencies were called “VHF” (very high frequency) and “UHF” (ultra high frequency). Today, pocket-sized devices can transmit on 39GHz millimeter wave frequencies, hundreds of times higher than those TV bands, while “submillimeter wave” terahertz frequencies are higher still but are believed to be safe — about as far as radio signals can go without moving into light rays, X-rays, and cosmic rays, which — unlike lower-frequency radio waves — are forms of radiation that could potentially alter human biology.
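The band names "millimeter wave" and "submillimeter wave" fall directly out of the physics: wavelength is the speed of light divided by frequency (λ = c / f), so higher frequencies mean shorter waves. A few lines of arithmetic show why 39GHz counts as millimeter wave and 1THz as submillimeter:

```python
C = 299_792_458  # speed of light in a vacuum, meters per second

def wavelength_mm(freq_hz):
    """Wavelength in millimeters for a given frequency in hertz."""
    return C / freq_hz * 1000

bands = {
    "AM radio (1 MHz)":   1e6,
    "FM radio (100 MHz)": 100e6,
    "Wi-Fi (5 GHz)":      5e9,
    "5G mmWave (39 GHz)": 39e9,
    "Terahertz (1 THz)":  1e12,
}
for name, freq in bands.items():
    print(f"{name}: {wavelength_mm(freq):,.2f} mm")
```

AM waves are hundreds of meters long, 39GHz waves are about 7.7 mm (hence "millimeter wave"), and a 1THz wave is roughly 0.3 mm, which is why the terahertz range is literally submillimeter.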
Terahertz applications, today and tomorrow Bandwidth is the key reason terahertz technology is being eyed for 6G networks. Without even fully exploiting the spectrum’s potential, researchers have already demonstrated that terahertz waves will let chips exceed 5G’s peak of 10Gbps, and there’s been talk of a target 6G speed of 1Tbps.
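The Shannon–Hartley theorem (C = B · log2(1 + SNR)) makes the bandwidth argument concrete: at a fixed link quality, capacity scales linearly with channel width, so multi-hundred-gigabit speeds need tens of gigahertz of contiguous spectrum, which only the terahertz range can offer. The channel widths and 20 dB SNR below are illustrative assumptions of mine, not figures from the article:

```python
from math import log2

def shannon_capacity_gbps(bandwidth_hz, snr_db):
    """Theoretical channel capacity in Gbps (Shannon-Hartley)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * log2(1 + snr_linear) / 1e9

# Same 20 dB link quality, progressively wider channels:
# a 5G mid-band channel, a wide mmWave allocation, and a
# hypothetical terahertz-era allocation.
for bw in (400e6, 2e9, 50e9):
    print(f"{bw / 1e9:g} GHz channel -> "
          f"{shannon_capacity_gbps(bw, 20):.1f} Gbps")
```

Doubling SNR buys only a fraction of a bit per hertz, while doubling bandwidth doubles capacity outright; that asymmetry is why each cellular generation chases wider spectrum rather than just cleaner signals.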
That quantity of bandwidth would be able to support even higher-definition video than is available today, as well as human brain-caliber artificial intelligence , mobile holograms of people and objects, and livestreaming of “digital twins” of real buildings.
Once terahertz communications links are widely established between devices, Japanese cellular carrier Docomo predicts AI will become available everywhere.
Today, terahertz technology can be used to make cameras that “see” beyond the limitations of human eyes. Like millimeter waves, submillimeter waves can be used to detect weapons concealed inside clothing; they can also pass through soft tissue to image bones and peer through one layer of paint to see what’s underneath it. Companies such as TeraSense , Ino , and i2s have already developed terahertz cameras that can be used to see through materials or detect tiny manufacturing defects, though the price tags can be shocking — the i2s TZcam had an MSRP of $80,000 , with a note that the “lens is sold separately.” It’s fair to say that terahertz tech isn’t coming to smartphones anytime soon. Assuming international standards organization 3GPP does indeed coalesce on terahertz frequencies as foundational for 6G networks, the technology isn’t expected to be ready for commercialization in pocket devices until around 2030. Samsung thinks early devices could happen “ as early as 2028 ,” with mass commercialization following two or more years later. A similar 10-year timeline proved to be enough to transform millimeter wave from an engineers’ dream into a viable cellular technology, so don’t bet against it happening over the next decade.
Between now and then, you can expect to see plenty of announcements of terahertz engineering innovations.
Keysight Technologies helped NYU Wireless set up a 6G lab before most people even knew what 5G was, starting with sub-terahertz frequencies before moving up to the terahertz range. The U.S. FCC opened a huge swath of spectrum — 95GHz to 3THz — last year, offering 10-year licenses to companies interested in experimenting with the “tremendously high frequency” technology. Research is already underway across the world for potential fabrication materials and applications for terahertz technologies.
We’ll have to see whether terahertz spectrum lives up to its promise, but we could see ultra low-power, highly secure data transmissions accomplished by “extremely small” frequency antennas found within everything from phones and computers to wearables and even clothes. Engineers still have plenty of work ahead to make terahertz tech practical for consumers, but if they continue on their current paths, the next two decades of wireless innovation should be even more exciting than the last two have been.
"
|
4,090 | 2,020 |
"Qualcomm doubles 5G mmWave range to 2.36 miles for broadband modems | VentureBeat"
|
"https://venturebeat.com/business/qualcomm-doubles-5g-mmwave-range-to-2-36-miles-for-broadband-modems"
|
Qualcomm doubles 5G mmWave range to 2.36 miles for broadband modems
As 5G networks have continued to spread across the world, the biggest issue with ultra-fast millimeter wave (mmWave) towers has been their short transmission distance, which is generally measured in city blocks rather than miles. Today, Qualcomm announced a breakthrough in mmWave transmission range, successfully achieving a 5G data connection over a 3.8-kilometer (2.36-mile) distance — over twice the range originally promised by its long-range QTM527 antenna system last year.
It’s important to put today’s news into perspective, as the record is specific to broadband modems rather than smartphones. Qualcomm is touting the achievement as evidence of mmWave’s viability as a fixed wireless access solution, enabling carriers to offer fiber-speed 5G coverage in rural, suburban, and urban communities that might have had poor wired home broadband options in the past. The successful test was conducted in Regional Victoria, Australia, presumably with minimal physical interference between the sending and receiving devices.
Millimeter wave 5G has the potential to be the fastest flavor of the new cellular standard, enabling multiple gigabit per second transfer speeds, thanks to generally huge blocks of available wireless spectrum. In the United States, Verizon has thus far relied exclusively on millimeter wave for its 5G service, enabling both home broadband modems and handsets to reach 1-2Gbps speeds if they’re in close proximity to 5G towers.
Combined with fast network responsiveness (aka low latency), those speeds are expected to enable everything from real-time mixed reality streaming to next-generation industrial applications.
But until now, mmWave has struggled to reach devices at long distances, requiring carriers to deploy large numbers of short-range “small cells” just to achieve coverage. Each doubling of range should significantly reduce the required small cell density, making deployment less expensive for carriers and more practical for actual 5G service rollouts. However, range improvement promises have thus far been focused on home broadband modems, not handheld devices.
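The economics here can be sketched with a back-of-the-envelope calculation (my own illustration, not Qualcomm's math): if each cell covers a roughly circular area, the number of cells needed to blanket a region falls with the square of the range, so doubling range cuts the idealized cell count to about a quarter.

```python
import math

def cells_needed(area_km2: float, cell_range_km: float) -> int:
    """Idealized cell count to cover an area, treating each cell as a
    circular disc of radius equal to its range (ignores overlap,
    terrain, line of sight, and capacity planning)."""
    coverage_per_cell_km2 = math.pi * cell_range_km ** 2
    return math.ceil(area_km2 / coverage_per_cell_km2)

# 3.8 km is the range achieved in the extended-range test; 1.9 km is
# used here as a stand-in for "half that range," per the article's
# claim that the test more than doubled the originally promised range.
print(cells_needed(100, 1.9))  # cells to cover 100 km^2 at the shorter range
print(cells_needed(100, 3.8))  # far fewer cells at the doubled range
```

Real deployments will not track this toy model closely, but it shows why carriers care so much about every range improvement.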
The test relied on two existing Qualcomm hardware solutions — the Snapdragon X55 modem and QTM527 antenna — inside a consumer premises equipment broadband modem, communicating with Ericsson’s Air5121 and Baseband 6630 tower hardware, enhanced by extended-range software. Qualcomm did not disclose the speed or other characteristics of the connection, but it characterized the successful range test as “the first step in utilizing mmWave for an extended-range 5G data transfer,” hinting that there may have been compromises in speed or other areas. The company previously noted that carriers would be able to deliver up to 7Gbps download speeds if the QTM527 could access a full 800MHz of mmWave spectrum. Existing tower hardware has hit 4.3Gbps for a single device or 8.5Gbps for two devices.
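For context, dividing the quoted peak throughput by the quoted bandwidth gives the implied spectral efficiency (an illustrative calculation of my own, not a figure Qualcomm published):

```python
def spectral_efficiency(throughput_gbps: float, bandwidth_mhz: float) -> float:
    """Implied bits per second per hertz of spectrum."""
    return (throughput_gbps * 1e9) / (bandwidth_mhz * 1e6)

# 7 Gbps delivered over a full 800 MHz of mmWave spectrum
print(spectral_efficiency(7, 800))  # 8.75 bits/s/Hz
```

That is a plausible aggregate figure for modern OFDM systems with high-order modulation, which is part of why mmWave's huge spectrum blocks translate into such large headline speeds.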
Qualcomm has already announced the more capable Snapdragon X60 modem as a followup to the X55 but hasn’t yet revealed a successor antenna solution to the QTM527, which was announced just under a year ago. As improving mmWave’s long-distance performance appears to have been a top priority for the company and its partners, it’s highly likely that we’ll see continued gains in future consumer and carrier offerings.
"
|
4,091 | 2,020 |
"Medal of Honor: Above and Beyond will launch in holidays with single-player and multiplayer VR | VentureBeat"
|
"https://venturebeat.com/business/medal-of-honor-above-and-beyond-will-launch-in-holidays-with-single-player-and-multiplayer-vr"
|
Medal of Honor: Above and Beyond will launch in holidays with single-player and multiplayer VR
Respawn Entertainment unveiled the latest gameplay of Medal of Honor: Above and Beyond, a big-budget virtual reality game coming from the folks who created the original Medal of Honor. This World War II shooter will have multiplayer action, the developer revealed for the first time at Gamescom’s opening ceremonies today.
Respawn CEO Vince Zampella introduced the latest gameplay trailer. The game is coming at an unspecified date during the 2020 holiday season to the Oculus Rift and Oculus Rift S VR headsets. I was a little worried about this game, as I played it in September but hadn’t heard anything about it since. I’m delighted it’s coming soon.
Game director Peter Hirschmann (showing off his pandemic beard) said this Medal of Honor is about putting the player in the boots of a soldier fighting in World War II. He said VR makes the experience more immersive. You are recruited into the Office of Strategic Services (the precursor of the CIA) during World War II. You become an operative behind enemy lines.
Above: Medal of Honor: Above and Beyond is set in World War II.
The game has dozens of levels, a script of 120 pages, and a story that follows three acts. It goes from the early part of the war to D-Day to the German secret weapons program. You experience the story completely in first person, as if you are there, Hirschmann said.
And it will have multiplayer action as well. “We are shipping a whole suite of” multiplayer VR modes, Hirschmann said.
Players will experience missions that will take them from Tunisia in North Africa to across Europe, participating in some of the biggest moments of the war.
GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it.
"
|
4,092 | 2,020 |
"Japan explores AI as the pandemic curbs in-person quality control | VentureBeat"
|
"https://venturebeat.com/business/japan-explores-ai-as-the-pandemic-curbs-in-person-quality-control"
|
Japan explores AI as the pandemic curbs in-person quality control
A self-driving vehicle with parts is pictured at Ricoh’s photocopier components factory in Atsugi, Kanagawa prefecture, Japan July 13, 2020.
(Reuters) — At a factory south of Japan’s Toyota City, robots have started sharing the work of quality-control inspectors as the pandemic accelerates a shift from Toyota’s vaunted “go-and-see” system, which helped revolutionize mass production in the 20th century. Inside the auto parts plant of Musashi Seimitsu, a robotic arm picks up and spins a bevel gear, scanning its teeth against a light in search of surface flaws. The inspection takes about two seconds — similar to that of highly trained employees who check around 1,000 units per shift.
“Inspecting 1,000 of the exact same thing day in day out requires a lot of skill and expertise, but it’s not very creative,” CEO Hiroshi Otsuka told Reuters. “We’d like to release workers from those tasks.” Global manufacturers have long used robots in production while leaving the knotty work of spotting flaws mainly to humans. But social distancing measures to prevent the spread of COVID-19 have prompted a rethink of the factory floor. That has spurred the increased use of robots and other technology for quality control, including remote monitoring, which was already being adopted before the pandemic.
In Japan, such approaches represent an acute departure from the “genchi genbutsu” go-and-see methodology developed as part of the Toyota production system and embraced by Japanese manufacturers for decades with almost religious zeal. That process tasks workers with constantly monitoring all aspects of the production line to spot irregularities and has made quality control one of the last human holdouts in otherwise automated factories.
Yet even at Toyota itself, when asked about automating more genchi genbutsu procedures, a spokesperson said: “We are always looking at ways to improve our manufacturing processes, including automating processes where it makes sense to do so.”
Quality demands
Improvements in artificial intelligence (AI) have come in tandem with increasingly affordable equipment but also stricter quality requirements from customers.
“We’re increasingly seeing a gap between the quality of products made on regular production lines and the quality our customers demand,” said Kazutaka Nagaoka, chief manufacturing officer at Japan Display, a supplier to Apple, as well as numerous automakers.
“The quality of products made on automated lines is overwhelmingly higher and more consistent,” Nagaoka said.
However, automating inspections is challenging, given the need to teach robots to identify tens of thousands of possible defects for a specific product and apply that learning instantly. Musashi Seimitsu’s low defect rate of one per 50,000 units left the firm without enough defective examples to develop an efficient AI algorithm. But a solution came from Israeli entrepreneur Ran Poliakine, who applied AI and optics technology he had used in medical diagnostics to the production line. His idea was to teach the machine to spot the good, rather than the bad, by basing the algorithm on up to 100 perfect or near-perfect units — a modification of the so-called golden sample approach.
“If you look at human tissue, you are teaching an algorithm what is good and what is not good, and you only have one second to perform the diagnostic,” he said.
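The "learn the good, not the bad" idea can be illustrated with a toy one-class check: score each unit by its distance to the nearest known-good (golden) sample and flag anything too far away. This is only a sketch of the concept; the feature values and threshold below are hypothetical, and the production system described here relies on optics and far richer models than a distance rule.

```python
import math

# Hypothetical 2-D feature vectors for known-good units
# (e.g., surface reflectivity, tooth-profile error)
GOLDEN_SAMPLES = [(0.98, 0.01), (0.97, 0.02), (0.99, 0.015)]

def nearest_golden_distance(features, golden=GOLDEN_SAMPLES):
    """Distance from a unit's feature vector to the closest known-good sample."""
    return min(math.dist(features, g) for g in golden)

def is_defective(features, threshold=0.05, golden=GOLDEN_SAMPLES):
    """Flag units that sit far from every good example.
    Note that no defect examples are needed to build this check."""
    return nearest_golden_distance(features, golden) > threshold

print(is_defective((0.975, 0.012)))  # False: close to the good cluster
print(is_defective((0.60, 0.30)))    # True: far from every golden sample
```

The appeal for a plant with one defect per 50,000 units is exactly this asymmetry: good examples are abundant, defective ones are too rare to learn from directly.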
On steroids
Since the breakthrough, Poliakine’s startup SixAI and Musashi Seimitsu have established MusashiAI, a joint venture that develops and hires out quality control robots — a first in the field.
Inquiries from automakers, parts suppliers, and other firms in Japan, India, the United States, and Europe have quadrupled since March, when the novel coronavirus went global, Poliakine said.
“COVID-19 has accelerated the move. Everything is on steroids now because working from home is showing that remote work can work,” he said.
Earlier this year, auto parts maker Marelli, which has operational headquarters in Japan and Italy, also began using AI quality inspection robots at a plant in Japan, and the company told Reuters last month that it wanted AI to play a bigger role in quality inspections in the coming years.
Printer maker Ricoh plans to automate all of the production processes for drum units and toner cartridges at one of its Japanese plants by March 2023. Robots perform most of the processes already, and since April technicians have been monitoring equipment on the factory floor from home.
“Of course, you need to be onsite to assess and execute solutions when issues come up, but identifying and confirming are tasks we can now do from home,” said Kazuhiro Kanno, general manager at Ricoh’s printer manufacturing unit.
Musashi Seimitsu will not say when it envisions its factory floors being fully automated, but Otsuka said AI stands to complement, not threaten, the go-and-see system.
“AI doesn’t ask ‘Why? Why?’ but humans do. We’re hoping to free them up to ask why and how defects occur,” he said. “This will enable more people to look for ways to constantly improve production, which is the purpose of ‘genchi genbutsu.'” (Reporting by Naomi Tajitsu and Makiko Yamazaki, additional reporting by Maki Shiraki and Noriyuki Hirata. Editing by David Dolan and Christopher Cushing.)
"
|
4,093 | 2,020 |
"Hey, developers: Brazil is waiting for you | VentureBeat"
|
"https://venturebeat.com/business/hey-developers-brazil-is-waiting-for-you"
|
Guest
Hey, developers: Brazil is waiting for you
Brazil was second only to the U.S. in terms of Google Play downloads last quarter.
Targeting the young, growing Brazilian market may be the smartest decision an app publisher can make this year. App installs have been growing at an impressive 30% year-over-year, easily the largest and fastest growth in the region. Brazil, home to the world’s sixth largest population, has seen its smartphone base explode in recent years — up 11% in 2019.
But it may have only scratched the surface in terms of where the market could go. In 2019, Brazil had a smartphone penetration of only 45.6%, dwarfed in comparison to top countries like the U.K., Germany, and U.S., which are around 80%.
App publishers have capitalized on the young Brazilian user base by regionalizing their apps and campaigns for a Gen-Z market quick to adopt a mobile lifestyle.
According to AppsFlyer, 9.1% of users who install an ecommerce app go on to become purchasing customers, more than twice the rate of other high-end markets in the region. The categories that are crushing it in ecommerce sales align well with Brazil’s young demographic: food delivery, games, shopping, and travel.
Large pool of mobile-first consumers
Brazil’s median age is just 32, cell phone sales are brisk, and by 2023 some 104 million Brazilians will own a smartphone. Brazil’s growth, as in most emerging economies, is driven by the younger population, with 85% of the 18-34 demographic owning a smartphone vs. 32% of those over 50 – a large gap in comparison to developed countries (in the U.S., it’s 95% vs. 67%). As Brazilian youth become adults, they will continue to be quick adopters of new devices as the country as a whole shifts to a mobile lifestyle.
What’s particularly noteworthy about Brazilian consumers is that they’re enthusiastic app users, especially when it comes to shopping apps. Mobile sales are north of $7.6 billion in revenue and account for 32% of all ecommerce payments in Brazil. Considering 78% of Brazilians say they prefer to buy via an app over any other payment option due to the channel’s speed and simplicity, ecommerce will likely continue to grow as smartphone penetration approaches levels we see in other developed countries.
Advertisers now have access to scale and reach
Scale and reach are key concerns of any advertiser considering a new market. Brazil’s market has proven to be very engaged as marketers have scaled their UA campaigns.
The country has seen steady increases in mobile app spending, which is estimated to top $4 billion in 2020.
According to AppsFlyer, Brazil accounted for more than half of app install ad spend in the region in 2019. What’s more, advertisers are seeing good returns for their ad spend. Last year 60% of all app installs in Brazil were the result of advertising campaigns.
Maturing market, maturing UA campaigns
Like any maturing market, successful UA campaigns require nuance and an eye towards incrementality. Social media is a great way to obtain scale (there are 140 million social media users in Brazil as of this past January). In January 2020, Brazil accounted for more than 10% of all TikTok downloads.
But don’t spend your entire UA budget on social media. First, competition for attention on those platforms is as fierce as it is in the US, and you’ll find that after a spate of new installs, your ad spend will deliver fewer and fewer users. Second, ad fraud is a significant concern in Brazil, where fraudsters use unofficial storefronts to lure customers into downloading apps, which may cause people to shy away from traditional channels. Fortunately, there are plenty of other channels that will deliver incremental users without the fraud concerns.
Consider, for instance, pre-loading your app on mobile devices. In Q1 2020, some 10.4 million mobile phones were sold in Brazil.
And recent US data shows that people with new devices install three times more apps than people who have had their phones for more than a year. It stands to reason that these trends would also hold in Brazil, meaning pre-loads could have an outsized effect for advertisers that invest in pre-loading.
Marketers should also consider new and emerging channels that have seen success in Brazil. TikTok’s rapid growth makes it a channel worth exploring. Other Brazil growth categories, such as food delivery, games, shopping, and travel, also offer opportunities for advertisers.
The Brazilian market represents a bright spot in a year that has been challenging on many levels. In a few years the market may very well become saturated, but at the moment, consumers are enthusiastic about trying new apps. If you add Brazil to your marketing strategy, I suspect you’ll reap rewards for years to come.
Matt Tubergen serves as the executive vice president of Digital Turbine Media, where he is responsible for overseeing all mobile media and campaign development, management, and strategy for its 300 top brands and app clients.
"
|
4,094 | 2,020 |
"GB Decides 161: Where is the Mass Effect Trilogy? | VentureBeat"
|
"https://venturebeat.com/business/gb-decides-161-where-is-the-mass-effect-trilogy"
|
GB Decides 161: Where is the Mass Effect Trilogy?
On this week’s episode of the GamesBeat Decides podcast, reviews editor Mike Minotti and his co-host Jeff Grubb talk about gaming’s busy week. That includes DC FanDome, Gamescom’s Opening Night Live, and the weird Nintendo Direct Mini Partner Showcase. Most importantly, Mike and Jeff consider The Swedes. The pair also answer your questions, like: What’s going on with Nintendo? Where is the alleged Mass Effect Trilogy Remaster? And what is a live-service game, anyway? Join us!
Subscribe to the RSS | Listen on Anchor.fm | Apple Podcasts | Spotify | Google Podcasts | Find past episodes here
"
|
4,095 | 2,020 |
"For tech to serve the public good, we need standards | VentureBeat"
|
"https://venturebeat.com/business/for-tech-to-serve-the-public-good-we-need-standards"
|
Guest
For tech to serve the public good, we need standards
When I first started as a technology investor, I became interested in Neil Postman’s work at NYU. He stated — in a time of the rising proliferation and expansion of television programming — that mediums of mass persuasion and influence needed extremely careful consideration and society needed to understand the unintended consequences of these mass mediums.
Dr. Postman died before the popular rise of online everything. But he (and others) gave us every warning on how — if we did not carefully consider the negative ramifications of new developments — our society would be at grave risk of injury.
I’ve spent half my life looking closely at technology companies for my investments. One thing I’ve always found problematic among many of my peers is the widely held belief that the free market will ensure that any flawed tech system will breed competition that will then fix those flaws.
I don’t agree. This is how we ended up with “move fast and break things” and a too-prevalent underlying arrogance that anything that is new is better than what existed before it.
I truly believe that what we have lost in the last three decades far outweighs the tremendous gains since the Internet became a public utility. We are now at a point in which the public utility has become privatized and controlled by a dangerously small number of companies, the vast majority of whom lack even the same level of governance in place to oversee drinking at parks.
Big technology companies and their investors have created an ecosystem that resists oversight and has demonstrated limited ability to behave responsibly — from creating platforms that harbor misinformation to manipulating employee protections.
To share the blame, most governments have failed to protect their citizens. Most elected representatives are not sufficiently literate in the language of technology that is woven into every part of our lives. Instead, there’s a public antagonism between our business and public leaders that is impacting us all.
That needs to change. I still believe that technology can solve a lot of our collective challenges and that it can support education, social equality, and move toward public good.
Technology doesn’t have to be at odds with governance. This is my guiding principle with my investments: I look for entrepreneurs that don’t see government as the enemy but as welcome and respected collaborators in a lot of important areas.
Transportation is one example. Government mandates safety regulations, and automobile companies follow, resulting in fewer accidents and more lives saved. Even now, the possibilities keep growing: public-private partnerships may lead to a future of self-driving cars.
Our election systems could be another. We are in a unique place where collaboration between the government and the private sector can resolve some of the challenges in the current electoral system in America.
The pandemic — and our most recent woes with the post office — have only made it more clear that our remote voting options (mail-in ballot, fax, or email) are falling short on feasibility, accessibility, security, and privacy. Mail systems in more than 70 countries are not delivering to the U.S., meaning deployed military and overseas citizens in those countries simply won’t get to vote this year. That is unacceptable.
Increasing individual participation in civic society is an area in which technology should be able to play a meaningful role, providing additional methods for people to vote and ensuring that the systems are designed properly and governed responsibly. But public trust and buy-in is key for us to achieve higher participation in elections and give us a truly representative administration.
For this to happen, we need a way forward that provides a shared language and accountability between tech companies and the government, to ensure public trust and safety in the platforms that play such important roles in every part of our lives. And for that, I call for the development of a set of technology standards that provides that base of trust and that can ensure public safety in our elections.
It’s a new space; we’ve laid groundwork on privacy, but not on the safety and trustworthiness of technology as a whole. We have an opportunity to set the foundation of this new way forward. My hope is that this administration or the next will lead the world in establishing these standards and that today’s industry leaders embrace the opportunity to keep building to ensure that the technology that touches the most important parts of our society does so in a way that benefits us all.
We are at a point where, if we do not provide a path to produce the tools that ensure each person still has a voice that matters, I fear we will simply continue to fuel emotional fires that will burn the remaining bridges of society for greater and greater profit.
Tom Williams is founder of Heron Rock, where he oversees multi-family office investment strategies focused on investments in technology, consumer and media, and life sciences companies around the world. Prior to Heron Rock, he founded BetterCompany and co-founded Miavita, an online health and nutrition website that was later sold to Matria.
"
|
4,096 | 2,020 |
"Companies have long embraced job 'ownership,' but there's a better model | VentureBeat"
|
"https://venturebeat.com/business/companies-have-long-embraced-job-ownership-but-theres-a-better-model"
|
"Companies have long embraced job ‘ownership,’ but there’s a better model
“Who owns the Refund Processing for American Builders?” Jane came rushing to my desk on a rainy January afternoon with a look of urgency. Our office was always bustling with engineers, product managers, and business development folks hustling to create something new. “Well, you know how I feel about ‘ownership’… what problem are we trying to solve?” I replied. Jane knew the drill but still couldn’t break the habit, having spent 10 years working with product owners in companies like Twitter and Google, and two smaller startups in the gaming industry. It took us 10 minutes to understand the issue at hand and dispatch a task force of two engineers, neither of whom had ever “owned” the refund payment processing before. Nevertheless, they were fully qualified to support our partner, and by the end of that day, they had identified the problem and its root cause, issued a fix, and written a quick wiki page on how to troubleshoot it in the future. Not bad for a project without an owner.
Our approach relies on Dynamic Team Assignment (DTA). And while it may sound counterintuitive, abandoning the ownership model was the best thing I ever did for my team.
Similar to my colleague Jane, technical leaders often default to the idea of ownership. Everyone from project managers to engineers are encouraged to “own” their domain, becoming a subject-matter expert and single point of contact for their respective area of responsibility. As with many trends that emerge from Silicon Valley, this model may be best exemplified by Apple, which famously assigns a directly responsible individual to every project.
The ownership model has worked well for some companies; however, team management doesn’t necessarily lend itself to a one-size-fits-all approach. Too much stability tends to stifle innovation, which is why the process looks different at an established company with thousands of employees. For example, not every product needs continued investment all the time, and during those times, you don’t want certain product owners to be underutilized. Additionally, we should remember the “Bus Factor” consideration: There’s the risk of losing a critical team member, without whom the project wouldn’t continue. In fact, the ownership model can disincentivize people from sharing knowledge and working toward shared conventions, as their exclusive knowledge makes them indispensable.
As we reimagine workplace norms today, leaders are being given an unprecedented opportunity to rethink the structure of their teams’ growth. In hyper-growth environments and times of fast-paced change (amplified by external pressure like the COVID-19 crisis), a high level of adaptiveness and agility is essential. Without even realizing it, companies and their employees are embracing the concept of “anti-fragility” and are shifting toward more flexible ways of thinking and operating. For example, while previous generations tended to stay with one company for life, today’s employees will hold an average of 12 jobs or more. It makes perfect sense as an adaptation to a world where jobs are changing more rapidly than ever. Stay flexible, the thinking goes, and you’ll always be ready for the next big thing.
Nonetheless, companies applying the same agile approach to their team structure are still few and far between. Facebook has publicly mused about switching models, and Asana has published tips on how to mix ownership with distributed responsibility. But these are fringe cases; most companies still see the ownership model as sacred and are not planning to rock the boat.
I strongly believe it is time to make a change and practice more intentional team agility.
Practicing the technique of DTA — the deliberate method of assigning people to priority projects as needed — is the antifragility we need to survive today. One of my esteemed colleagues, McKinsey’s Yuval Atsmon, wrote an article on the future of work that explains that “as jobs evolve, appear and disappear, adaptability will be more valued than longevity.” The best employees will still care deeply about their projects, and their “ownership” mindset should apply to the system as a whole, along with the company’s mission and the journey to get from A to B. This way, teams can band and disband as often as needed, based on priorities and projects.
DTA became habitual for me early in my career, during my time as a management consultant. Then I experimented with it in tech leadership after attending Prof. Boaz Ronen’s Focused Operations Management workshop, which trains leaders on how to do more with their existing resources. Recognizing that our biggest asset is talented employees, I set out to see if I could apply his optimization theories to team structure. The result was a more agile team, more resilient systems design, and a culture of flexibility.
I’ve implemented this method numerous times since then and have learned that every business is different and every team has its own ethos of success. The first step is to decide whether dynamic team assignment is indeed right for you. Here are a few steps to consider:

Talk to your team and engage the senior leadership.
Don’t be surprised if you experience some initial pushback, especially in mature organizations with established procedures. They may not have even considered that another model is possible, especially if their current model is not evidently broken. Most leaders, however, are willing to experiment when you explain the benefits. After leading dynamic teams at both large organizations and small startups, I have seen that this approach can work for companies of any size, as long as there’s buy-in from the top.
Get your team to embrace continuous change (if and when leadership is on board). Many people ultimately love the chance to challenge themselves and try new things, but it may take some convincing — and possibly some goodbyes. Not everyone is suited to an agile mode, and that’s OK. Work to create a team driven by a shared mission. Be open when talking about your team’s fears, and try to align on why this is a great opportunity for everyone to learn and develop.
Set new hires up for success by making the structure clear from the get-go.
Screen candidates for agility and flexibility, in addition to technical skills and aptitude, to qualify for this type of environment. Once they join, review their performance and collaborative abilities regularly. This can help address relationship friction quickly and give you insight into the types of projects your team members are best at.
Here’s the heart of it: At least once a quarter, shuffle your teams to reassign people to different projects and priorities — even when those teams are working successfully. Encourage individuals to take on a variety of temporary leadership roles, letting them experience the responsibilities of management without the burden of choosing a professional ladder to climb. This is a big deal for my team. The decision to pursue a Principal Architect or an Engineering Director role is a career choice that is often hard to make. With flexible team assignments, people can find out what suits them before they have to commit.
Rent the Runway generously shared this idea in their engineering ladder blueprint, describing jobs as tasks instead of titles, and encouraging engineers to try as many as possible.
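To make the quarterly reshuffle concrete, here is a minimal sketch of one way to rotate people across projects. The names, project labels, balancing rule, and rotating-lead convention are illustrative assumptions, not a prescribed process:

```python
import random

def quarterly_shuffle(people, projects, seed=None):
    """Randomly reassign people across projects, one temporary lead per team.

    Returns {project: {"lead": name, "members": [names]}}.
    """
    rng = random.Random(seed)
    pool = list(people)
    rng.shuffle(pool)
    teams = {p: [] for p in projects}
    # Deal people out round-robin so team sizes stay balanced.
    for i, person in enumerate(pool):
        teams[projects[i % len(projects)]].append(person)
    # The first member of each shuffled team takes a temporary lead role,
    # giving everyone a turn at management responsibilities over time.
    return {p: {"lead": members[0], "members": members}
            for p, members in teams.items() if members}

assignments = quarterly_shuffle(
    ["Ana", "Ben", "Chi", "Dev", "Eve"],
    ["refunds", "onboarding"],
    seed=1,
)
```

Seeding the shuffle makes a given quarter's assignment reproducible, which helps when you want to revisit or discuss the rotation afterward.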
Implement clear conventions and standards.
Some expert engineers write highly complicated code only they can understand, which makes it hard — and less likely — for other engineers to further adapt their work.
Martin Fowler calls this build-up of low-quality code “cruft,” and it inevitably leads to technical debt. Clear conventions and standards can combat this. On my team, we regularly push our engineers to write code that is “open-source quality,” because someone else will be taking over their code when they move to the next assignment. No matter what the role, using conventions, standards, and best practices will help your employees make their solutions future-proof and transferrable.
Be prepared for pushback.
There’s still a great deal of skepticism about abandoning the ownership model, so you’re likely to get some pushback until the results begin to speak for themselves. Conviction is key. For anyone who’s not quite on board, coaching, training, and reiterating the benefits can make a big difference. And for those who are intently not playing along, consider addition by subtraction. Remember, change resilience is a muscle: The more you flex it, the stronger it becomes.
Last week, I was on vacation and keeping up with the action by passively listening in on some of our Slack channels. It was such a delight to see an ad-hoc team form to address a customer problem — fully organically, with no top-down mandate, two senior folks and a couple of newbies — just because the need was evident and they wanted to self-organize quickly. The first question they asked was “What problem are we trying to solve?” And I knew right then that the team was headed in the right direction and I could truly enjoy vacation.
If you’re ready for a new, more flexible way to work, give dynamic team assignment a try. And be prepared — you may never go back.
Ran Harpaz is the Chief Technology Officer of Hippo Insurance.
"
|
4,097 | 2,020 |
"Apple's latest App Store and notarization problems reveal a giant seeking agility | VentureBeat"
|
"https://venturebeat.com/business/apples-latest-app-store-and-notarization-problems-reveal-a-giant-seeking-agility"
|
"Apple’s latest App Store and notarization problems reveal a giant seeking agility
There are several schools of thought regarding Apple’s growth over the last two decades and its recent ascent into $1 trillion and $2 trillion market valuations, but their common theme is that Apple has grown from David into Goliath — arguably too big for everyone’s good save its own. What once appeared to be Apple’s small “walled garden” of an app ecosystem has become one of the world’s largest software stores, if not the largest, and detractors have increasingly characterized the company as a domineering and unsympathetic villain , crushing smaller developers at will.
My own take is that Apple’s behavior is better understood by reference to an aphorism alternately credited to Napoleon Bonaparte and Robert J. Hanlon, most often referred to as Hanlon’s Razor: “Never ascribe to malice that which is adequately explained by” incompetence (Napoleon) or stupidity (Hanlon). In either case, the broad idea is the same; absent evidence to the contrary, presume that bad things are the result of poor judgment or mistakes, rather than evil intent.
Apple isn’t a perfect company, nor is it entitled to a presumption of purely good intent. Over the years, it has vacillated between populism and benevolent dictatorship, echoing Henry Ford by suggesting (quietly) that it knows its users’ needs better than they do. There have been times when it has acted with a heavy hand, and certainly examples of when it has put its own best interests ahead of users’ needs.
But user satisfaction remains a significant factor in its decisions and successes. Even if a given decision is controversial, Apple’s overall track record of creating intuitive hardware, software, and services has defined the company, and it has been rewarded with unfathomable riches for delivering best-of-class solutions at global scale. It’s now a giant, though seemingly trying hard not to be a lumbering one.
This week, security researcher Patrick Wardle provided his latest example of Apple screwing up: evidence that the company inadvertently “notarized” a piece of macOS malware, enabling it to run without objection on even recent Macs. If you don’t recall Apple’s Notarization requirement, it was announced back in 2018 as a way for developers to reassure users that apps distributed outside the Mac App Store were malware-free. Viewed in the worst possible light, Notarization was yet another example of Apple trying to exert control over everything that runs on its computers, despite the company’s benevolent explanation: “Notarization gives users more confidence that the Developer ID-signed software you distribute has been checked by Apple for malicious components.”

The problem Wardle identified was that Apple somehow gave the thumbs up to malicious adware payloads containing OSX.Shlayer malware — notarizations it “quickly-ish” revoked once notified. Wardle rightfully poked Apple for “promis[ing] trust, yet fail[ing] to deliver” with Notarization, suggesting that a security system that doesn’t work as marketed could “ultimately put more users at risk.” That’s where Hanlon’s Razor comes in. Notarization has been around for a while, yet there haven’t been many issues with malware getting notarized. Bear in mind that Mac malware issues tend to be called out exclusively by security researchers rather than end users; unpatched, in-the-wild exploits are nearly as rare as public user complaints about Mac malware, which hasn’t been true on Windows PCs for decades. The fact that Apple’s screening process screwed up this time — or the implication that the screening system may have a bigger hole — doesn’t mean that Apple isn’t trying to screen properly or succeeding broadly at keeping users safe.
In other words, this isn’t an example of security theater, but rather mistakes that should be addressed.
Earlier today, Apple reminded developers of some important App Store policy changes announced during this year’s WWDC: They can now appeal decisions that App Store submissions violated Apple’s guidelines, suggest changes to the guidelines, and not see their bug fix updates delayed over alleged guideline violations (apart from legal issues). This isn’t to say that the legions of small and large developers who have been upset with Apple over App Store guideline issues will suddenly be happy with the company — least of all Epic Games — but that Apple isn’t standing still, and is seemingly trying to take at least some developer requests into account when making decisions.
It’s tempting to take Apple’s gestures as evidence that it’s attempting to remain nimble and flexible despite its growing size, a challenge it has faced every time it has reached a new height of success. Some might view the very concept of app notarization to be overbearing, but instead of maintaining an impenetrable gate, Apple’s screening system isn’t as strong as it could be, and it’s responding quickly to reports of problems. Similarly, to the extent the process of App Store approval may have felt unilateral or unnecessarily brutal to some developers, Apple is opening the door to discussion and evolution. That sounds like a positive set of developments.
Having watched Apple spend years seemingly ignoring bug reports from users and developers, however, my biggest concern is that its invitations to appeal or change guidelines will similarly fall into a dark chasm, the digital equivalent of a suggestions box that empties out into a trash can. And what I’m inclined to see as imperfect execution or short-sighted decisions could be clearly revealed to be something worse.
It’s going to take a little time to see whether Hanlon’s Razor applies here. Apple has a chance to prove definitively that it’s not a malicious actor, just one that hasn’t performed ideally in the past, and is doing its best to be better — at least not obviously stupid — in the future.
"
|
4,098 | 2,020 |
"Aclima says bad air quality from California's fires is affecting millions | VentureBeat"
|
"https://venturebeat.com/business/aclima-says-bad-air-quality-from-californias-fires-is-affecting-millions"
|
"Aclima says bad air quality from California’s fires is affecting millions

When 11,000 lightning strikes hit the San Francisco Bay Area on August 16, they ignited two of the largest fires in California history.
Aclima had the pollution measurement devices in place to record the effects on local air quality. The company said the results show millions of Californians are breathing bad air, including many who may not realize it.
Twelve days later, three fires are still burning, and Aclima scientists have examined the impact of these lightning complex fires on air quality. The result is perhaps the most scientific analysis of data from a big fire in modern times (see the video above). For its analysis of the Bay Area fires, Aclima analyzed both its own data and the data collected by regulatory agencies and reported to the Environmental Protection Agency (EPA).
Aclima can measure air quality on a “hyperlocal” level using a fleet of electric cars with pollution-measuring sensors. Through this mobile method, the company gathers a massive amount of data compared to other pollution measurement efforts. Aclima previously used this data to assess the pandemic’s effect on car travel and pollution levels in San Diego.
Since the fires began, California’s inland counties have on some days experienced worse sustained daily air quality than Bay Area counties. But Bay Area counties saw larger maximum levels or spikes before the wind dispersed the smoke plumes and blew them inland. This has been tough for me, as I’ve been out jogging almost every day. Of the 168 days of lockdown, I have jogged for 159 days. For seven more of those days I have been riding an indoor exercise bike, mostly due to the smoke.
Nights are better for walks

When looking at diurnal or day-to-night hourly patterns, Bay Area counties experienced worsening daytime and improving nighttime air quality, on average, from August 16 to August 25. That means it’s better to take the dog for a walk at night or in the early morning, said Aclima chief scientist Melissa Lunden in an interview with VentureBeat.
“I was seeing that by four or five o’clock that the levels were falling to a level where I could open the windows and cool the house down,” said Lunden, who has her own measurement device in her home. “We also have regular afternoon winds, and that blows it all inland.” For this analysis, Aclima’s scientists focused on the daily average levels of fine particulate matter (PM2.5), which is a harmful pollutant at least 50 times smaller than a grain of sand and typically invisible to the eye. Even if you can’t see or smell smoke, you may be breathing air with unhealthy levels of particles generated by the wildfires.
Above: The air quality in the Bay Area changes during the day.
To produce the embedded video, Aclima scientists analyzed regulatory air quality data from the state’s stationary monitors positioned at sea level throughout California and calculated changing daily average levels of fine particulate matter on a county-by-county basis throughout the state from August 15-25. The scientists then overlaid satellite-detected VIIRS fire hotspot data from NASA’s Fire Information for Resource Management System (FIRMS) to illustrate the changing air quality in relation to the locations of fire hotspots as seen from space.
As you can see, on many days the wildfires appear to impact daily average PM2.5 levels in inland counties more than Bay Area counties as the wind blew the smoke well beyond the fires.
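As a rough illustration of the county-by-county daily averaging described above, here is a minimal sketch; the readings, county names, and data shapes are hypothetical and not Aclima's actual pipeline:

```python
from collections import defaultdict

def daily_county_averages(readings):
    """Average PM2.5 readings per (county, date).

    `readings` is an iterable of (county, date, pm25_ug_m3) tuples,
    e.g. pooled from many stationary or mobile sensors.
    """
    sums = defaultdict(lambda: [0.0, 0])  # (county, date) -> [sum, count]
    for county, date, pm25 in readings:
        bucket = sums[(county, date)]
        bucket[0] += pm25
        bucket[1] += 1
    return {key: total / n for key, (total, n) in sums.items()}

readings = [
    ("Alameda", "2020-08-22", 48.0),
    ("Alameda", "2020-08-22", 60.0),
    ("Fresno", "2020-08-22", 95.0),
]
averages = daily_county_averages(readings)
```

A real analysis would also quality-filter and weight sensor readings, but the grouping step is the same idea.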
In addition to analyzing daily air quality in California following the lightning complex fires, Aclima scientists analyzed data generated from the company’s mobile sensor network, which measures air pollution and greenhouse gases block by block across the Bay Area, day and night, weekdays and weekends.
Above: On August 22, the daily average PM2.5 levels in the Bay Area were lower than inland, but the daily maximums were at least as high in the Bay Area as inland.
In Bay Area counties, a diurnal or day-to-night pattern showed cleaner air at the ground level in the evenings and early mornings, with the highest levels of PM2.5 at midday. Unlike regulatory monitors, the Aclima mobile sensor network takes measurements at various elevations on all publicly accessible streets.
Why is air better at night?

Lunden said on summer evenings an inversion layer of cool marine air is trapped beneath a layer of warmer air. The marine layer is stable, meaning there is no exchange of air between this lower level and the air above it. As the sun comes out and heats the ground, the height of this layer increases and there is more mixing of air both far above and near ground level. The difference in atmospheric pressure between the cool Pacific and the heating inland regions results in an onshore wind that starts to build mid-morning to a strong flow by late afternoon. As the sun sets, the evening inversion layer forms again.
Emissions from the fires are often found at higher elevations in the hills, and smoke rises high into the atmosphere. In the evening, this smoky layer is above the inversion layer and does not descend to the ground level. As this boundary layer grows during the morning, however, the smoke that has been held high above the ground mixes into this layer and the concentrations on the ground increase. Direct smoke emissions are also more likely to mix into this layer during the day. As the winds pick up, this smoky air is blown out of the Bay Area toward the east. And as the sun goes down, the reformation of the evening inversion layer results in clearer air being closer to the ground and where we breathe.
This isn’t to say that air quality at night has been good, or better everywhere, but it has shown a strong tendency to be measurably better on a county level throughout the Bay Area. For communities directly impacted by fire, unhealthy levels of smoke — not to mention the danger from the fire itself — have occurred at any time of day or night. And it’s important to note that these observed patterns hold true for what happened, but if the winds change then the patterns will too. A good resource for air quality is here.
People have tragically lost their lives to these historic fires, and many others have lost their homes. That’s not to mention the animals and habitats lost and the tens of thousands of people displaced due to evacuations. Meanwhile, millions of people are being exposed to unhealthy air quality across and well beyond California, Aclima said.
Air pollution is not confined to county, state, or country borders, and it is harming human and planetary health everywhere. By better understanding the impacts of climate change events — like lightning complex fires — Aclima said we can make informed decisions to protect ourselves and build a more resilient and equitable future.
Measurement challenges

You can check the health level of the air by zip code using Air Now, but you may not be able to entirely trust that estimate. That’s because the data is based on the government’s regulatory sensors operating in stationary places, and that data is then extrapolated to cover a much larger region. It’s not based on the fine-grained data Aclima collects with its cars, but it’s pretty much the best measure available at the moment.
Lunden said you can’t judge whether the air is safe enough to take a walk based on what you smell.
“What you smell from a fire is like the organic olfactory compounds that get emitted, and that stuff is pretty reactive in the atmosphere and disappears after some number of hours, but the smoke is still there,” Lunden said. “It just doesn’t smell like smoke anymore. It could still have pretty high concentrations.” On top of that, in the Bay Area you can’t judge air quality by how blue the sky is. The inversion layer may or may not be in place, and you can’t see it. In other words, there isn’t a perfect way to know whether your air is clean or not. As for Aclima’s data from its cars, it isn’t analyzed in real time at the moment, so the company has to use it to analyze long-term trends, not the hourly changes in pollution levels you would need in order to understand whether it’s safe to go outside.
“The real strength and power in our data comes from the persistent differences we see,” Lunden said. “And those persistent differences come from repeat measurements over time. So we’ll be in your zip code on any given day, and then we’ll be there another two weeks later, and so on. And as we continue to do that mapping, you get those persistent differences.”
"
|
4,099 | 2,020 |
"Twitter labels video shared by Trump aide as 'manipulated media' | VentureBeat"
|
"https://venturebeat.com/ai/twitter-labels-deepfake-video-shared-by-trump-aide-as-manipulated-media"
|
"Twitter labels video shared by Trump aide as ‘manipulated media’

A screen displays the stock price of Twitter above the floor of the New York Stock Exchange (NYSE) shortly after the opening bell in New York, U.S., January 31, 2017.
Twitter labeled a tweet by White House director of social media Dan Scavino as “manipulated media” today. The original, unaltered video shows Harry Belafonte asleep in an interview with a local TV station. The video shared by Scavino replaces Belafonte’s face with Democratic presidential candidate Joe Biden. The Scavino video has been seen more than 1 million times since being shared roughly one day ago on Twitter. CBS Sacramento anchor John Dabkovich tweeted six hours ago that the video was a fake.
When this story was published at 1 pm PT Monday, Twitter hadn’t removed the video, and Scavino had not deleted the post sharing the doctored video. Ultimately, the video was removed in response to a report by the copyright owner.
Twitter introduced its synthetic and manipulated media policy in February. The policy weighs whether content has been significantly altered, whether it was shared in a deceptive manner, and whether it is likely to cause serious harm. Whether a video is labeled or removed entirely is decided on a case-by-case basis, and accounts that repeatedly violate the policy can be permanently suspended.
This is fake. You know how I know? I was the coanchor in studio.
We were interviewing Harry Belafonte.
https://t.co/gPRU9JGyI7 — John Dabkovich (@JohnDabkovich) August 31, 2020 In March, Scavino became the first known Twitter user to have a tweet labeled as manipulated media after sharing an altered video, later retweeted by Trump, that falsely depicted Biden endorsing Trump. A former Trump golf caddie and longtime aide to President Trump, Scavino was appointed White House social media director earlier this year and spoke last week at the Republican National Convention.
Earlier today, Twitter also applied a manipulated media label to a video interview about police funding shared by House Minority Whip Steve Scalise (R-LA), which Scalise eventually deleted, according to CNN.
Democratic presidential candidate Joe Biden has not said he’s in favor of defunding the police.
Concern over manipulated media and disinformation is high as U.S. citizens prepare to vote to elect the next president of the United States on November 3.
VentureBeat pinged the White House for comment on whether the White House uses any tools to detect deepfake videos and whether Scavino will take down the video.
Updated 12:10 pm PT Sept. 1 to state that the doctored video was taken down due to a report by the copyright owner.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
4100 | 2020
"Researchers built a data set for training AI to detect natural disasters from social media images | VentureBeat"
|
"https://venturebeat.com/ai/researchers-built-a-data-set-for-training-ai-to-detect-natural-disasters-from-social-media-images"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Researchers built a data set for training AI to detect natural disasters from social media images Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
This week, people in California and Gulf Coast states experienced the impact of historic natural disasters. Both have been called signs of climate change, and each is unique: The California fires were started by hundreds of lightning strikes, creating some of the largest fires recorded in state history, and Hurricane Laura hit Louisiana harder than any hurricane in more than 150 years.
To assist humanitarian groups and first responders, AI researchers created the Incidents data set, which they call one of the largest ever assembled for detecting accidents and natural disasters people share on social media platforms like Flickr and Twitter. Creators of the Incidents data set said they hope it spurs the creation of AI that uses computer vision to recognize natural disasters and flag incidents for humanitarian organizations and emergency responders.
The Incidents data set contains 1.1 million images and spans 43 categories of accidents or natural disasters, ranging from car accidents to volcanic eruptions. Images also carry place labels, such as beach, bridge, forest, or house.
A paper about the Incidents data set was published this week as part of the European Conference on Computer Vision (ECCV).
“Our dataset is significantly larger, more complete, and much more diverse than any other available dataset related to incident detection, enabling the training of robust models able to detect incidents in the wild,” the paper reads.
The Incidents data set contains nearly 447,000 images labeled as accidents or natural disasters and 697,000 labeled images without any accident or natural disaster. The data set was assembled by researchers from MIT, Qatar Computing Research Institute, and the Universitat Oberta de Catalunya in Spain. Photos were obtained from Google Images searches and labeled by Amazon Mechanical Turk workers. Labeled images were accepted only after achieving 85% accuracy.
Researchers pointed out that images labeled as negative were critical to making robust models. “We can observe that, without using the class negatives during training, the model is not able to distinguish the difference between a fireplace and a house on fire or detect when a bicycle is broken because of an accident,” the paper reads.
To test the effectiveness of Incidents, researchers used the data set to train a convolutional neural network, reporting an average precision of 77% across earthquakes and floods on Twitter. The experiment analyzed roughly 900,000 Twitter photos from five earthquakes and two floods; the model recognized earthquakes and floods with an average precision of about 74% and 89%, respectively.
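For readers unfamiliar with the average precision figures quoted above, the metric can be computed from a ranked list of model scores. A minimal, self-contained sketch (the function name and toy data are illustrative, not from the paper):

```python
def average_precision(scores, labels):
    """Average precision for one class: the mean of precision@k taken at
    each rank where a positive appears, ranking by descending score."""
    ranked = sorted(zip(scores, labels), key=lambda pair: -pair[0])
    hits, ap_sum = 0, 0.0
    for rank, (_, is_positive) in enumerate(ranked, start=1):
        if is_positive:
            hits += 1
            ap_sum += hits / rank  # precision at this rank
    positives = sum(labels)
    return ap_sum / positives if positives else 0.0

# Toy example: four photos scored by a "flood" classifier; 1 = real flood photo.
scores = [0.9, 0.8, 0.7, 0.6]
labels = [1, 0, 1, 0]
print(average_precision(scores, labels))  # (1/1 + 2/3) / 2 ≈ 0.833
```

A perfect ranking (all positives above all negatives) yields 1.0; mixing negatives in among the positives pulls the score down.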
Researchers also conducted experiments with 40 million geotagged Flickr images to analyze emergency event detection from earthquakes and volcanic eruptions. They found the AI capable of recognizing the location of earthquakes and volcanic events.
A variety of AI models exist today to identify natural disasters and their impact. Beyond weather forecasting models, there’s AI for predicting when floods will happen along the Ganges River in India or how a wildfire may spread after ignition; for detecting when a wildfire starts using satellite imagery, though satellites can be obstructed by smoke or clouds; and for assessing flood and fire damage.
AI systems can identify natural disasters from the words people use in social media — but few are made for detecting disasters from images shared on social media. In the coming months, a U.S. federal agency will introduce the full ASAPS data set to spur the creation of AI tools that automatically detect police, fire, or medical emergencies in real time from social media photos and videos. In 2017, some coauthors of the Incidents data set paper introduced an AI system for analyzing natural disasters shared on Twitter, but it could recognize only three kinds of disasters.
"
|
4101 | 2020
"ProBeat: Amazon Halo is surveillance capitalism in a $100 fitness wearable | VentureBeat"
|
"https://venturebeat.com/ai/probeat-amazon-halo-surveillance-capitalism-fitness-wearable"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Opinion ProBeat: Amazon Halo is surveillance capitalism in a $100 fitness wearable Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Amazon this week walked into the health gadget market with a $100 fitness wearable and a $4 per month subscription service. Amazon Halo stands out not for its lack of a screen, but rather for its two “innovative” features: Body and Tone.
The former uses computer vision, machine learning, and “a suite of algorithms that can generate your personalized 3D body model, BFP, and body model slider, a visual of how your body could change as you gain or lose body fat.” The latter uses machine learning to “analyze the positivity and energy of your voice — positivity is measured by how happy or sad you sound, and energy is how excited or tired you sound.” Did Amazon learn nothing from the criticisms of its inaccurate facial recognition tech Rekognition? Actually, I think it learned a lot. This time, it’s not selling a problematic AI product to law enforcement (for now). Instead, it’s falling back on its tried and true strategy of going straight to the source: consumers.
It was impressive enough that Amazon single-handedly created the voice-activated smart speaker category with a surprise November 2014 announcement, ultimately getting the Echo into millions of homes. But the Echo is limited. Most variants don’t have a camera, and all versions are stationary. This is a tough problem, especially given that consumers have (so far) shunned cameras in wearables (see Google Glass). Amazon’s solution? Use your existing smartphone to upload pictures of yourself in your underwear and buy a cheap band to record what you say wherever you are.
In other words, surveillance capitalism under the guise of fitness.
Amazon’s take on fitness Alexa, am I healthy? I’ve long argued that the future of wearables is proactive health care.
I’m extremely bullish on the idea of a device that can give you food recommendations, rest reminders, and exercise suggestions personalized specifically to what you need based on quantitative data measured from your body.
But this is not that. This is a step in the wrong direction. The same week that Fitbit announced Sense, which has more sensors than any other Fitbit device in an attempt to empower you with actionable health data, Amazon unveiled the inherently flawed Halo.
While Fitbit backs up its health data with encouragement to work with health professionals, the only mentions of “doctor” in Amazon’s announcement concern how Halo can replace them. Furthermore, Amazon offers no scientific evidence that applying AI to pictures and your voice can produce an “accurate body fat percentage” and “a more complete view of health and wellness.” Never mind that Amazon does not cite any studies showing AI can determine your health from pictures and sound. Just like with Rekognition, it will take years for those to be conducted independently, if they ever show up at all. The scientific method takes time — time that Amazon can better spend giving users questionable insights and collecting valuable personal data.
Amazon’s take on privacy Alexa, please tell Halo to stop recording everyone I talk to and everything in my vicinity.
Amazon is well aware of Halo’s potential privacy problems. The company knows that despite vowing not to use Fitbit data to target ads, Google’s $2.1 billion acquisition is facing a full EU antitrust probe.
Amazon saw a potential data collection backlash coming from miles away. While the shortest section in the Halo press release is about privacy, it does link to a separate Amazon Halo Privacy page that addresses some key concerns right off the bat. I took the liberty of adding some notes: You can easily download and delete your Halo health data at any time directly from the Settings section of the app. Boom.
Will Amazon delete Halo health data if I stop using the device or paying my subscription? Data is always encrypted in transit (“in transit” is when it moves between your phone and your band, or between your phone and the cloud).
So it’s not always encrypted? We only move data when absolutely necessary, and we process it as close to the source as possible. For example, Tone speech samples are processed right on your phone and then automatically deleted — they never go to the cloud, and no one ever hears them.
But the resulting Tone voice profile is fair game? Your body scan images are processed in the cloud and automatically deleted. After that, your 3D body model and scan images only live on your phone unless you have explicitly opted in to cloud backup. All that to say, only you ever see them (unless you choose to show them off yourself).
How can I verify that my underwear pictures were deleted from your servers? That’s just the “quick summary” at the top of the page. Amazon does a decent job of explaining the things it wants to explain. It’s what’s missing on the page that is really telling. My biggest question: Why doesn’t Amazon promise not to sell my Halo data, or sell against my Halo data? Amazon’s take on ads Alexa, please tell Amazon to stop shipping me calming teas and dumbbells.
And therein lies the whole point of this exercise, no pun intended. We covered the surveillance part, so now let’s talk about the capitalism part. Amazon knows that health care is massively lucrative, especially in the fundamentally broken U.S. market. But the long-term revenue bet here isn’t in the $100 per device nor the $48 annual fee. It’s in ads.
In addition to dominating retail and cloud, Amazon has a small but steadily growing ads business. In Q2 2020, Amazon’s “other” revenue category that mainly covers its advertising business was up 41% to $4.22 billion.
Amazon is the only company currently able to challenge the Google-Facebook duopoly in digital ads. Indeed, in June, eMarketer replaced that common description of the digital ad market with “the Google-Facebook-Amazon triopoly.” In addition to knowing your purchase history, plus what you browse and search for, Amazon wouldn’t mind tracking your health. The company could easily sell you even more stuff if it thought it knew how your body was doing and how you were feeling. Letting advertisers target you with that data, à la Facebook and Google, couldn’t hurt.
For Amazon, fitness wearable = surveillance + capitalism.
ProBeat is a column in which Emil rants about whatever crosses him that week.
Update on August 31 : Amazon provided the following statement.
Privacy is foundational to how we designed and built Amazon Halo, and suggesting it is intended for surveillance is completely false. Amazon Halo health data is not used for marketing, product recommendations, or advertising. We do not sell customers’ Amazon Halo health data. Customers can delete all Amazon Halo health data associated with their profile at any time from the Halo settings. Once a customer deletes their data in Amazon Halo, they’re automatically logged-out of the app and cannot log back in with their profile until all of the health data associated with the profile has been deleted. All Amazon Halo data is encrypted in transit, including going to/from the cloud or between your band and the Amazon Halo app on your phone. Health data is encrypted while being stored in the secure Amazon cloud. Tone and Body data is stored securely on your phone, including using available full disc encryption and other protections provided by your phone manufacturer, before being automatically deleted after processing. If you choose to create a voice profile and enroll in Tone, your voice profile will be stored in the app on your phone. It never goes to the cloud and no one ever hears it. You can delete the profile at any time by going to Settings > Profile > Voice ID > Delete Your Voice ID in the Amazon Halo app. We published an FAQ and whitepaper for customers who’d like to learn even more about Amazon Halo privacy features.
"
|
4102 | 2020
"Neuralink demonstrates its next-generation brain-machine interface | VentureBeat"
|
"https://venturebeat.com/ai/neuralink-demonstrates-its-next-generation-brain-machine-interface"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Neuralink demonstrates its next-generation brain-machine interface Share on Facebook Share on X Share on LinkedIn Neuralink's surgical robot concept.
Scientists at Elon Musk-backed Neuralink gave a progress update during a conference streamed online from the company’s headquarters in Fremont, California. This came just over a year after Neuralink, which was founded in 2016 with the goal of creating brain-machine interfaces, revealed its vision, software, and implantable hardware platform. Little of what was discussed today was surprising, but it provided assurances the pandemic hasn’t prevented Neuralink from moving toward its goals.
Readings from a pig’s brain were shown onscreen in a live demo. When the pig touched an object with its snout, neurons captured by Neuralink’s technology (which had been embedded in the pig’s brain two months prior) fired in a visualization on a television monitor. That isn’t novel in and of itself — Kernel and Paradromics are among the many outfits developing brain-reading neural chips — but Neuralink uniquely leverages flexible cellophane-like conductive wires inserted into tissue using a “sewing machine” surgical robot. Musk says Neuralink received a Breakthrough Device designation in July and is working with the U.S. Food and Drug Administration (FDA) on a future clinical trial for people with quadriplegia.
Neuralink founding team members Tim Hanson and Philip Sabes, who both hail from the University of California, San Francisco, pioneered the technology with University of California, Berkeley professor Michel Maharbiz. Musk calls the version demonstrated today “V2,” and it represents an improvement over what was shown last year. He’s confident it will someday be possible to embed it within a human brain in under an hour without using general anesthesia. He also says it will be easy to remove and leave no lasting damage, should a patient wish to upgrade or discard the interface.
V2 Neuralink collaborated with Woke Studios, a creative design consultancy based in San Francisco, to design the plastic casing (but not the technical components) of the robot sewing machine. The machine employs optical coherence tomography for real-time brain tracking and five axes of motion to access implant sites around a patient’s head, as well as a 150-micron gripper for grasping and releasing threads using a 40-micron needle.
Woke began working with Neuralink over a year ago on a behind-the-ear concept Neuralink presented in 2019, and the two companies reengaged shortly afterward for the surgical robot.
“The design process was a close collaboration between our design team at Woke Studios, the technologists at Neuralink, and prestigious surgical consultants who could advise on the procedure itself,” Woke head designer Afshin Mehin told VentureBeat via email. “Our role specifically was to take the existing technology that can perform the procedure and hold that against the advice from our medical advisors, as well as medical standards for this type of equipment, in order to create a nonintimidating robot that could perform the brain implantation.” The surgery consists of three parts: opening, inserting, and closing. A neurosurgeon takes care of opening, which involves creating an incision in the skin and removing a small piece of skull and any nearby dura membrane. Then the robot uses its cameras and sensors to insert the wires (or threads, as Neuralink calls them) into the brain, avoiding vasculature, to a depth of up to six millimeters. (Although the robot can physically insert deeper, doing so has not been tested, a Neuralink spokesperson told VentureBeat via email.) Finally, the surgeon secures the implant such that it replaces the piece of removed skull and closes.
The wires — which measure a quarter of the diameter of a human hair (4 to 6 μm) — link to a series of electrodes at different locations and depths. At maximum capacity, the machine can insert six threads containing 192 electrodes per minute.
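The quoted insertion rate implies some useful back-of-the-envelope numbers. A quick sketch of the arithmetic (my calculation from the rates quoted above, not figures Neuralink has published):

```python
# Quoted peak insertion rate of the surgical robot.
threads_per_minute = 6
electrodes_per_minute = 192

# Electrodes carried by each thread, implied by the two rates.
electrodes_per_thread = electrodes_per_minute // threads_per_minute
print(electrodes_per_thread)  # 32 electrodes per thread

# Time to place a full 1,024-electrode N1/Link array at peak speed,
# ignoring setup, imaging, and repositioning time.
minutes_for_implant = 1024 / electrodes_per_minute
print(round(minutes_for_implant, 1))  # ≈ 5.3 minutes
```

The roughly five-minute figure for thread placement alone is consistent with Musk's stated goal of completing the whole procedure in under an hour.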
A single-use bag attaches with magnets around the machine’s head to maintain sterility and allow for cleaning, and a table attached to a mechanical fixture ensures a patient’s skull remains in place during insertion. The machine’s “body” attaches to a base, which provides weighted support for the entire structure, concealing the other technologies that enable the system to operate.
Mehin danced around the question of whether the prototype would ever make its way to clinics or hospitals, but he noted that the design was intended for “broadscale” use. “As engineers, we know what’s possible and how to communicate the design needs in an understandable way. And likewise, Neuralink’s team is able to send over highly complex schematics that we can run with,” he said. “We imagine this is a design that could live outside of a laboratory and into any number of clinical settings.” The Link As Neuralink detailed last year, its first in-brain interface designed for trials — the N1, alternatively referred to as the “Link 0.9” — contains an ASIC, a thin film, and a hermetic substrate that can interface with upwards of 1,024 electrodes. Up to 10 N1/Link interfaces can be placed in a single brain hemisphere, optimally at least four in the brain’s motor areas and one in a somatic sensory area.
Musk says the interface is dramatically simplified compared with the concept shown in 2019. It no longer has to sit behind the ear, it’s now the size of a large coin (23 millimeters wide and 8 millimeters thick), and all the wiring the electrodes need connects within a centimeter of the device itself.
Above: Elon Musk holding a prototype neural chip.
During the demo, the pig with the implant — named Gertrude — playfully nuzzled her handlers in a pen adjacent to pens containing two other pigs, one of which had the chip installed and later removed. (The third pig served as a control and hadn’t had a chip implanted.) Pigs have a dura membrane and skull structure similar to that of humans, Musk explained, and they can be trained to walk on treadmills and perform other activities useful in experiments. This is why Neuralink chose them as the third animal species to receive its implants, after mice and monkeys.
Neuralink’s prototype can extract real-time information from many neurons at once, Musk reiterated during the stream. The electrodes relay detected neural pulses to a processor that is able to read information from up to thousands of channels, roughly 15 times better than current systems embedded in humans. It meets the baseline for scientific research and medical applications and is potentially superior to Belgian rival Imec’s Neuropixels technology, which can gather data from thousands of separate brain cells at once. Musk says Neuralink’s commercial system could include as many as 3,072 electrodes per array across 96 threads.
Above: AI predicts a pig’s limb motions from Neuralink implant data.
The interface contains inertial measurement sensors, pressure and temperature sensors, and a battery that lasts “all day” and inductively charges, along with analog pixels that amplify and filter neural signals before they’re converted into digital bits. (Neuralink asserts the analog pixels are at least 5 times smaller than the known state of the art.) One analog pixel can capture an entire neural signal at 20,000 samples per second with 10 bits of resolution, producing about 200Kbps of neural data per channel, or roughly 200Mbps across the 1,024 recorded channels.
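The sampling figures pin down the raw data rate. A quick check of the arithmetic, using only the numbers quoted above:

```python
# Per-channel sampling parameters quoted for one analog pixel.
samples_per_second = 20_000
bits_per_sample = 10
channels = 1_024

# Raw rate for one channel, then aggregated across the whole array.
per_channel_bps = samples_per_second * bits_per_sample
total_bps = per_channel_bps * channels

print(per_channel_bps)      # 200000 bps = 200 Kbps per channel
print(total_bps / 1e6)      # 204.8 Mbps across all 1,024 channels
```

That is, 200Mbps is roughly the aggregate rate for the full 1,024-channel array, not the rate of any single channel.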
Once the signals are amplified, they’re converted and digitized by on-chip analog-to-digital converters that directly characterize the shape of neuron pulses. According to Neuralink, it takes the N1/Link only 900 nanoseconds to compute incoming neural data.
Above: Neuralink’s N1/Link sensor, shown at Neuralink’s conference in 2019.
The N1/Link will pair wirelessly through the skin via Bluetooth to a smartphone up to 10 meters away. Neuralink claims the implants will eventually be configurable through an app and that patients might be able to control buttons and redirect outputs from the phone to a computer keyboard or mouse. In a prerecorded video played at today’s conference, the N1/Link was shown feeding signals to an algorithm that predicted the positions of all of a pig’s limbs with “high accuracy.” One of Neuralink’s loftier goals is to allow a tetraplegic to type at 40 words per minute. Eventually, Musk hopes Neuralink’s system will be used to create what he describes as a “digital super-intelligent [cognitive] layer” that enables humans to “merge” with artificially intelligent software. Millions of neurons could be influenced or written to with a single N1/Link sensor, he says.
Potential roadblocks High-resolution brain-machine interfaces are predictably complicated — they must be able to read neural activity to pick out which groups of neurons are performing which tasks. Implanted electrodes are well-suited to this, but hardware limitations have historically caused them to come into contact with more than one region of the brain or produce interfering scar tissue.
That has changed with the advent of fine biocompatible electrodes, which limit scarring and can target cell clusters with precision (though questions around durability remain). What hasn’t changed is a lack of understanding about certain neural processes.
Above: The N1/Link’s capabilities.
Rarely is activity isolated in brain regions, such as the prefrontal lobe and hippocampus. Instead, it takes place across various brain regions, making it difficult to pin down. Then there’s the matter of translating neural electrical impulses into machine-readable information — researchers have yet to crack the brain’s encoding. Pulses from the visual center aren’t like those produced when formulating speech, and it is sometimes difficult to identify signals’ origination points.
Neuralink will also need to convince regulators to approve its device for clinical trials. Brain-computer interfaces are considered medical devices that require specific consent from the FDA, which can be time-consuming and costly to obtain.
Perhaps anticipating this, Neuralink has expressed interest in opening its own animal testing facility in San Francisco (though a Neuralink spokesperson says the reports “aren’t correct”), and the company last month published a job listing for candidates with experience in phones and wearables. In 2019, Neuralink claimed it performed 19 surgeries on animals and successfully placed wires about 87% of the time.
The road ahead These hurdles haven’t discouraged Neuralink, which has over 90 employees and has received $158 million in funding, including at least $100 million from Musk. However, the challenges may have been exacerbated by what STAT News described as a “chaotic internal culture.” Responding to STAT, a Neuralink spokesperson said many of STAT’s findings were “either partially or completely false.” While Neuralink expects that inserting the electrodes will initially require drilling holes through the skull, it hopes to soon use a laser to pierce bone with a series of small holes, which might lay the groundwork for research into alleviating conditions like Parkinson’s and epilepsy and help physically disabled patients hear, speak, move, and see.
That’s less far-fetched than it might sound. Columbia University neuroscientists have successfully translated brain waves into recognizable speech. A team at the University of California, San Francisco built a virtual vocal tract capable of simulating human verbalization by tapping into the brain. In 2016, a brain implant allowed an amputee to move the individual fingers of a prosthetic hand with their thoughts. And experimental interfaces have allowed monkeys to control wheelchairs and type at 12 words a minute using only their minds.
“I think at launch, the technology is probably going to be … quite expensive. But the price will very rapidly drop,” Musk said. “Inclusive of surgery … we want to get the price down to a few thousand dollars, something like that. It should be possible to get it similar to Lasik [eye surgery].” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
4,103 | 2,020 |
"Is your small business ready for the new world of work? (VB Live) | VentureBeat"
|
"https://venturebeat.com/ai/is-your-small-business-ready-for-the-new-world-of-work-vb-live"
|
"VB Live

Is your small business ready for the new world of work? (VB Live)

Presented by DocuSign

Navigating today’s new world of work does not have to be complex and daunting. Learn about the industry-leading digital tools that enable seamless ways of doing business, modernize processes, and accelerate sales, even in challenging times, in this VB Live event.
Register here for free.
As the business world reckons with the COVID-19 pandemic, companies are finding that to ensure continuity and safety, and to stay competitive in a challenging time, they need to have the right processes and tools in place.
For many businesses, that means embracing an increasingly digital way of doing work, and modernizing processes and systems in order to move forward. But for small businesses, going all in on the next tech advance can be overwhelming, especially when cash, time and resources are at a premium.
However, not all tech advances will break the bank, and some are so affordable to invest in and simple and seamless to implement that they end up essentially paying for themselves, whether in time saved, bottom line boosted, or customer satisfaction improved — sometimes all three.
Here are five ways small businesses can embrace the new world of technology, to help meet the challenges of the new world of work.
Accounting software: With advances in automation, machine learning, cloud computing and more, accounting software has grown increasingly sophisticated. But it’s also incredibly accessible, available at myriad price points and feature levels. Most accounting software can keep you on top of your financial health from mobile, desktop, and tablets, letting you manage your finances, import documents, manage compliance, and even handle invoices, payroll, and billing. That cloud connectivity which lets you access your software from most devices also keeps your data secure.
Digital signatures: Somehow, pen-and-paper forms, faxes, and scanners are still used in businesses, despite being slow and cumbersome ways of conducting business. Digital signatures make creating agreements, sending them out, and getting them back significantly more efficient, and far more secure than paper methods. For more than 20 years, they’ve been proven to be reliable, and fast. And being able to sign things digitally, on any device, means protecting your customers and employees.
Customer Relationship Management (CRM) software: A CRM gives you a single source of truth for all of your business activity and helps you scale as you grow. It allows you to better track, manage, and serve your customers from one platform. You can manage all your contacts and track all your relationship and transactional information, including contracts and other documents in one place. It also offers data analysis to help you make data-driven decisions, lets you automate administrative tasks, report on sales activity and personalize communications — plus, most integrate with esignature technology.
Collaboration tools: In a world where remote work is quickly becoming the norm, or even when it’s just not possible to gather your team together, collaboration tools are becoming increasingly essential to efficient work. With a digital collaboration platform, your team can meet face-to-face to brainstorm, plan, and review, share documents and collaborate in real time with a single shared view, and stay on track with project management software that offers shared calendars, automated routing, alerts and approvals. Most platforms integrate other essential digital tools, like esignatures to speed up collaboration and improve productivity.
Digital marketing tools.
Online marketing isn’t for big companies — today’s platforms offer customized marketing strategies and tools to build websites that solidify your brand, capture leads and create integrated campaigns, without needing to hire expensive developers. Adding an ecommerce solution allows you to broaden the scope of your products and services as well. Ensure that the marketing solution you choose fits your needs now, but can be easily expanded to fit new marketing strategies as your business grows.
To learn more about the digital tools that help your business and your employees be more productive in the new normal, don’t miss this VB Live event.
Register for free here.
In this webinar, you’ll:
- Hear from industry experts from DocuSign, Stripe, and ChowNow
- Understand market trends impacting the speed of business transactions
- Learn how technology platforms are helping businesses shift to the new world of work
- Enable your business to use digital tools as a competitive advantage and key differentiator

Speakers:
- Jeanne DeWitt Grosser, Head of Revenue and Growth for the Americas, Stripe
- Dave Simon, Vice President, Small Business Commercial Sales, DocuSign
- Brent Kraus, SVP Sales & Restaurant Success, ChowNow
- Stewart Rogers, Moderator, VentureBeat

More speakers to be announced soon!
"
|
4,104 | 2,020 |
"IBM will use AI to pipe in simulated crowd noise during the U.S. Open | VentureBeat"
|
"https://venturebeat.com/ai/ibm-will-use-ai-to-pipe-in-simulated-crowd-noise-during-the-u-s-open"
|
"IBM will use AI to pipe in simulated crowd noise during the U.S. Open
As it has every year since around 2015, longtime U.S. Open sponsor and media partner IBM detailed the AI technologies it will use to support the tournament beginning on August 31. In a typical year, these technologies aren’t anything out of the ordinary. In 2019, IBM demoed AI that measures player performance and selects highlights, for example. But the pandemic introduced challenges that have required the company — along with countless internet service providers and utilities — to develop new solutions.
The first is AI Sounds, which aims to recreate the ambient noise normally emanating from the stadium. IBM says it leveraged its AI Highlights platform to digest video from last year’s U.S. Open and rank the “excitement level” of various clips, which it compiled into a reel and classified to give each a crowd reaction score. IBM used hundreds of hours of footage to extract crowd sounds, which it plans to make available to ESPN production teams, which will serve them dynamically based on play.
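The serving step described here — matching a moment in live play to a pre-scored crowd reaction — can be sketched at a high level. Everything below (clip names, scores, the selection rule) is invented for illustration; IBM has not published its actual implementation:

```python
# Hypothetical clip library: each entry pairs a crowd-audio file with the
# crowd reaction score (0-1) assigned to it during offline analysis.
CLIP_LIBRARY = [
    {"file": "polite_applause.wav", "excitement": 0.2},
    {"file": "strong_applause.wav", "excitement": 0.5},
    {"file": "crowd_roar.wav", "excitement": 0.9},
]

def pick_crowd_clip(play_excitement: float) -> str:
    """Return the clip whose stored score is closest to the live play's
    estimated excitement level."""
    best = min(CLIP_LIBRARY, key=lambda c: abs(c["excitement"] - play_excitement))
    return best["file"]

print(pick_crowd_clip(0.85))  # -> crowd_roar.wav
```

In a real production chain the excitement estimate itself would come from a model watching the match feed; here it is just a number passed in.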
How natural these AI-generated sounds will be remains a question. Some fans have taken issue with the artificiality of noises produced by platforms like Electronic Arts’ Sounds of the Stands, which simulates crowd sounds using technology borrowed from the publisher’s FIFA game series. The NBA has reportedly considered mixing in audio from NBA 2K during its broadcasts, and the NFL is expected to use artificial fan noise for its live games this year if they’re played in empty stadiums.
But even if AI Sounds is a dud, IBM hopes to engage fans with Watson Discovery, its text analytics and natural language processing service. The company says it will facilitate tennis debates between viewers on USOpen.org, starting with questions like “Is Billie Jean King the most influential tennis player in history?” As these debates occur, Watson Discovery will analyze “millions” of news and sports sources for insights to deliver pro and con arguments.
Watson Discovery will also provide “AI-powered” insights through USOpen.org and US Open smartphone apps ahead of each tennis match, courtesy of a feature called Match Insights. As its name suggests, Match Insights will search for and analyze articles, blogs, statistics, and more to gather relevant information about opponents and players and translate that information into narrative form.
IBM says AI Sounds and the Watson Discovery features were developed by IBM iX, its in-house digital design agency, in 75 days. (The United States Tennis Association made the decision to hold the U.S. Open without fans on June 17.) A team of IBM iX employees collaborated virtually with the United States Tennis Association digital team, cocreating the fan experiences to fast-track their deployment.
"
|
4,105 | 2,020 |
"How to tell if computer vision can transform your business | VentureBeat"
|
"https://venturebeat.com/ai/how-to-tell-if-computer-vision-can-transform-your-business"
|
"How to tell if computer vision can transform your business
During my career as an automotive engineer at Ford Motor Company in the early 2000s, world-class vision system technology was already routinely being used for various applications on programs and production. However, the automation around the analysis had not yet come full circle. Today, it has. And with the latest iterations of smartphones for facial recognition and other functionality, most anyone with a smartphone already has a device using computer vision.
Machine and computer vision are also being used in applications such as satellite geo-analytics, food safety and processing, agriculture operations, augmented reality, human emotion analysis, medical diagnostics, robotic guidance, quality control, transportation coordination, utilities, security surveillance, and more.
As the technology proliferates across new sectors, it will be critical for business leaders to assess how computer vision could impact the trajectory of their organization. So let’s take a deeper look at the real-world use cases emerging today. (For the sake of this discussion, I’ll use computer vision and machine vision interchangeably, even though there are slight differences.)

A quick primer

First, a quick look at where computer vision fits within the world of artificial intelligence: Deep learning is a sub-category of machine learning, which is a sub-category of artificial intelligence. Computer vision and machine vision usually rely on deep learning algorithms for proper weighting and training of visual data. Machine vision may also use a hybrid of various machine learning techniques to get to the desired level of output reasoning. From a business perspective, what you need to know is that this is one of the fastest growing sub-categories under the artificial intelligence umbrella for real-world applications.
Machine vision gives hardware the ability to observe and interpret its environment from what it “sees.” It requires input from vision systems, which have been around for decades within many industries.
Top-line growth is king, but bottom-line growth is the throne. With computer vision, you get both, and this is why we’re seeing such a positive market outlook for this technology. Studies conducted by various market research firms show the global market for computer vision for 2019 at around $10 billion and project we’ll see it grow to $14 – $33 billion by 2025. Regionally, Asia-Pacific and the United States are leading in global market share today.
Why you should care now: The market drivers

My wife, who wears contacts, told me this week, after her most recent reorder from 1-800 Contacts, that they now use the camera on your phone or laptop to assess prescription updates for your order. This is computer vision. A closer look at this case study reveals that 1-800 Contacts acquired Israel-based startup 6over6 Vision in December to bring this now-deployed computer vision technology to all of its customers. Without publicly published results, we can only imagine both the top-line impact (wouldn’t you prefer to order a commodity like contacts without needing to leave your home to see a doctor?) and the bottom-line impact this has for 1-800 Contacts’ core business. There are not many technologies that give an organization those kinds of results, with that kind of scale, that quickly.
In a separate example using the same core computer vision technology for energy utilities, I’ve seen case studies that show an 80% reduction in image recognition, classification, and response time for grid monitoring, while increasing the inspection coverage area on the grid by 15x. In addition to the day-to-day operational benefits this tech is having on the utility ecosystem, imagine the positive impact it has for states with increasingly complex and overwhelmed grid operations that need to be able to stand up to wildfires, flooding, power outages, and other key challenges.
Eyecare and utilities are completely different industries, yet computer vision applications are having significant impact on both. And those are just two examples from a growing number of case studies. In fact, we’re now seeing a big uptick in computer vision applications across many sectors. Why now? Because, when you look at the basic economics, the market drivers outweigh the technology barriers.

Market drivers for computer vision:
- Increasing inspection, safety, and security needs
- Need to optimize daily operation activities
- Need to scale automation to increase bottom line
- Decreasing cost to purchase and scale the technology
- Decreasing time to build world-class computer vision and machine intelligence applications
- Increased accuracy of computer vision and artificial intelligence algorithms
- Increased activity in the capital markets to resource the product development
- In a worldwide pandemic with increased remote work required, this technology provides a solution for workers to “see” what’s happening remotely
Barriers to adoption:
- Executive decision-makers face a learning gap when it comes to understanding the massive positive impact on their organizations
- Companies specializing in computer vision are still grappling with correct pricing and determining which business models will become standard
- It’s not a simple technology to build and scale properly; it requires highly skilled teams
- Development costs in the initial phases are still high enough to prevent massive development and adoption across the board for smaller businesses
- Real-time processing requires good communication connectivity
The potential

More exciting advancements in computer vision applications can also be seen coming from the Digital Health and Telehealth sectors. In a world where we are now experiencing new rules around human interaction that affect in-person medical check-ups, computer vision is a fantastic solution to help providers with various medical diagnostics and primary care options. It can also be seen in clinical diagnostics, such as interpretation of radiological imaging. In biotechnology, computer vision is starting to make its way into genomic and proteomic analysis. And in agriculture tech, companies such as John Deere have made acquisitions to get in front of the wave and leverage applications helping farmers assess crops and crop yields at a fraction of the time and cost. As this expands, it could have a huge impact on the world’s food supplies and mitigating shortages.
Finding a fit

If you and your organization don’t have a process for implementing innovation and transformation yet, there is no better time than now. There is a lot of information available around lean/agile development and design thinking constructs that help businesses with this exact thing. If you already have something in place, start small with a pilot and go from there. If you’re an executive in the organization, you should always consider which business units the innovation touches most and how to properly implement it within and across your business units. Digital transformation for an enterprise has more hurdles to address on the soft side of human management than it does with technology. In my experience, technology is never the constraint; the true driver of successful adoption is proper implementation for the culture of your organization and the culture you want to build. Your people and teams should learn to love it like all tools that make their lives better, but there is nuance in executing that for any executive team.
If you’re in ideation mode and exploring or brainstorming around where and how artificial intelligence applications can help drive new opportunities for your business, think of how nature works, since learning from nature gives you the best path of understanding application. The question to really ask yourself is, where can our senses — sight, touch, hearing, smell, and taste — be used to transform core business functions if combined with almost limitless computation? It’s safe to say that if sight is needed for any transformational move your company does or will need to do, you can find a way to streamline the automation of it via computer vision technology as a great point on your organization’s continuous innovation journey.
Adrian Walker specializes in artificial intelligence strategy and product development. Today, he is CEO of AIZA , a machine vision and autonomous systems company, and Managing Partner of Telescopic Ventures , an early-stage venture capital firm. Prior to that, he worked as an automotive engineer as well as a utility inspection quality engineer.
"
|
4,106 | 2,020 |
"How Kabbage processed $7 billion in Paycheck Protection Program loans with machine learning (VB Live) | VentureBeat"
|
"https://venturebeat.com/ai/how-kabbage-processed-7-billion-in-paycheck-protection-program-loans-with-machine-learning-vb-live"
|
"VB Live

How Kabbage processed $7 billion in Paycheck Protection Program loans with machine learning

Presented by Amazon Web Services

Machine learning is helping companies conquer pressing business challenges during the pandemic in unprecedented new ways. For real-world ML success stories, best practices, and key learnings, don’t miss this VB Live event with experts from Kabbage, Novetta, and Amazon Web Services.
Register here for free.
The smallest businesses have been among the hardest hit by the pandemic. Many relied on foot traffic, which became non-existent as soon as stay-at-home directives took hold. And the median small business with more than $10,000 in monthly expenses had only about two weeks of cash on hand at the end of March.
The Paycheck Protection Program (PPP), a loan offered by the Small Business Administration (SBA) to provide a direct incentive for small businesses to keep their workers on the payroll, offered the possibility of relief. However, with phenomenal demand, there was a lot of confusion at the outset and small business owners were scrambling to qualify.
“When we discovered that the government was going to be providing billions of dollars in relief to small businesses, we thought it was important to help them get it,” says Kathryn Petralia, co-founder and president of fintech company Kabbage. “We knew we could serve smaller businesses well, and we started running in that direction as fast as we could.” Over the course of the PPP sign-up period, Kabbage processed $7 billion in program loans. This meant providing support to nearly 300,000 small businesses and preserving an estimated 945,000 jobs at businesses from restaurants, gyms, and retail stores, to zoos, shrimp boats, and beekeepers.
It took only two weeks to build and implement this solution to benefit small businesses. The model training process began with humans reviewing documents to develop a training set that would help the model identify file types, the information needed for each identified file, and where to find it. When they first started processing the PPP applications, about 20% of their applications were fully automated, but by the time they started the second tranche, that number had grown to 80%.
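The pipeline described here — identify a document’s type, then pull the fields that type requires — can be illustrated with a toy, rule-based stand-in. The document types, keywords, and field names below are hypothetical; Kabbage’s actual models and schemas are not public:

```python
import re

# Toy stand-in for the described pipeline: classify a document, then
# extract the fields that document type requires. All rules are invented.
DOC_RULES = {
    "payroll_report": {"keywords": ["payroll", "gross pay"], "fields": ["total_payroll"]},
    "bank_statement": {"keywords": ["statement", "balance"], "fields": ["ending_balance"]},
}

def classify(text: str) -> str:
    """Return the first document type whose keywords all appear in the text."""
    lowered = text.lower()
    for doc_type, rule in DOC_RULES.items():
        if all(k in lowered for k in rule["keywords"]):
            return doc_type
    return "unknown"

def extract(text: str, doc_type: str) -> dict:
    """Pull the first dollar amount as a placeholder for the field of interest."""
    match = re.search(r"\$([\d,]+\.?\d*)", text)
    value = match.group(1) if match else None
    return {field: value for field in DOC_RULES.get(doc_type, {}).get("fields", [])}

doc = "Quarterly payroll report. Total gross pay: $84,210.55"
kind = classify(doc)
print(kind, extract(doc, kind))
```

In the real system, a trained model replaces both the keyword rules and the regex; the human-labeled training set mentioned above is what teaches it which fields live where in each file type.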
Interestingly, even though 100% of Kabbage’s PPP customers had a bank account, they couldn’t get a PPP loan through their bank, Petralia says. Not because the banks aren’t sympathetic, but simply because they didn’t have the means to process the number of loan applications coming through, and tended to prioritize the larger loans from larger companies.
By August 8, which is when the extension of the program ended, Kabbage had processed nearly 300,000 approved loans with the help of ML, making Kabbage the second largest issuer of PPP loans in the country.
“AWS technology enabled us to serve more customers who are more vulnerable because they were smaller and didn’t have access,” she says. “For every 790 employees at these major banks, we have one — and we surpassed the biggest bank in the nation by application volume. That really demonstrates the power of the automation and the technology.” Among the applicants was Kristy Kowal, a swimmer on the National Olympic Team for over 10 years. She holds eight American records, one World record, and won the silver medal in the 2000 Olympic games for the 200-Meter Breaststroke. She’s now an educator and athlete-development specialist, and was hit hard when COVID-19 resulted in pools around the country closing down. After spending more than two months trying to get relief from the Pandemic Unemployment Assistance (PUA) in California and the Employment Development Department (EDD), encountering roadblock after roadblock, she was finally able to complete the PPP loan process quickly with Kabbage’s automated solution.
Going forward, Petralia plans to bring this ML solution to their cash flow management platform for small businesses, including the checking account product they recently launched.
“There’s a lot we can do there to help businesses spend less money in overdraft fees and get better access to services and get access to their deposited funds more rapidly,” she says. “We can use the AWS machine learning to build the models that help manage the risk for the smallest of small businesses.” Join a round table with leaders from Kabbage and Novetta, as well as Michelle K. Lee, VP of the Amazon Machine Learning Solutions Lab, to learn more about the impact these machine learning solutions delivered and the lessons learned along the way.
Register here for free.
You’ll learn:
- How to get started on your AI/ML journey during these uncertain times
- How to adapt and leverage your existing ML expertise as new challenges arise
- How to avoid common pitfalls and apply lessons learned
- How to get the most out of AI/ML and the impact it can have on your business, and society, in increasingly uncertain times

Speakers:
- Michelle K. Lee, Vice President of the Amazon Machine Learning Solutions Lab, AWS
- David Cyprian, Product Owner, Novetta
- Kathryn Petralia, Co-founder and President, Kabbage
"
|
4,107 | 2,020 |
"Google researchers investigate how transfer learning works | VentureBeat"
|
"https://venturebeat.com/ai/google-researchers-investigate-how-transfer-learning-works"
|
"Google researchers investigate how transfer learning works
Transfer learning’s ability to store knowledge gained while solving a problem and apply it to a related problem has attracted considerable attention. But despite recent breakthroughs, no one fully understands what enables a successful transfer and which parts of algorithms are responsible for it.
That’s why Google researchers sought to develop analysis techniques tailored to explainability challenges in transfer learning. In a new paper , they say their contributions help clear up a few of the mysteries around why machine learning models transfer successfully — or fail to.
During the first of several experiments in the study, the researchers sourced images from a medical imaging data set of chest X-rays (CheXpert) and sketches, clip art, and paintings from the open source DomainNet corpus. The team partitioned each image into equal-sized blocks and shuffled the blocks randomly, disrupting the images’ visual features, after which they compared agreements and disagreements between models trained from pretraining versus from scratch.
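The block-shuffling intervention is straightforward to reproduce. A minimal pure-Python sketch of the idea (not the authors’ code, which operates on real image tensors) might look like this:

```python
import random

def shuffle_blocks(image, block, seed=0):
    """Partition a square image (a list of equal-length rows) into
    block x block tiles, shuffle the tiles, and reassemble.
    This destroys mid- and high-level visual structure while leaving
    low-level pixel statistics (the histogram of values) untouched."""
    h, w = len(image), len(image[0])
    assert h % block == 0 and w % block == 0
    tiles = []
    for r in range(0, h, block):
        for c in range(0, w, block):
            tiles.append([row[c:c + block] for row in image[r:r + block]])
    random.Random(seed).shuffle(tiles)
    per_row = w // block
    out = []
    for i in range(0, len(tiles), per_row):
        group = tiles[i:i + per_row]
        for j in range(block):
            out.append([v for tile in group for v in tile[j]])
    return out

img = [[r * 4 + c for c in range(4)] for r in range(4)]
shuffled = shuffle_blocks(img, 2)
```

Because shuffling only rearranges tiles, the distribution of pixel values — the low-level statistics the paper highlights — is exactly preserved.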
The researchers found the reuse of features — the individual measurable properties of a phenomenon being observed — is an important factor in successful transfers, but not the only one. Low-level statistics of the data that weren’t disturbed by things like shuffling the pixels also play a role. Moreover, any two instances of models trained from pretrained weights made similar mistakes, suggesting these models capture features in common.
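The similar-mistakes observation boils down to a simple agreement measure over model predictions. The sketch below uses made-up label and prediction arrays purely for illustration, not the study's data:

```python
import numpy as np

# Toy predictions from two model instances on six test inputs (made-up values).
labels  = np.array([0, 1, 1, 0, 1, 0])
model_a = np.array([0, 1, 0, 0, 1, 1])
model_b = np.array([0, 1, 0, 0, 1, 1])

# Fraction of inputs where the two models predict the same class.
agreement = float(np.mean(model_a == model_b))
# Fraction of inputs where both models are wrong at the same time.
shared_errors = float(np.mean((model_a != labels) & (model_b != labels)))

# Two instances trained from the same pretrained weights would show high
# agreement and many shared errors, hinting at features captured in common.
assert agreement == 1.0
assert abs(shared_errors - 2 / 6) < 1e-12
```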
Working from this knowledge, the researchers attempted to pinpoint where feature reuse occurs within models. They observed that features become more specialized in deeper layers and that feature reuse is more prevalent in layers closer to the input. (Deep learning models contain mathematical functions arranged in layers that transmit signals from input data.) The researchers also found it’s possible to fine-tune pretrained models on a target task earlier in training than originally assumed, without sacrificing accuracy.
“Our observation of low-level data statistics improving training speed could lead to better network initialization methods,” the researchers wrote. “Using these findings to improve transfer learning is of interest for future work.” A better understanding of transfer learning could yield substantial algorithmic performance gains. Google is using transfer learning in Google Translate so insights gleaned through training on high-resource languages — including French, German, and Spanish (which have billions of parallel examples) — can be applied to the translation of low-resource languages like Yoruba, Sindhi, and Hawaiian (which have only tens of thousands of examples). Another Google team has applied transfer learning techniques to enable robot control algorithms to learn how to manipulate objects faster with less data.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
4,108 | 2,020 |
"Facebook open-sources Opacus, a PyTorch library for differential privacy | VentureBeat"
|
"https://venturebeat.com/ai/facebook-open-sources-opacus-a-pytorch-library-for-differential-privacy"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Facebook open-sources Opacus, a PyTorch library for differential privacy Share on Facebook Share on X Share on LinkedIn A woman looks at the Facebook logo on an iPad in this photo illustration.
Facebook today open-sourced Opacus, a library for training PyTorch models with differential privacy that’s ostensibly more scalable than existing methods. With the release of Opacus, Facebook says it hopes to provide an easier path for engineers to adopt differential privacy in AI and to accelerate in-the-field differential privacy research.
Typically, differential privacy entails injecting a small amount of noise into the raw data before feeding it into a local machine learning model, thus making it difficult for malicious actors to extract the original files from the trained model. An algorithm can be considered differentially private if an observer seeing its output cannot tell if it used a particular individual’s information in the computation.
“Our goal with Opacus is to preserve the privacy of each training sample while limiting the impact on the accuracy of the final model. Opacus does this by modifying a standard PyTorch optimizer in order to enforce (and measure) differential privacy during training. More specifically, our approach is centered on differentially private stochastic gradient descent,” Facebook explained in a blog post. “The core idea behind this algorithm is that we can protect the privacy of a training dataset by intervening on the parameter gradients that the model uses to update its weights, rather than the data directly.” Opacus uniquely leverages hooks in PyTorch to achieve an “order of magnitude” speedup compared with existing libraries, according to Facebook. Moreover, it keeps track of how much of the “privacy budget” — a core mathematical concept in differential privacy — has been spent at any given point in time to enable real-time monitoring.
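Stripped of Opacus's engineering, the underlying differentially private SGD step can be sketched in plain NumPy: clip each sample's gradient, average, then add calibrated Gaussian noise. This is a generic illustration with assumed hyperparameters, not Opacus's actual implementation:

```python
import numpy as np

def dp_sgd_step(weights, per_sample_grads, clip_norm, noise_mult, lr, rng):
    """One differentially private SGD step: clip each sample's gradient to
    clip_norm, average, add calibrated Gaussian noise, then update weights."""
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_sample_grads]
    mean_grad = np.mean(clipped, axis=0)
    # Noise scale grows with the clipping bound and the noise multiplier,
    # and shrinks with the batch size.
    sigma = noise_mult * clip_norm / len(per_sample_grads)
    noisy_grad = mean_grad + rng.normal(0.0, sigma, size=mean_grad.shape)
    return weights - lr * noisy_grad

rng = np.random.default_rng(0)
w = np.zeros(2)
grads = [np.array([3.0, 4.0]), np.array([0.6, 0.8])]  # norms 5.0 and 1.0
# With noise_mult=0 the step reduces to plain clipped SGD.
w_new = dp_sgd_step(w, grads, clip_norm=1.0, noise_mult=0.0, lr=1.0, rng=rng)
assert np.allclose(w_new, [-0.6, -0.8])
```

Clipping bounds any single sample's influence on the update, which is what makes the privacy accounting possible; Opacus automates exactly this bookkeeping inside a standard PyTorch optimizer.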
Opacus also employs a cryptographically safe, pseudo-random, GPU-accelerated number generator for security-critical code, and it ships with tutorials and helper functions that warn about incompatible components. The library works behind the scenes with PyTorch, Facebook says, producing standard AI models that can be deployed as usual without extra steps.
“We hope that by developing PyTorch tools like Opacus, we’re democratizing access to such privacy-preserving resources,” Facebook wrote. “We’re bridging the divide between the security community and general machine learning engineers with a faster, more flexible platform using PyTorch.” The release of Opacus follows Google’s decision to open-source the differential privacy library used in some of its core products, such as Google Maps, as well as an experimental module for TensorFlow Privacy that enables assessments of the privacy properties of various machine learning classifiers. More recently, Microsoft released WhiteNoise, a platform-agnostic toolkit for differential privacy in Azure and in open source on GitHub.
"
|
4,109 | 2,020 |
"Cosmose AI raises $15 million to track in-store shoppers using smartphone data | VentureBeat"
|
"https://venturebeat.com/ai/cosmose-ai-raises-15-million-to-track-shoppers-in-store-behavior-using-smartphone-data"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cosmose AI raises $15 million to track in-store shoppers using smartphone data Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Location data analytics provider Cosmose AI today announced it raised $15 million in a funding round valuing the company at over $100 million. A spokesperson told VentureBeat the capital will be used to drive customer acquisition and product R&D.
Keeping apprised of shopping trends online is straightforward enough — whole categories of startups achieve this with modeling. But what about when that shopping takes place in-store? Despite (or perhaps because of) the pandemic, physical store brands see tracking the behavior of mall, outlet, and department store shoppers as critically important because of its potential to boost engagement (and sales).
To meet demand, in 2014, Miron Mironiuk founded Cosmose within a startup program hosted by venture capital firm Founders Factory. Brands like LVMH, Richemont, Walmart, L’Oreal, and Samsung already use the company’s tools to track visitors’ habits and target them with online ads via WeChat, Weibo, Facebook, Google, and over 100 other internet platforms.
Mironiuk says there’s been an uptick in interest, particularly in Asia, following the easing of pandemic-related lockdown restrictions. Earlier this year, the Cosmose platform gathered data from over 360,000 stores — including over 600 luxury and beauty brands and shopping malls in mainland China, Hong Kong, and Macau — to assess how brands might recover from the pandemic-related fall in foot traffic. More recently, Cosmose inked contracts with Marriott and Walmart, expanded its operations to Japan and Paris, and partnered with a “leading Japanese ecommerce company” to bring the Cosmose platform to new clientele.
Cosmose leverages a combination of machine learning tech and telemetry to deliver a “holistic view” of in-store shoppers. Using an agentless component that draws on data from over 400,000 social media, ride-sharing, and weather apps installed across more than a billion smartphones, Cosmose claims it’s able to pinpoint customers’ locations down to store aisles (about two meters) with 73% accuracy.
Above: The mobile Cosmose AI app.
An analytics dashboard lets Cosmose customers compare the performance of stores and analyze shoppers’ behavior through a number of lenses (e.g., brands and categories), with the goal of gleaning insights like the best location for a new store. Meanwhile, a predictive product — Cosmose Brain — anticipates when and where shoppers will go by correlating offline purchasing habits with online advertising and behavioral data, spotlighting customers most likely to convert at any moment.
All this helps Cosmose’s advertising orchestration dashboard, Cosmose Media, to show the impact of online ads on visits and measure conversions for every ad format, for each day and store. Cosmose Media also segments customers by their behaviors, like those who leave stores before making a purchase versus those who stop in a fitting room but don’t buy anything.
Cosmose says it doesn’t collect personally identifiable information like phone numbers, email addresses, or serial numbers. Instead, it uses anonymized data from data providers operating under local laws such as the EU’s General Data Protection Regulation. Cosmose says it generates a nine-digit number called an OMNIcookie for tracking a minimum of 100 smartphones offline and claims that shoppers can opt out of tracking by filling out a form on its website.
By 2022, Cosmose, which recently opened an office in Poland, aims to expand its ecosystem to over 2 billion smartphones and 10 million stores across Asia. Later this year, the company plans to launch products in Southeast Asia, the Middle East, and India, which it believes will propel it to profitability sometime in 2021.
Cosmose’s series A round was led by Tiga Investments with participation from OTB Ventures, TDJ Pitango, and a number of “ultra-high net worth” individuals in Asia. It follows a $12 million seed round in September 2019, bringing the startup’s total raised to date to $27 million.
"
|
4,110 | 2,020 |
"Apple launches AI/ML residency program to attract niche experts | VentureBeat"
|
"https://venturebeat.com/ai/apple-launches-ai-ml-residency-program-to-attract-niche-experts"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Apple launches AI/ML residency program to attract niche experts Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
As Apple’s artificial intelligence and machine learning initiatives continue to expand, its interest in attracting talent has grown — a theme that’s barely under the surface of the company’s occasionally updated Machine Learning Research blog. Now Apple is openly seeking to recruit U.S. and European candidates with niche expertise for a yearlong AI/ML residency program, promising immersion and mentoring that will advance their careers.
The goal is apparently to find people whose interests aren’t necessarily AI/ML specific, then give them the knowledge and tools to apply machine learning and deep learning to their disciplines — a process that will widen Apple’s ability to solve users’ problems in those disciplines. Apple says its ideal candidates would come from fields such as cognitive science, psychology, physics, robotics, public health, or computer graphics, but in any case should have programming proficiency and either a graduate degree or equivalent industry experience.
Residencies are currently being offered in Cupertino, California; Seattle, Washington; Cambridge, U.K.; Zurich, and “various locations within Germany” for a summer 2021 start date, with assignment descriptions that vary between locations. In Cambridge, candidates are explicitly offered the chance to work on Apple’s personal assistant Siri, while people located in other cities will apparently have opportunities in other AI/ML fields. Each residency will kick off with machine and deep learning coursework before moving into machine learning projects, potentially with the subsequent opportunity to present the work at an academic conference.
Like several of its tech giant peers, Apple has invested significantly in improving the performance of its software and services using AI and ML. While Microsoft has largely wound down consumer access to the digital personal assistant Cortana, Apple has continued to augment its rival service Siri and conspicuously added ML features to its iPhone cameras, the Apple Watch, and iOS and Mac photo software. Some of the features, including the iPhone camera detail-enhancing Deep Fusion, have received widespread praise, while Siri and others have continued to be problematic.
Apple is far from the only technology company with an AI residency program, but has lagged behind rivals such as Google, IBM, Microsoft, and Nvidia in offering one.
Facebook canceled its 2020-2021 AI residency program in May, blaming the COVID-19 pandemic while noting the experience wouldn’t be successful in a remote format.
"
|
4,111 | 2,020 |
"Allen Institute open-sources AllenAct, a framework for research in embodied AI | VentureBeat"
|
"https://venturebeat.com/ai/allen-institute-open-sources-allenact-a-framework-for-research-in-embodied-ai"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Allen Institute open-sources AllenAct, a framework for research in embodied AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Researchers at the Allen Institute for AI today launched AllenAct, a platform intended to promote reproducible research in embodied AI with a focus on modularity and flexibility. AllenAct, which is available in beta, supports multiple training environments and algorithms with tutorials, pretrained models, and out-of-the-box real-time visualizations.
Embodied AI, the AI subdomain concerning systems that learn to complete tasks through environmental interactions, has experienced substantial growth. That’s thanks in part to the advent of techniques like deep reinforcement learning and innovations in computer vision, natural language processing, and robotics. The Allen Institute argues that this growth has been mostly beneficial, but it takes issue with the fragmented nature of embodied AI development tools, which it says discourages good science.
In a recent analysis, the Allen Institute found that the number of embodied AI papers now exceeds 160 (up from around 20 in 2018 and 60 in 2019) and that the number of environments, tasks, modalities, and algorithms varies widely among them. For instance, 10% of papers list 6 modalities, while 60% test against just 1. Meanwhile, 10% of papers address 4 benchmark tasks, while 20% only cover 2.
Above: Growth and fragmentation of embodied AI.
“Just as we now expect neural architectures to be evaluated across multiple data sets, we must also start evaluating embodied AI methods across tasks and data sets … It is crucial to understand what components of systems matter most and which do not matter at all,” Allen Institute researchers wrote in a blog post today. “But getting up to speed with embodied AI algorithms takes significantly longer than ramping up to classical tasks … And embodied AI is expensive [because] today’s state-of-the art reinforcement learning methods are sample-inefficient and training competitive models for embodied tasks can cost tens of thousands of dollars.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! AllenAct aims to address challenges around embodied AI data replication, ramp-up time, and training costs by decoupling tasks and environments and ensuring compatibility with specialized algorithms that involve sequences of training routines. It ships with detailed startup guides and code and models for a number of standard embodied AI tasks, as well as support for embodied AI scenarios and so-called grid-worlds like MiniGrid. AllenAct’s visualizations integrate with TensorBoard , an analysis module for Google’s TensorFlow machine learning framework. And the Allen Institute claims AllenAct is one of the few reinforcement learning frameworks to target Facebook’s PyTorch.
Above: Some of the embodied AI environments actively used in research today.
“Just as the early deep learning libraries like Caffe and Theano, and numerous online tutorials, lowered entry barriers and ushered in a new wave of researchers towards deep learning, embodied AI can benefit from modularized coding frameworks, comprehensive tutorials, and ample startup code,” the researchers wrote. “We welcome and encourage contributions to AllenAct’s core functionalities as well as the addition of new environments, tasks, models, and pre-trained model weights. Our goal in releasing AllenAct is to make embodied AI more accessible and encourage thorough, reproducible research.” AllenAct is open source and freely available under the MIT License.
The release of AllenAct comes after the Allen Institute encountered embodied AI research roadblocks arising from the pandemic. The institute had planned to launch the RoboTHOR challenge earlier this year, which would have involved deploying navigation algorithms in a robot — the LocoBot — and running it through a physical environment at the nonprofit’s labs. But with all Allen Institute employees working from home, the team couldn’t run experiments on LocoBot for the foreseeable future and decided to pare the challenge down to simulated scenes only.
"
|
4,112 | 2,020 |
"AI Weekly: Facebook's discriminatory ad targeting illustrates the dangers of biased algorithms | VentureBeat"
|
"https://venturebeat.com/ai/ai-weekly-facebooks-discriminatory-ad-targeting-illustrates-the-dangers-of-biased-algorithms"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: Facebook’s discriminatory ad targeting illustrates the dangers of biased algorithms Share on Facebook Share on X Share on LinkedIn A woman looks at the Facebook logo on an iPad in this photo illustration.
Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
This summer has been littered with stories about algorithms gone awry. For one example, a recent study found evidence Facebook’s ad platform may discriminate against certain demographic groups. The team of coauthors from Carnegie Mellon University say the biases exacerbate socioeconomic inequalities, an insight applicable to a broad swath of algorithmic decision-making.
Facebook, of course, is no stranger to controversy where biased, discriminatory, and prejudicial algorithmic decision-making is concerned. There’s evidence that objectionable content regularly slips through Facebook’s filters, and a recent NBC investigation revealed that on Instagram in the U.S. last year, Black users were about 50% more likely to have their accounts disabled by automated moderation systems than those whose activity indicated they were white. Civil rights groups claim that Facebook fails to enforce its hate speech policies, and a July civil rights audit of Facebook’s practices found the company failed to enforce its voter suppression policies against President Donald Trump.
In their audit of Facebook, the Carnegie Mellon researchers tapped the platform’s Ad Library API to get data about ad circulation among different users. Between October 2019 and May 2020, they collected over 141,063 advertisements displayed in the U.S., which they ran through algorithms that classified the ads according to categories regulated by law or policy — for example, “housing,” “employment,” “credit,” and “political.” Post-classification, the researchers analyzed the ad distributions for the presence of bias, yielding a per-demographic statistical breakdown.
The research couldn’t be timelier given recent high-profile illustrations of AI’s proclivity to discriminate. As was spotlighted in the previous edition of AI Weekly, the UK’s Office of Qualifications and Examinations Regulation used — and then was forced to walk back — an algorithm to estimate school grades following the cancellation of A-levels, exams that have an outsize impact on which universities students attend. (Prime Minister Boris Johnson called it a “mutant algorithm.”) Drawing on data like the ranking of students within a school and a school’s historical performance, the model lowered 40% of results from teachers’ estimations and disproportionately benefited students at private schools.
Elsewhere, in early August, the British Home Office was challenged over its use of an algorithm designed to streamline visa applications.
The Joint Council for the Welfare of Immigrants alleges that feeding past bias and discrimination into the system reinforced future bias and discrimination against applicants from certain countries. Meanwhile, in California, the city of Santa Cruz in June became the first in the U.S. to ban predictive policing systems over concerns the systems discriminate against people of color.
Facebook’s display ad algorithms are perhaps more innocuous, but they’re no less worthy of scrutiny considering the stereotypes and biases they might perpetuate. Moreover, if they allow the targeting of housing, employment, or opportunities by age and gender, they could be in violation of the U.S. Equal Credit Opportunity Act, the Civil Rights Act of 1964, and related equality statutes.
It wouldn’t be the first time. In March 2019, the U.S. Department of Housing and Urban Development filed suit against Facebook for allegedly “discriminating against people based upon who they are and where they live,” in violation of the Fair Housing Act. When questioned about the allegations during a Capitol Hill hearing last October, CEO Mark Zuckerberg said that “people shouldn’t be discriminated against on any of our services,” pointing to newly implemented restrictions on age, ZIP code, and gender ad targeting.
The results of the Carnegie Mellon study show evidence of discrimination on the part of Facebook, advertisers, or both against particular groups of users. As the coauthors point out, although Facebook limits the direct targeting options for housing, employment, or credit ads, it relies on advertisers to self-disclose if their ad falls into one of these categories, leaving the door open to exploitation.
Ads related to credit cards, loans, and insurance were disproportionately sent to men (57.9% versus 42.1%), according to the researchers, in spite of the fact that more women than men use Facebook in the U.S. and that women on average have slightly stronger credit scores than men.
Employment and housing ads were a different story. Approximately 64.8% of employment and 73.5% of housing ads the researchers surveyed were shown to a greater proportion of women than men, who saw 35.2% of employment and 26.5% of housing ads, respectively.
Users who chose not to identify their gender or labeled themselves nonbinary/transgender were rarely — if ever — shown credit ads of any type, the researchers found. In fact, across every category of ad including employment and housing, they made up only around 1% of users shown ads — perhaps because Facebook lumps nonbinary/transgender users into a nebulous “unknown” identity category.
Facebook ads also tended to discriminate along the age and education dimension, the researchers say. More housing ads (35.9%) were shown to users aged 25 to 34 years compared with users in all other age groups, with trends in the distribution indicating that the groups most likely to have graduated college and entered the labor market saw the ads more often.
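Breakdowns like these reduce to share computations over impression counts. The counts below are hypothetical, chosen only to reproduce the reported splits:

```python
# Hypothetical impression counts (not the study's raw data).
impressions = {
    "credit":  {"men": 579, "women": 421},
    "housing": {"men": 265, "women": 735},
}

def shares(counts):
    """Percentage of impressions shown to each group, rounded to 0.1%."""
    total = sum(counts.values())
    return {group: round(100 * n / total, 1) for group, n in counts.items()}

# Matches the splits reported in the article for credit and housing ads.
assert shares(impressions["credit"]) == {"men": 57.9, "women": 42.1}
assert shares(impressions["housing"]) == {"men": 26.5, "women": 73.5}
```

Comparing such shares against each group's base rate on the platform is what lets the researchers flag a distribution as disproportionate rather than merely unequal.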
The research allows for the possibility that Facebook is selective about the ads it includes in its API and that other ads corrected for distribution biases. Many previous studies have established Facebook’s ad practices are at best problematic.
(Facebook claims its written policies ban discrimination and that it uses automated controls — introduced as part of the 2019 settlement — to limit when and how advertisers target ads based on age, gender, and other attributes.) But the coauthors say their intention was to start a discussion about when disproportionate ad distribution is irrelevant and when it might be harmful.
“Algorithms predict the future behavior of individuals using imperfect data that they have from past behavior of other individuals who belong to the same sociocultural group,” the coauthors wrote. “Our findings indicated that digital platforms cannot simply, as they have done, tell advertisers not to use demographic targeting if their ads are for housing, employment or credit. Instead, advertising must [be] actively monitored. In addition, platform operators must implement mechanisms that actually prevent advertisers from violating norms and policies in the first place.”

Greater oversight might be the best remedy for systems susceptible to bias. Companies like Google , Amazon , IBM , and Microsoft ; entrepreneurs like Sam Altman ; and even the Vatican recognize this — they’ve called for clarity around certain forms of AI, like facial recognition. Some governing bodies have begun to take steps in the right direction, like the EU, which earlier this year floated rules focused on transparency and oversight. But it’s clear from developments over the past months that much work remains to be done.
For years, some U.S. courts used algorithms known to produce unfair, race-based predictions more likely to label African American inmates at risk of recidivism. A Black man was arrested in Detroit for a crime he didn’t commit as the result of a facial recognition system. And for 70 years, American transportation planners used a flawed model that overestimated the amount of traffic roadways would actually see, resulting in potentially devastating disruptions to disenfranchised communities.
Facebook has had enough reported problems, internally and externally , around race to merit a harder, more skeptical look at its ad policies. But it’s far from the only guilty party. The list goes on, and the urgency to take active measures to fix these problems has never been greater.
For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.
Thanks for reading,

Kyle Wiggers

AI Staff Writer