id (int64, 0-17.2k) | year (int64, 2k-2.02k) | title (stringlengths 7-208) | url (stringlengths 20-263) | text (stringlengths 852-324k) |
---|---|---|---|---|
3413 | 2022 | "Data intensity: The key to a data-driven future | VentureBeat" | "https://venturebeat.com/datadecisionmakers/data-intensity-the-key-to-a-data-driven-future" |
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Data intensity: The key to a data-driven future Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
What does the data-driven future look like? It will consist of systems that are:
- Highly automated, using data to make trusted, fair, split-second decisions.
- Personalized and situationally aware, catering to user needs.
- Able to address data movement, geographic distribution, governance, privacy and security.
- Decentralized, addressing data ownership and working in tandem with centralized systems to allow sharing of data for the greater good.
But we don’t need to wait for all of that.
The data-driven future is already here.
An autonomous vehicle is an intensely data-driven system, sensing its environment in real time and translating that into vehicle operations. At a level below full autonomy, assistive technologies are also data-driven, relying on real-time data to produce insight (e.g., a blind-spot detection system sending an alert) or to decide when to engage anti-lock brakes and crash-avoidance systems.
Enabling such applications and use cases to become more data-driven is a journey: it requires addressing complexity and adopting new approaches that let you manage systems as they grow in maturity and sophistication. To assess digital maturity and resilience and level up your data-driven business, think in terms of data intensity.
Data intensity is multi-variable and changes sharply as you move in more than one dimension. The data-intensity of an application depends on data volume, query complexity, query latency, data ingest speed and user concurrency. Additional dimensions might include hybrid workloads (transactional and analytics), multi-modal analytics (operational analytics, machine learning, search, batch and real-time), elasticity, data movement requirements and so on.
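To make the multi-dimensional idea concrete, here is a toy sketch of how an application's data intensity could be scored across a few of the dimensions listed above. The dimensions chosen, the weights and the normalization baselines are all illustrative assumptions, not a formula from the article.

```python
# Toy data-intensity score. Every dimension, weight and normalization
# constant below is an illustrative assumption, not a published formula.
from dataclasses import dataclass

@dataclass
class WorkloadProfile:
    data_volume_tb: float        # total data under management
    ingest_gb_per_min: float     # how fast new data arrives
    p99_query_latency_ms: float  # latency SLA; tighter targets mean more intensity
    concurrent_users: int        # simultaneous query users
    query_complexity: float      # 1 = key lookup ... 10 = multi-way joins and aggregations

def data_intensity(p: WorkloadProfile) -> float:
    """Combine the dimensions into a single relative score (higher = more intense)."""
    return round(
        0.25 * (p.data_volume_tb / 100)                   # 100 TB baseline
        + 0.25 * (p.ingest_gb_per_min / 10)               # 10 GB/min baseline
        + 0.20 * (100 / max(p.p99_query_latency_ms, 1))   # tighter SLA -> higher score
        + 0.15 * (p.concurrent_users / 1_000)             # 1,000-user baseline
        + 0.15 * p.query_complexity,
        2,
    )

# A real-time operational analytics workload scores far above a nightly batch report.
realtime = WorkloadProfile(200, 25, 50, 5_000, 7)
nightly_batch = WorkloadProfile(200, 1, 60_000, 20, 5)
print(data_intensity(realtime), data_intensity(nightly_batch))
```

Even with this additive toy, the real-time workload dwarfs the nightly batch job; the article's point is that real intensity grows faster still when several dimensions move at once.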
Data intensity is increasing
Data intensity isn't just about data volume; it's about what you do with your data. As data volumes increase, however, intensity grows. It ramps up sharply when the data also arrives faster, or when an application must handle 10 times more users while meeting the same (or better) latency SLAs. Intensity also increases sharply when real-time analysis of operational data is combined with natural language interaction and recommendations.
We live in a data-intensive era, and intensity is growing as organizations increase their reliance on data to better understand their customers and shape experiences. How your organization responds in the data-intensive era can either add more complexity and friction for you and your customers — or it can provide you with new opportunities for differentiation and growth.
Choosing an approach that leads to greater complexity and friction is clearly counterproductive. Yet historically, many organizations have worked from the assumption that different workloads require different architectures and technologies, and that transactional and analytical workloads have to be separate. Managing data intensity in this environment creates inherent complexity, friction and data movement that add latency and work against real-time insights.
Fortunately, you now have the chance to revisit and challenge traditional assumptions to embrace, enable and get the greatest benefit from the data-intensive era. You can leverage cloud computing, which delivers unprecedented scale and flexibility and the opportunity for organizations to innovate and experiment; separation of storage and compute, which disentangles storage and compute requirements; and modern solutions that combine transactional and analytical workloads in a single engine for all workloads.
In a data-driven organization, the day-to-day business operations, analytic insights from the operations and customer experiences become one – in real time. That is intense: data-intense.
Oliver Schabenberger is the chief innovation officer at SingleStore.
"
|
3414 | 2022 | "Bam! AI exits the Batcave to confront the jobs market | VentureBeat" | "https://venturebeat.com/datadecisionmakers/bam-ai-exits-the-batcave-to-confront-the-jobs-market" |
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Bam! AI exits the Batcave to confront the jobs market Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
What is the impact of artificial intelligence (AI) in an economy that has been whipsawed by worker resignations on one hand, and layoffs and hiring freezes on the other? As I think about this one-two punch, Batman comes to mind. Hear me out.
For years, AI-driven automation has been seen as a potential job killer. The thinking: robots and drones would replace the hands-on work of builders and doers. We’re getting a glimpse of this with driverless cars and automated factories.
But it’s possible that AI could have the opposite effect and drive demand for skilled workers in new jobs. In this scenario, rote administrative work might indeed give way to algorithmic processes, but new opportunities are created for workers in data-intensive businesses.
So, which is it? Does AI take jobs or create jobs? Back to Batman. A perennial favorite in popular culture, Batman is sometimes viewed warily by those who may not fully understand him. He's both the Dark Knight and a force for good.
AI has its own duality, the dark side being its stereotyped reputation as a job killer. But because AI can spur new-style jobs while also driving efficiencies, the business world, like Gotham City, will be a better place.
I’m convinced that AI will be a net positive for today’s workforce, as well as for businesses that are trying to strike the right balance in a global economy that rewards operational efficiency yet punishes those unable to attract and retain talent.
AI-driven processes and applications push both of those levers. They can increase business productivity while also establishing high-value jobs. Those do not have to be competing interests, nor should they.
The productivity boost — as much as 40%, according to Accenture and Frontier Economics — comes in the form of automation.
At the same time, the thing that AI does really well is provide the underpinnings for a data-driven business environment. This is where job creation or what we might call “job metamorphosis” happens, as even entry-level workers take a bigger role in the data value chain. Instead of work that depends on monotonous routines, or is arduous or even dangerous, AI can free up people to focus on tasks that engage their human ingenuity.
These data-driven, AI-enabled jobs are the ones that will attract and retain a modern workforce, and there are many ways to do it. I worked with a company that had a cadre of employees whose jobs entailed creating routine marketing reports each week by hand-compiling incoming data from the company’s multitude of regions and business units. It was repetitive, assembly-line work, without much of a career path.
The company replaced that workbench approach with an AI-driven, self-service model that gave business units more flexibility to do their own data crunching. That freed up the reports team to pursue more innovative analysis and intellectually engaging projects. In the process, the company was able to trim costs by reducing its dependency on outside agencies it had relied on for the deep insights that the in-house reports team now had time for.
A force for good
It's understandable that people may not be 100% comfortable about the impact that AI can have on jobs. We've heard the dystopian predictions — disappearing jobs, AI bias, even our inability to "trust" AI.
To take the Dark Knight analogy one step further, if CEOs had a Batphone on their desk during the Great Resignation , many would have called for help. As recently as May, there were 11.3 million unfilled jobs in the United States, according to the Bureau of Labor Statistics.
Notably, it was the COVID pandemic, not AI, that caused the jobs crisis. But AI is now viewed by many business leaders as a potential solution to all those unfillable jobs. When talent is hard to find, workplace efficiency becomes a necessity. And AI excels at that.
The World Economic Forum’s Future of Jobs Report 2020 shows both sides of this long-term trend. It forecasts that 85 million jobs will be displaced by automation by 2025. At the same time, 97 million “jobs of tomorrow” will be created, resulting in a net gain of 12 million jobs.
Let's pave career paths on the data continuum
The types of jobs that involve data know-how are fast expanding. We see evidence of this every day across industries, including automotive, financial services and manufacturing. Even farms are using sensors and data intelligence to grow corn and soybeans.
At the center of the activity are data practitioners — data scientists, data engineers, data architects and business analysts. Increasingly, however, even workers who may not have four-year college degrees are becoming part of the data continuum. For example, in a data-first retailing operation where all employees have access to the data and are encouraged to contribute, a sales associate in a sports store may note growing interest in a new style of running shoe, feeding that input into the enterprise-wide system.
The career path for these workers can be enriched and made more valuable, benefiting employers and employees alike, when data touchpoints are among the job responsibilities.
As more businesses move in this direction, it’s important to understand that the objective is not simply to accumulate and process more data. Many organizations already have more data than they can manage, and it just keeps growing. AI, by sifting through mountains of data, can empower humans to act upon business-expanding insights.
The key to success is creating actionable data, and CEOs don’t need a cape to do that. It starts with a data-driven, AI-enabled culture that includes the entire workforce.
Florian Douetteau is cofounder and CEO of Dataiku.
"
|
3415 | 2022 | "Afraid to delete data? Think again | VentureBeat" | "https://venturebeat.com/datadecisionmakers/afraid-to-delete-data-think-again" |
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Afraid to delete data? Think again Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Data is a valuable corporate asset, which is why many organizations have a strategy of never deleting any of it. Yet as data volumes continue to grow, keeping all data around can get very expensive. An estimated 30% of data stored by organizations is redundant, obsolete or trivial (ROT), while a study from Splunk found that 60% of organizations say that half or more of their data is dark — which means its value is unknown.
Some obsolete data may pose a risk as companies are dealing with the increasing threats of ransomware and cyberattacks; this data may be underprotected and valuable to hackers. Adding to that, internal policies or industry regulations may require that organizations delete data after a certain period – such as ex-employee data, financial data or PII data.
Another issue with storing large amounts of obsolete data is that it clutters file servers, draining productivity. A 2021 survey by Wakefield Research found that 54% of U.S. office professionals agreed that they spend more time searching for documents and files than responding to emails and messages.
Being responsible stewards of the enterprise IT budget means that every file must earn its keep, down to the last byte. It also means that data should not be prematurely deleted if it has value. A responsible deletion strategy must be executed in stages: inactive cold data should consume less expensive storage and backup resources, and when data becomes obsolete, there should be a methodical way to confine and delete it. The question is how to create an efficient data deletion process that identifies, finds and deletes data in a systematic way.
Barriers to data deletion
Cultural: We are all data hoarders by nature, and without some analytics to help us understand what data has truly become obsolete, it's hard to change an organizational mindset of retaining all data forever. That mindset, unfortunately, is no longer sustainable, given the astronomical growth of unstructured data in recent years, from genomics and medical imaging to streaming video, electric cars and IoT products. While deleting data that has no present or potential future purpose is not data loss, most storage admins have suffered the ire of users who inadvertently deleted files and then blamed IT.
Legal/regulatory: Some data must be retained for a given term, although usually not forever. In other cases, data can only be held for a given time according to corporate policy, such as PII. How do you know which data is governed by which rule, and how do you prove you are complying?
Lack of systematic tools to understand data usage: Manually figuring out what data has become obsolete, and getting users to act on it, is tedious and time-consuming, and hence it never gets done.
Tips for data deletion
Create a well-defined data management policy
Developing a sustainable data lifecycle management policy requires the right analytics. You'll want to understand data usage to identify what can be deleted based on data type (such as interim data) and on data use (such as data that hasn't been touched in a long time). This also helps gain buy-in from business users, because deletion is based on objective criteria rather than a subjective decision.
With this knowledge, you can map out how data will transition over time: from primary storage to cooler tiers, possibly in the cloud, to archive storage, then confined out of the user space in a hidden location and, finally, deletion.
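As a rough illustration of that progression, the sketch below classifies files into tiers by last-access age. The thresholds, tier names and reliance on filesystem access times are assumptions for the sake of the example, not a prescription from the article or any particular product.

```python
# Minimal sketch of an age-based data lifecycle policy. The thresholds,
# tier names and use of last-access time are illustrative assumptions.
import os
import time
from collections import Counter
from pathlib import Path

POLICY = [                 # (max age in days, tier)
    (90,   "primary"),     # actively used data stays on fast storage
    (365,  "cool"),        # inactive data moves to cheaper storage, possibly in the cloud
    (1095, "archive"),     # long-term retention on an archival tier
    (1460, "confined"),    # hidden from users; grace period before deletion
]
FINAL_TIER = "delete"      # past the last threshold, data is expired

def tier_for(path: Path, now: float | None = None) -> str:
    """Return the lifecycle tier for a file based on its last-access age."""
    now = now or time.time()
    age_days = (now - os.stat(path).st_atime) / 86_400
    for max_age_days, tier in POLICY:
        if age_days <= max_age_days:
            return tier
    return FINAL_TIER

def summarize(root: str) -> Counter:
    """Report how much of a directory tree falls into each tier under this policy."""
    return Counter(tier_for(p) for p in Path(root).rglob("*") if p.is_file())
```

A summary like this also supplies the objective usage numbers that make it easier to get business-user buy-in for each stage of the policy.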
Considerations that may impact the policy include regulations, potential long-term value of data and the cost of storage and backups at every stage from primary to archive storage. These decisions can have enormous consequences if, say, datasets are deleted and then later needed for analytics or forecasting.
Develop a communications plan for users and stakeholders
For a given workload or dataset, data owners should understand the cost versus benefits of retaining data. Ideally, the data lifecycle policy is agreed upon by all stakeholders, if not dictated by an industry regulation. Communicate the analytics on data usage and the policy with stakeholders to ensure they understand when data will expire and whether there is a grace period during which data is held in a confined or "undeleted" container. Confinement makes it easier for users to agree to data deletion workflows when they realize that, if they need the data, they can "unconfine" it within the grace period and get it back.
For long-term data that must be retained, ensure users understand the cost and any extra steps required to access data from deep archival storage. For example, data committed to AWS Glacier Deep Archive may take several hours to access. Egress fees will often apply.
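For instance, pulling an object back from S3 Glacier Deep Archive is an explicit, asynchronous restore request rather than a normal read; the sketch below uses boto3 with placeholder bucket and key names.

```python
# Restoring an object from S3 Glacier Deep Archive is an explicit,
# asynchronous request. Bucket and key names here are placeholders.
import boto3

s3 = boto3.client("s3")

s3.restore_object(
    Bucket="example-archive-bucket",
    Key="finance/2015/q3-report.parquet",
    RestoreRequest={
        "Days": 7,  # how long the restored copy stays available
        # Standard tier completes within ~12 hours for Deep Archive;
        # "Bulk" is cheaper but can take up to ~48 hours.
        "GlacierJobParameters": {"Tier": "Standard"},
    },
)

# The Restore header on the object shows whether the temporary copy is ready.
head = s3.head_object(Bucket="example-archive-bucket", Key="finance/2015/q3-report.parquet")
print(head.get("Restore", "restore not requested"))
```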
Plan for technical issues that may arise
Deleting data is not a zero-cost operation. We usually think only of read/write speeds, but deletion consumes system performance as well. Take this example from a theme park: 100K photos of guests are captured per day and retained for up to 30 days after the customer has left the park. From day 30 onward, the workload on the storage system doubles; it needs the capacity to ingest 100K new photos and delete 100K expired ones each day.
Workarounds for delete performance, known as “lazy deletes,” may deprioritize delete workload – but if the system can’t delete data at least as fast as new data is ingested, you will need to add storage to hold expired data. In scale-out systems, you may need to add nodes to handle deletes.
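A quick back-of-the-envelope check, using the theme-park numbers above and an assumed delete throughput, shows how fast a deferred-delete backlog accumulates when deletes cannot keep pace with ingest.

```python
# Back-of-the-envelope: how an expired-but-undeleted backlog grows when
# delete throughput lags ingest. The delete rate is an assumed figure.
INGEST_PER_DAY = 100_000   # new photos per day, from the theme-park example above
DELETE_PER_DAY = 60_000    # assumed effective delete throughput under "lazy deletes"

for day in (30, 60, 90):                  # per the example, photos start expiring on day 30
    days_of_expiry = day - 30 + 1
    backlog = max(INGEST_PER_DAY - DELETE_PER_DAY, 0) * days_of_expiry
    print(f"day {day}: ~{backlog:,} expired photos still consuming storage and backups")
```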
A better approach is to tier cold data out of the primary file system and then confine and delete it, mitigating the issue of unwanted load and performance impact on the active filesystem.
Put the data management plan into action
Once the policy has been determined for each dataset, you will need a plan for execution. An independent data management platform provides a unified approach covering all data sources and storage technologies. This can deliver better visibility and reporting on enterprise datasets while also automating data management actions. Collaboration between IT and line-of-business (LOB) teams is an integral part of execution, leading to less friction because LOB teams feel they have a say in data management. Department heads are often surprised to find that 70% of their data is infrequently accessed.
Given the current trajectory of data growth worldwide — data is projected to nearly double from 97 ZB in 2022 to 181 ZB in 2025 — enterprises have little choice but to revisit data deletion policies and find a way to delete more data than they have in the past.
Without the right tools and collaboration, this can turn into a political battlefield. Yet by making data deletion another well-planned tactic in the overall data management strategy, IT will have a more manageable data environment that delivers better user experiences and value for the money spent on storage, backups and data protection.
Kumar Goswami is CEO and cofounder of Komprise.
"
|
3416 | 2022 | "3 things that need to happen for Web3 to (really) take off | VentureBeat" | "https://venturebeat.com/datadecisionmakers/3-things-that-need-to-happen-for-web3-to-really-take-off" |
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community 3 things that need to happen for Web3 to (really) take off Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Web3 is the next generation of the internet that will redefine our everyday digital experiences. Leveraging cryptography and distributed-ledger technology, Web3 is laying the framework for a user-owned and controlled internet. A tsunami of Web3 projects has emerged, unlocking new opportunities for various industries like financial services, gaming, esports, media, entertainment, retail, and more.
The Web3 ecosystem is currently undergoing significant growth in venture capital funding. There is an ever-expanding list of Web3 startups, be it DeFi protocols, NFTs, decentralized autonomous organizations (DAOs), play-to-earn (P2E) games, or data storage and social media services.
According to a report by DappRadar, venture capital funds and investors poured more than $2.5 billion into blockchain gaming and related infrastructure during the first quarter of 2022 alone. At that pace, 2022 would far outstrip the $4 billion invested in all of 2021 and the $80 million in 2020. And this is just one corner of the expansive Web3 ecosystem.
Another report, published on GitHub , suggests that there are more than 18,000 active developers in the Web3 ecosystem who commit their code to open-source blockchain projects at least once every month. The report further clarifies that the real number is likely higher as it doesn’t consider the development work done on proprietary Web3 projects.
By all metrics, the growth of Web3 has been unprecedented. But it still has a long way to go before entering the mainstream-adoption phase. Although investor and user interest in Web3 products and services is increasing, several factors need to be addressed to accelerate the ongoing transition.
For Web3 to truly thrive, there are three critical areas that Web3 investors, developers and users need to address.
1. Users need to shift their mindset to a "user-owned" model
In the current Web2, "as-a-service" iteration of the internet, users essentially don't have a say in the future direction of the products or services they use. In most cases, users and owners of a platform or service remain separate until that platform or service lists on a public stock market, which makes ownership somewhat more accessible to users.
Sure, shareholders are invited to vote on specific initiatives, but ordinary investors are far from being the driving force of corporate change. Even after purchasing shares, the amount of decision-making power granted to smaller shareholders via ownership is relatively minimal, preventing them from taking a seat at the table with institutional investors or funds that have more power to influence corporate decisions.
The Web3 model, on the contrary, offers true ownership. Tokens enable early and decentralized ownership of the platform or service users enjoy. Existing users who previously compromised on near-zero ownership in private companies must become acquainted with the responsibilities of ownership and governance. They need to realize the power of this “ownership” and the extent to which they can contribute and influence the development direction of a product or service.
By investing at an early stage, even an average individual can become part of the project’s governing body, thereby driving the product roadmap in conjunction with the community. The decision-making process becomes transparent, inclusive and fair — attributes that don’t ordinarily exist in the Web2 ecosystem.
2. Investors need to adopt a "community-driven, collaborative and participatory" mentality
In the Web2 paradigm, investors vie for a percentage of control and board seats in order to ensure value capture and governance oversight.
However, this approach is less effective in Web3. Decentralized ownership is a key founding principle of Web3, and network effects are best accelerated through decentralized ownership among community members, who can play multiple roles (user of the service, investor, supplier, business partner) within the ecosystem.
3. Projects need to find a sustainable way to attract users
Projects usually generate immense hype within a short period of time through token incentives. There is no doubt that such campaigns quickly attract users and liquidity providers, which ramps up the key metrics that everyone evaluates.
However, this practice has its drawbacks. First, it tends to attract mercenary capital and token hunters who have no appreciation of, or loyalty to, the platform's purpose and long-term vision. Second, key metrics artificially pumped by short-term incentives tend to obscure an accurate evaluation of product-market fit. Third, overspending the token reserve is equivalent to wasting marketing budget on things that don't truly matter in the long run, leaving projects with much less ammunition in their war chests down the road.
Instead, every project should design its tokenomics thoughtfully. It is a good idea to start allocating tokens only after a project has found the right audience, one that shares its interests and goals.
Conclusion We are still in the first inning of Web3. Although the phrase “We’re still early” has been overused, it is not a trope. It sounds cliche because it’s true — we are still very early. I hope that I have provided some food for thought for those of us who are hard at work building the next generation of the internet.
Emma Cui is CEO and founding partner of LongHash Ventures.
"
|
3417 | 2022 | "Meeting the challenge of migration, scale and SLAs with fully managed databases | VentureBeat" | "https://venturebeat.com/data-infrastructure/meeting-the-challenge-of-migration-scale-and-slas-with-fully-managed-databases" |
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Meeting the challenge of migration, scale and SLAs with fully managed databases Share on Facebook Share on X Share on LinkedIn Presented by MongoDB “Identity is the new security perimeter,” says Shiv Ramji, chief product officer at Auth0. “Think about how many applications you logged into just this morning. All of those need a seamless, highly secure login experience.” Ramji spoke about how mission-critical user identity and authentication has become during his fireside chat at at VB Transform 2022, “How to re-engineer global platforms for our multi-cloud reality.” This issue today is that too many companies rely on custom integrations to implement secure login, single sign-on (SSO) and other identity needs across each of their touchpoints. From there, issues like inconsistencies in the user experience, which can create friction or even barriers for access, can creep in. And as customer identity ( CIAM ) technology has evolved, many also have struggled to easily add multi-factor authentication and passwordless or social login. And all those challenges together mean everything from lower conversion rates to security issues and increased developer costs.
Auth0, which was recently acquired by Okta, was built to meet these challenges for its customers. The platform, which currently powers login experiences for consumer and SaaS applications, was built from the start on the MongoDB developer data platform. Okta now serves more than 15,800 customers globally.
“As the company started scaling, we had to make sure that our login service can scale to billions of logins per month — we have customers with millions of consumers who are constantly logging in to access services,” Ramji says. “We have to be able to support that level of scale.” The database is obviously not the only part of the stack. But it is the foundation, making it possible to more easily build the application and the database that powers it behind the scenes, end to end, with security best practices built in. Here’s a look at Auth0’s database journey.
Choosing a database layer
Choosing the right database provider has been key to the company's success from the start. The database layer sits in a critical part of the Auth0 stack, as a piece of the platform's authentication and authorization pipeline. Customers can be deployed either in a shared, multi-tenant environment or in dedicated environments, depending on their needs and what makes sense for their architecture. The service initially ran primarily on AWS, but the company has also embraced Azure, and Ramji expects it will continue to expand.
“When we thought about the database technology that we were using or picking, we wanted to make sure we could use the same technology regardless of the cloud provider,” he says. “In the future, as customers demand support for other clouds, we should be able to support that with technology that works.” The other areas in which their choice of database was critical were availability and geo-failover capabilities — especially given the SLA commitments they’ve made to their customers. And the last piece is scale.
“We wanted the database technology to be able to scale as our customers scale or as we scale. It was critical for us to work with a database technology that’s going to be able to scale,” he says. “And like any other SaaS platform, we’re always looking to have operational efficiencies, removing any undifferentiated heavy lifting that we’re doing, and working with vendors who can take that on for us.” From self-managed to automation Auth0 began with a self-managed solution, with MongoDB instances — and that became a bottleneck. Initially, when scaling with different geographies and customers, a big need was the ability to launch new environments. But the time for environment creation kept creeping up, until it was taking three or four months in some instances.
When Ramji joined Auth0, part of his goal was to methodically bring that down from three months to three weeks. He’s surpassed that goal. Recently, between replatforming their private cloud offering onto Kubernetes and adopting MongoDB Atlas, a fully managed database service, that time is now down to two hours.
“Just think about the orders of magnitude impact that we can have in terms of the ability to spin up new environments and meet our customers’ needs,” he said.
The operational burden on the engineering teams has lifted significantly, and now they're able to meet their commitments to a recovery point objective (RPO) target of less than one minute and a recovery time objective (RTO) target of less than 15 minutes. Atlas has also helped Auth0 embrace migrations as a core capability for its customers, who are constantly right-sizing their environments, launching in new geos or markets, and needing to spin up new environments and migrate customers over quickly.
“Some of our customers in the dedicated environment could have 30,000 tenants,” he said, “We want to be able to seamlessly move them over into a new environment. That was why we went down this route of moving to a managed service, so that we could serve our customers the way we want to.” Meeting the challenges of migrations at scale In large-scale migrations, with thousands of customers, who then have thousands of tenants, and then millions of consumers who are logging in, production scenarios are always a very delicate balance, Ramji said.
The first challenge was to right-size its large collections in its self-managed and self-hosted environments, because some of those services were storing far too much unnecessary data. From there, they ironed out the kinks in a staging environment before moving to the production environment. It took a tremendous amount of testing, and initially launching in smaller or lower traffic regions to lower the impact of any quirks. Eventually that progressive rollout strategy helped them grow skilled enough to make rollouts invisible to customers.
The MongoDB team helped Auth0 manage the process of replatforming and rolling out seamlessly, Ramji said, from developing a strategy and testing support to feedback on challenges and offering internal or external resources.
Best practices and lessons learned
Whether you're beginning database migrations or taking on a big platform re-architecture or modernization, there are some key things to keep in mind before you start. Lesson number one for Ramji was reframing the objective of the project to ensure executive buy-in. Rather than positioning it as a way to pay down tech debt that was slowing the company down, they looked at it as a way to unlock future value.
“We really made sure that the positioning of this was focused on customer value, as opposed to internal hurdles,” he explained. “In our case, future value means we can deploy to a new cloud, Azure. We can deploy features faster because we have parity across both deployment types.” And secondly, when you’re taking on a big database migration, cut the problem down into smaller bites. For instance, Auth0 reduced the size of their collections before starting the migration, and made sure they had tested it in different environments before starting the production migration.
“And where we didn’t have the answers, we partnered with teams like MongoDB to ensure that if there were things we didn’t know, we were able to get the right contextual help that was required to do that,” Ramji said.
Learn more about MongoDB Atlas, MongoDB's multi-cloud developer data platform, and try it for free.
"
|
3418 | 2022 | "Hazelcast launches serverless offering to accelerate real-time applications | VentureBeat" | "https://venturebeat.com/data-infrastructure/hazelcast-launches-serverless-offering-to-accelerate-real-time-applications" |
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Hazelcast launches serverless offering to accelerate real-time applications Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
California-based Hazelcast, a company that provides a platform for acting on streaming data in real time, has announced a new serverless solution in its cloud-managed service portfolio. The offering gives enterprises an easier way to access the company's platform to build real-time applications.
Serverless technologies have been on the rise across sectors. According to Cisco’s 2022 global hybrid cloud trends report, 40% of enterprises that are using cloud-native technologies have already switched to serverless. Now, Hazelcast is building on this trend with its new Hazelcast Viridian serverless platform.
Hazelcast Viridian serverless
The product enables self-service provisioning, according to the company. Users don't have to worry about setting up the underlying hardware or handling operational complexities, such as resource planning, to leverage Hazelcast's capabilities. They just have to sign up and define a few parameters to have a functional cluster in a matter of minutes. It grows and shrinks according to the workload, giving users a horizontally scalable real-time data platform at their disposal.
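Once a cluster exists, the client-side pattern stays small. Below is a rough sketch using the Hazelcast Python client; the cluster name and discovery token are placeholders taken from the Viridian console, and the TLS configuration a real serverless connection requires is left out for brevity.

```python
# Rough sketch: connecting to a Hazelcast cluster and using a distributed map.
# cluster_name and cloud_discovery_token are placeholders you would take from
# the Viridian console; the TLS settings a real serverless connection needs
# are omitted here for brevity.
import hazelcast

client = hazelcast.HazelcastClient(
    cluster_name="example-cluster",
    cloud_discovery_token="EXAMPLE_DISCOVERY_TOKEN",
)

# A distributed map acts as a simple, horizontally scaled key-value store.
prices = client.get_map("instrument-prices").blocking()
prices.put("AAPL", 195.32)
print(prices.get("AAPL"))

client.shutdown()
```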
Hazelcast's platform offers a high-speed data store, distributed stream processing capabilities and real-time data management for discovering and acting on patterns, trends and anomalies. These features, combined with the benefits of serverless, can help organizations accelerate web and mobile applications. They can gain a 360-degree customer view, track assets in real time, prevent fraudulent transactions, deliver personalized recommendations, create real-time promotional offers and more.
Since the offering works on a pay-as-you-consume model, enterprises can also run cost-effective testing for various applications.
“Viridian serverless is the next step in providing a truly real-time cloud, all while making it even easier to develop, configure and deploy innovative applications,” Manish Devgan, chief product officer of Hazelcast, said. “This is the most seamless way to build and deploy cloud-native, real-time applications that will drive the next generation of competitive advantage.” Availability Hazelcast says that the serverless offering is available in public beta on AWS and will soon debut on the Google Cloud Platform. The company did not share the exact timeline of GCP availability, but as part of the launch, it is providing a free-forever tier that includes a limited-time offer of up to 2 Gibibytes (GiB) of data storage.
Other players helping enterprises act on data in real time include StarRocks, DeltaStream, Confluent's ksqlDB, Azure Stream Analytics and Google Cloud Dataflow.
"
|
3419 | 2022 | "How Spike increases productivity with its conversational email platform | VentureBeat" | "https://venturebeat.com/business/how-spike-increases-productivity-with-conversational-email-platform" |
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How Spike increases productivity with its conversational email platform Share on Facebook Share on X Share on LinkedIn While digital transformation is accelerating rapidly, enterprises are still facing challenges like siloed communication, which often makes collaborative team efforts nearly impossible. A 2019 report by Forrester revealed that while marketing and business intelligence teams often communicate, the two groups aren’t doing so effectively because they are both often siloed. That report was three years ago, but not much has changed since then. Today, many organizations are still swimming in vast data lakes that often have silos. This can often lead down an ever-complex path to improving organizational productivity, bettering customer experience and getting incredible returns on their investments.
As the quest toward a truly digitally transformed enterprise ecosystem continues, experts note that organizations who want to drive a real digital transformation agenda must stop working in silos.
To keep up with ever-evolving business demands, companies keep adding tools to address their needs until they can no longer keep track of the sprawl. This is what Israel-based productivity startup Spike, which claims to offer the "first collaborative email platform that helps teams of all sizes connect, create and collaborate to accomplish more," wants to change.
Cofounded by Dvir Ben Aroya (CEO) and Erez Pilosof (CTO) in 2014, Spike offers a conversational email platform that empowers teams to interact and communicate more efficiently with its prioritization and organizational features.
Aroya told VentureBeat in an interview that Spike’s multi-platform technology was built on top of an email protocol that allows it to provide a complete messaging experience for individuals’ and teams’ business needs all in one platform.
Whether for internal or external conversations, Aroya said, email is still the most popular form of communication in the world, but the problem is that organizations are often spread across multiple tools and that impacts communication and productivity negatively. He claims that Spike’s offering helps businesses to save money, all while unifying communications into one place. It allows users to do anything right from their inboxes — including communicate, organize, plan, share notes and projects, video conference and monitor schedules.
Unifying communications is a top priority
According to Aroya, one of the most compelling trends Spike has seen across the enterprise landscape is that more businesses and individuals are seeking alternatives that will unify their communication solutions. In addition, he said, people are looking for a modern, natural, real-time communication experience.
For example, Aroya noted that what makes the company stand apart is its prioritized organization. Spike's priority inbox removes distractions, enabling teams to focus and be more productive. Prioritized messages are displayed first, while less essential communications are relegated to the side to reduce clutter. With the platform's "super search" function, users can save time by not having to comb through threads to get the information they need. To see every file a user has ever sent someone, they simply need to click on the contact, Aroya noted, adding that Spike's file management helps users view files without downloading them, which saves time.
Aroya further said Spike eliminates the idea of threads altogether and groups emails by contact to make it simpler to find what users are searching for, showing chats as one continuous inline conversation between "conversation partners." The platform's features work with a Spike user's contacts even if those contacts are not using Spike themselves but, say, Gmail: the Spike user can still view and quickly manage files in a conversation with someone outside their organization.
Beyond an email platform
Spike's competitors include Google Workspace and Microsoft 365 in the email client arena, as well as Slack, Discord and others in the messaging world. Still, Pilosof added that Spike is more than an email platform.
According to Pilosof, "Spike is leading the market by providing both email client experience and chat capabilities in one platform and many additional built-in tools like video meetings and calls, collaborative notes, calendar and more." For upcoming events, unfinished projects or various social network channels, Pilosof said, users can also build a group chat in Spike for more effective communication. He added that Spike's unique features are why many Gen Z users are flocking to the platform, making it popular across social networking sites.
Pilosof further said Spike also offers various customizability options, so clients can use the platform however it best suits them, adding that enterprise businesses seeking a more flexible communication channel have started customizing the platform. Brands that trust Spike include Philips, Zoom, Zillow, Sephora, Shopify, Fiverr and more. The platform offers a free version, a pro plan ($5 per user/month billed annually), a business plan ($10 per user/month billed annually) and a custom option for larger enterprises.
Aroya said Spike envisions a world in which people can conduct all their businesses and communications through a single app. Spike has raised $30 million in total funding to date, with its last funding round of $15 million, led by Insight Partners with the participation of Mozilla Corp, coming in Q3 of 2021. The company currently has a headcount of 35 and plans to double that number in the next year to support the scale and demand for its services.
"
|
3420 | 2022 | "Adding AI to knowledge management is a $4.5M game-changer (VB On-Demand) | VentureBeat" | "https://venturebeat.com/business/adding-ai-to-knowledge-management-is-a-4-5m-game-changer-vb-on-demand" |
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Spotlight Adding AI to knowledge management is a $4.5M game-changer (VB On-Demand) Share on Facebook Share on X Share on LinkedIn Presented by Pryon Learn how Fortune 500 companies are applying AI-powered knowledge management and realizing a 60% reduction in overall IT ticket volume, thousands of hours of time-saving, $4.5 million annual savings in IT support costs and more, in this VB On-Demand event! Watch on demand here Enterprise knowledge management is critical: everything that can move your business forward already exists in many forms in your organization, says Igor Jablokov, CEO & founder, Pryon.
“If you look at the enterprises that have been able to rise in value, like Amazon and Netflix, they’re the ones that can most effectively deal with the signals coming from customers, partners, employees and investors,” Jablokov says. “Getting everyone the information they need to be successful at their tasks usually turns into economic outcomes that they can measure.” There’s also an increasing number of concrete reasons that enterprise knowledge management has become a top CEO priority — and the competition for talent is number one on the list, says Chris Mahl, president & CRO, at Pryon.
“People that are the best of the best want to be places where at their fingertips is the wisdom of the company, the leverage of the company,” he explains. “A big part of what attracts talent is not just the DNA and the brand and the aperture of the company, but the experience of being enriched. That’s all about the effective distribution of knowledge.” But right now, knowledge silos are common, creating inefficiencies and hamstringing decision-making, customer service and employee satisfaction. Workers waste 1.8 hours every day, or 9.3 hours per week , on average, in the search for information they need to do their jobs — instead of being able to actually do their jobs. The frustration curve for employees then becomes incredibly high. They’re quitting in favor of the employer that provides a better professional experience, serving up the information and intelligence they need to prosper in their job.
From the talent crisis to the global recession, it's especially urgent now to do more with less, be more intelligent, and be more digitally fluid. So why are so many enterprises finding it difficult to deliver on this need for information, and get content from across the enterprise into the hands and systems where it's needed most?
The challenges of enterprise knowledge management
There is so much information scattered across every enterprise, stored in as many different repositories as there are types of information, to support every internal and customer need, and so much of it is buried deep.
“The information users need can be incredibly arcane, and the question is, how do you get the right pieces of information together?” Mahl says. “In some cases, the questions we see are so bizarrely unique, but it turns out that with a knowledge system, you can get right to the original answer, even if it’s living in a nine-year-old visit dossier with some photographs and some notes. Can you imagine that gold?” Enterprise knowledge management isn’t a new concept, and smart companies have been turning to tools like chatbots and intelligent virtual assistants to manage some of the load of scaling and automating knowledge delivery. But implementing these technologies requires a lot of heavy lifting. About 90% of the data being generated today is unstructured , and has to be recreated or refactored for these tools to actual leverage the data. It’s a great deal of manual effort, unsurprisingly error-prone, and regularly falling out of date, leaving stakeholders without the information they need.
“Executives are making attempts at knowledge management but not getting the return they expect, or that they need,” Mahl explains. “They feel like they’re pushing piles of information to try to get breakthrough, but can’t get at the access and immediacy of correct response that’s required for ROI.” So, how do you get content from those repositories into the hands (or the systems) where the information is needed most? It takes the unification, automation and digitization of enterprise knowledge in a knowledge operating system.
Leveling up enterprise knowledge management A leap beyond traditional enterprise knowledge tools, a knowledge operating system can bring information from around the enterprise into a single, fully accessible AI-enabled environment for easy consumption across the whole organization.
Most companies have an array of content and knowledge platforms, different for each line of business. For example, an HR policy and benefit intranet, product knowledge bases, cybersecurity policy databases, regulatory information — and all of them living in separate environments.
“A knowledge platform can ingest the content, create the model, and then organize the information as an asset that everyone, from the newest employee down to the deepest research scientists and innovators, can access in the way that makes the most sense for their department,” Mahl says. “And does that securely and safely, only to the people who should see it.” Once one group implements a knowledge platform, and starts sharing their department-specific knowledge and insight, other groups want to get in on it, sharing all their own information and insight, from policies, procedures, field service notes, to RMAs, legal information, HR information, cybersecurity knowledge and so on, enriching the whole enterprise.
“I think the most exciting thing is the fact that many sources can be combined into this one representation of knowledge,” Jablokov says. “And the fact that they can actually start sharing that interactive knowledge with other business units as well. Once people start seeing it, it becomes something that they all want in order to show their gifts to their peer divisions.” How Pryon is changing the game Pryon is the first full-stack AI platform, offering point-and-click creation when collecting enterprise content, which is then processed with AI techniques and optimized for question and answering. Technologies like AI, NLU and Pryon’s vector database model eliminate silos and static buckets of data. Instead, the enterprise’s full store of information can be integrated into the platform, and transformed into digitally fluid neural networks organized around users, easily surfaced by fully formed semantic queries.
These are living, breathing collections of knowledge, where knowledge is constantly refined and updated as users interact with the system, and the effectiveness of the content keeps growing.
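Pryon does not publish the internals of its retrieval stack, but the general pattern it describes, turning content into vectors and surfacing answers through semantic queries, can be sketched in a few lines of Python. The example below is a minimal, hypothetical illustration (requiring scikit-learn) that uses TF-IDF vectors as a stand-in for the learned embeddings a production system would use; the documents and query are invented.

# Minimal sketch of vector-based retrieval over enterprise content.
# TF-IDF stands in for learned embeddings; documents and query are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Employees accrue 1.5 vacation days per month of service.",
    "RMA requests must include the original order number and a defect photo.",
    "VPN access requires multi-factor authentication on all devices.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)  # one vector per document

def answer(query: str, top_k: int = 1):
    """Return the top_k documents most similar to the query."""
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors).ravel()
    ranked = scores.argsort()[::-1][:top_k]
    return [(documents[i], float(scores[i])) for i in ranked]

print(answer("How many vacation days do I earn each month?"))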
“We get such aha moments, because these executives say, I have 5,000 articles and I’ve been publishing for years, but this is the first time I’ve ever seen them as a whole,” Mahl says. “And I can actually see the value where they work, where they conflict, where they debate, and where there are holes. Your knowledge mass all of a sudden is the best it can be, which means when new people come, they get that step ahead.” To learn more about how knowledge management systems are transforming the way enterprises access and leverage information, and the ROI they’re achieving, don’t miss this VB Live event.
Start streaming now.
Agenda Increase productivity 2x by leveraging your existing investments in content assets, knowledge bases and human capital – without doubling budgets Exceed the performance of your current customer support chatbots with a next-gen strategy Drive repeatable, simultaneous digital transformation across multiple business units Invest in an AI platform that pays for itself in weeks, not years Presenters Igor Jablokov, CEO & Founder, Pryon Chris Mahl, President & CRO, Pryon Art Cole, Moderator, VentureBeat
"
|
3,421 | 2,022 |
"Responsible AI must be a priority — now | VentureBeat"
|
"https://venturebeat.com/ai/responsible-ai-must-be-a-priority-now"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Responsible AI must be a priority — now Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Responsible artificial intelligence (AI) must be embedded into a company’s DNA.
“Why is bias in AI something that we all need to think about today? It’s because AI is fueling everything we do today,” Miriam Vogel , president and CEO of EqualAI , told a live stream audience during this week’s Transform 2022 event.
Vogel discussed the topics of AI bias and responsible AI in depth in a fireside chat led by Victoria Espinel of the trade group The Software Alliance.
Vogel has extensive experience in technology and policy, including at the White House, the U.S. Department of Justice (DOJ) and at the nonprofit EqualAI , which is dedicated to reducing unconscious bias in AI development and use. She also serves as chair of the recently launched National AI Advisory Committee (NAIAC), mandated by Congress to advise the President and the White House on AI policy.
As she noted, AI is becoming ever more significant to our daily lives — and greatly improving them — but at the same time, we have to understand the many inherent risks of AI. Everyone — builders, creators and users alike — must make AI “our partner,” as well as efficient, effective and trustworthy.
“You can’t build trust with your app if you’re not sure that it’s safe for you, that it’s built for you,” said Vogel.
Now is the time We must address the issue of responsible AI now, said Vogel, as we are still establishing “the rules of the road.” What constitutes AI remains a sort of “gray area.” And if it isn’t addressed? The consequences could be dire. People may not be given the right healthcare or employment opportunities as the result of AI bias , and “litigation will come, regulation will come,” warned Vogel.
When that happens, “We can’t unpack the AI systems that we’ve become so reliant on, and that have become intertwined,” she said. “Right now, today, is the time for us to be very mindful of what we’re building and deploying, making sure that we are assessing the risks, making sure that we are reducing those risks.” Good ‘AI hygiene’ Companies must address responsible AI now by establishing strong governance practices and policies and establishing a safe, collaborative, visible culture. This has to be “put through the levers” and handled mindfully and intentionally, said Vogel.
For example, in hiring, companies can begin simply by asking whether platforms have been tested for discrimination.
“Just that basic question is so extremely powerful,” said Vogel.
An organization’s HR team must be supported by AI that is inclusive and that doesn’t discount the best candidates from employment or advancement.
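One concrete way to act on that basic question, whether a hiring platform has been tested for discrimination, is the four-fifths rule used in U.S. adverse-impact analysis: the selection rate for any group should be at least 80% of the rate for the most-selected group. The sketch below is a generic illustration with invented counts, not a description of any particular vendor's audit.

# Adverse-impact check (four-fifths rule) on hiring-tool outcomes.
# The group names and counts below are invented for illustration.
selected = {"group_a": 48, "group_b": 30}   # candidates advanced by the tool
applied  = {"group_a": 100, "group_b": 95}  # candidates screened

rates = {g: selected[g] / applied[g] for g in applied}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "review for adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {flag}")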
It is a matter of “good AI hygiene,” said Vogel, and it starts with the C-suite.
“Why the C-suite? Because at the end of the day, if you don’t have buy-in at the highest levels, you can’t get the governance framework in place, you can’t get investment in the governance framework, and you can’t get buy-in to ensure that you’re doing it in the right way,” said Vogel.
Also, bias detection is an ongoing process: Once a framework has been established, there has to be a long-term process in place to continuously assess whether bias is impeding systems.
“Bias can embed at each human touchpoint,” from data collection, to testing, to design, to development and deployment, said Vogel.
Responsible AI: A human-level problem Vogel pointed out that the conversation of AI bias and AI responsibility was initially limited to programmers — but Vogel feels it is “unfair.” “We can’t expect them to solve the problems of humanity by themselves,” she said.
It’s human nature: People often imagine only as broadly as their experience or creativity allows. So, the more voices that can be brought in, the better, to determine best practices and ensure that the age-old issue of bias doesn’t infiltrate AI.
This is already underway, with governments around the world crafting regulatory frameworks, said Vogel. The EU is creating a GDPR-like regulation for AI, for instance. Additionally, in the U.S., the nation’s Equal Employment Opportunity Commission and the DOJ recently came out with an “unprecedented” joint statement on reducing discrimination when it comes to disabilities — something AI and its algorithms could make worse if not watched. The National Institute of Standards and Technology was also congressionally mandated to create a risk management framework for AI.
“We can expect a lot out of the U.S. in terms of AI regulation,” said Vogel.
This includes the recently formed committee that she now chairs.
“We are going to have an impact,” she said.
Don’t miss the full conversation from the Transform 2022 event.
"
|
3,422 | 2,022 |
"How Peloton is using computer vision to strengthen workouts | VentureBeat"
|
"https://venturebeat.com/ai/how-peloton-is-using-computer-vision-to-strengthen-workouts"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How Peloton is using computer vision to strengthen workouts Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
As you do push-ups, squats or ab work, heft dumbbells, jump or stretch, a device on your TV follows you throughout your workout.
You are tracked on your form, your completion of an exercise (or lack thereof); you receive recommendations on what cardio, bodyweight, strength training or yoga workout to do next; and you can work toward achievement badges.
This is the next-level home fitness experience enabled by Peloton Guide, a camera-based, TV-mounted training device and system powered by computer vision, artificial intelligence (AI), advanced algorithms and synthetic data.
Sanjay Nichani, leader of Peloton ’s computer vision group, discussed the technology’s development — and ongoing enhancement — in a livestream this week at Transform 2022.
AI-driven motivation Peloton Guide’s computer vision capability tracks members and recognizes their activity, giving them credit for completed movements, providing recommendations and real-time feedback. A “self mode” mechanism also allows users to pan and zoom their device to watch themselves on-screen and ensure they are exhibiting proper form.
Nichani underscored the power of metric-driven accountability when it comes to fitness, saying that “insight and progress are very motivating.” Getting to the final Peloton Guide commercial product was an “iterative process,” he said. The initial goal of AI is to “bootstrap quickly” by sourcing small amounts of custom data and combining this with open-source data.
Once a model is developed and deployed, detailed analysis, evaluation and telemetry are applied to improve the system continuously and make “focused enhancements,” said Nichani.
The machine learning (ML) flywheel “all starts with data,” he said. Peloton developers used real data complemented by “a heavy dose of synthetic data ,” crafting datasets using nomenclature specific to exercises and poses combined with appropriate reference materials.
Development teams also applied pose estimation and matching, accuracy recognition models and optical flow, what Nichani called a “classic computer vision technique.” Diverse attributes affecting computer vision One of the challenges of computer vision, Nichani said, is the “wide variety of attributes that have to be taken into account.” This includes the following: Environmental attributes : background (walls, flooring, furniture, windows); lighting, shadows, reflections; other people or animals in the field of view; equipment being used.
Member attributes : gender, skin tone, body type, fitness level and clothing.
Geometric attributes : Camera-user placement; camera mounting height and tilt; member orientation and distance from the camera.
Peloton developers performed extensive field-testing trials to allow for edge cases and incorporated a capability that “nudges” users if the camera can’t make them out due to any number of factors, said Nichani.
The bias challenge Fairness and inclusivity are both paramount to the process of developing AI models, said Nichani.
The first step to mitigating bias in models is ensuring that data is diverse and has enough values across various attributes for training and testing, he said.
Still, he noted, “a diverse dataset alone does not ensure unbiased systems. Bias tends to creep in, in deep learning models, even when the data is unbiased.” Through Peloton’s process, all sourced data is tagged with attributes. This allows models to measure performance over “different slices of attributes,” ensuring that no bias is observed in models before they are released into production, explained Nichani.
If bias is uncovered, it’s addressed — and ideally corrected — through the flywheel process and deep dive analysis. Nichani said that Peloton developers observe an “equality of odds” fairness metric.
That is, “for any particular label and attribute, a classifier predicts that label equally for all values of that attribute.” For example, in predicting whether a member is doing a crossbody curl, a squat, or a dumbbell swing, models were built to factor in attributes of body type (“underweight,” “average,” “overweight”) and skin tone based on the Fitzpatrick classification — which, although widely accepted for classifying skin tone, notably still has a few limitations. Still, any challenges are far outweighed by significant opportunities, Nichani said. AI has many implications in the home fitness realm — from personalization, to accountability, to convenience (voice-enabled commands, for example), to guidance, to overall engagement.
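Peloton has not published its evaluation code, but the equality-of-odds check Nichani describes can be illustrated generically: for a given exercise label, compare the classifier's true-positive rate across slices of an attribute such as body type, and flag large gaps. All labels, predictions and attribute values below are invented for the example.

# Generic equality-of-odds check: true-positive rate per attribute slice.
# Records are (true_label, predicted_label, attribute_value); all invented.
records = [
    ("squat", "squat", "underweight"),
    ("squat", "squat", "average"),
    ("squat", "crossbody_curl", "overweight"),
    ("squat", "squat", "overweight"),
    ("squat", "squat", "underweight"),
    ("squat", "dumbbell_swing", "average"),
]

def true_positive_rate(label: str, attribute_value: str) -> float:
    """TPR for one label, restricted to samples with the given attribute value."""
    relevant = [r for r in records if r[0] == label and r[2] == attribute_value]
    if not relevant:
        return float("nan")
    hits = sum(1 for true, pred, _ in relevant if pred == true)
    return hits / len(relevant)

for value in ("underweight", "average", "overweight"):
    print(f"squat TPR for body type '{value}': {true_positive_rate('squat', value):.2f}")
# Large gaps between slices would indicate the model violates equality of odds.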
Providing insights and metrics helps improve a user’s performance “and really push them to do more,” said Nichani. Peloton aims to provide personalized gaming experiences “so that you’re not looking at the clock when you’re exercising.” Watch the full-length conversation from Transform 2022.
"
|
3,423 | 2,022 |
"How edge data is training AI for accurate, real-time response | VentureBeat"
|
"https://venturebeat.com/ai/how-edge-data-is-training-ai-for-accurate-real-time-response"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How edge data is training AI for accurate, real-time response Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Autonomous driving is seen as the future of mobility, thanks to companies like Tesla that have developed AI-driven advanced driving assistance systems (ADAS) to help users navigate from one point to another under certain conditions.
The progress has been astonishing to many, but the fact remains: We are still nowhere near truly autonomous vehicles. In order to achieve true autonomy, self-driving vehicles should be able to perform better than human drivers in all conditions, be it a densely populated urban area, a village or an unexpected scenario along the way.
“Much of the time, autonomous driving is actually kind of easy. It’s sometimes as simple as driving on an empty road or following a lead vehicle. However, since we’re dealing with the real world, there’s a wide variety of ‘edge cases’ that can occur,” Kai Wang, the director of prediction at Amazon-owned mobility company Zoox , said at VentureBeat’s Transform 2022 conference.
These edge cases create trouble for algorithms. Imagine a group of people stepping onto the street from a blind corner or a pile of rubble lying in the way.
Training effort from Zoox Humans are pretty good at recognizing and responding to almost all kinds of edge cases, but machines find the task difficult as there are so many possibilities of what can happen on the road. To solve this, Zoox, which is building fully autonomous driving software and a purpose-built autonomous robotaxi, has taken a multi-layered approach.
“There’s not really a single solution that will solve all these cases. So, we try to build in different types of mitigations at our whole system level, at each layer to give us the best chance at handling these things,” Wang said.
First, as the executive explained, Zoox enables the perception of different conditions/objects by bringing in data from the sensor pods located on all four corners of its vehicle.
Each pod features multiple sensor modalities — RGB cameras, Lidar sensors, radars and thermal sensors — that complement each other. For instance, RGB cameras can sense detail in imagery but fail to measure depth, which is handled by Lidar.
“The job of our perception system is to use all these sensors together, and fuse them to produce just a single representation for all the objects around us. This gives the best chance at recognizing all the things in the world around us,” Wang said.
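Zoox's fusion stack is of course far richer than a few lines of code, but the basic idea of merging detections from complementary sensors into a single object representation can be shown as a toy late-fusion step. The detections, the association threshold and the choice to trust lidar for position and the camera for classification are all assumptions made for illustration.

# Toy late fusion: merge camera and lidar detections of the same object
# by ground-plane distance, keeping lidar's position and the higher confidence.
# All detections and the 1.0 m association threshold are invented.
import math

camera_detections = [{"x": 10.2, "y": 3.1, "label": "pedestrian", "conf": 0.82}]
lidar_detections = [{"x": 10.0, "y": 3.0, "label": "pedestrian", "conf": 0.74}]

def fuse(camera, lidar, max_dist=1.0):
    fused, used = [], set()
    for c in camera:
        for i, l in enumerate(lidar):
            if i in used:
                continue
            if math.hypot(c["x"] - l["x"], c["y"] - l["y"]) <= max_dist:
                fused.append({
                    "x": l["x"], "y": l["y"],          # trust lidar for position/depth
                    "label": c["label"],               # trust camera for classification
                    "conf": max(c["conf"], l["conf"]),
                })
                used.add(i)
                break
        else:
            fused.append(dict(c))                      # camera-only detection
    fused.extend(l for i, l in enumerate(lidar) if i not in used)  # lidar-only
    return fused

print(fuse(camera_detections, lidar_detections))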
Once the surrounding agents are recognized, the system models where they will end up in the next few seconds. This is done with data-driven deep learning algorithms that come up with a distribution of future potential trajectories. After this, it considers all the dynamic entities and their predicted trajectories, and decides what to do and how to safely navigate through the current scenario to the target destination.
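Likewise, the planner itself is proprietary, but the pattern described here, sampling future trajectories for surrounding agents and then preferring the ego plan with the most clearance, can be sketched as a toy example. Every trajectory and waypoint below is invented.

# Toy planning step: choose the candidate ego trajectory whose minimum
# distance to any predicted agent trajectory stays largest (lowest risk).
# Trajectories are lists of (x, y) waypoints at matching timesteps; all invented.
import math

predicted_agent_trajectories = [
    [(5.0, 0.0), (4.0, 0.5), (3.0, 1.0)],    # e.g. a pedestrian drifting left
    [(8.0, -2.0), (7.0, -2.0), (6.0, -2.0)],
]

candidate_ego_trajectories = {
    "keep_lane": [(0.0, 0.0), (1.5, 0.0), (3.0, 0.0)],
    "nudge_left": [(0.0, 0.0), (1.5, 0.8), (3.0, 1.6)],
}

def min_clearance(ego, agents):
    """Smallest ego-to-agent distance over all timesteps and agents."""
    return min(
        math.dist(ego[t], agent[t])
        for agent in agents
        for t in range(len(ego))
    )

best = max(candidate_ego_trajectories,
           key=lambda name: min_clearance(candidate_ego_trajectories[name],
                                          predicted_agent_trajectories))
print("chosen plan:", best)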
Teleguidance While the system is effectively modeling and handling edge cases, it could run into certain novel situations on the road. In those cases, the system stops and uses teleguidance capabilities to bring in a human expert for help (while checking for collisions and obstacles with other agents at the same time).
“We have a human operator dialed into the situation to suggest a route to get through the blockage. So far, we have received teleguidance for less than 1% of our total mission time in complex environments. And as our system gets more mature, this percentage should go down further,” Wang said.
After moving on, the data associated with the edge case goes to the company through a feedback loop, allowing it to use the scenario and its variants in simulations to make the software system more robust.
Don’t miss the full session on how edge data is training AI to be more accurate and responsive.
"
|
3,424 | 2,022 |
"How edge computing is accelerating innovation across hardware, software and service provider domains | VentureBeat"
|
"https://venturebeat.com/ai/how-edge-computing-is-accelerating-innovation-across-hardware-software-and-service-provider-domains"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How edge computing is accelerating innovation across hardware, software and service provider domains Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
An increasing number of enterprises are placing more emphasis on edge computing. According to a report from AT&T Cybersecurity , 75% of security leaders are either planning, in the process of deploying, or have fully deployed an edge use case. This is largely attributed to the technology’s capacity to conserve bandwidth, speed up response times and enable data processing with fewer restrictions. In fact, the State of the Edge study from the Linux Foundation predicts that by 2028, enterprises will be using edge computing more extensively.
During VentureBeat’s Transform 2022 virtual conference , David Shacochis, vice president of product strategy for enterprise at Lumen , moderated a panel discussion to talk about how edge computing is transforming use cases and strategies for some of the real giants of the industry, across hardware, software and service provider domains.
The discussion also featured Shacochis’s colleague Chip Swisher, who runs the internet of things (IoT) practice for Lumen; Rick Lievano, CTO for the worldwide telecommunications industry at Microsoft; and Dan O’Brien, general manager for HTC Vive.
Technology evolutionary cycles Shacochis said computing power has gone through evolutionary cycles that oscillate back and forth between centralized and distributed models. Looking across periods of technological achievement, Shacochis said steam power enabled mass production industries, while electrical distribution fueled the modern industrial economy that brought about the dawn of computing power in microprocessing. This, he said, has now led to the present day and what is being called the Fourth Industrial Revolution.
He further noted that the mainframe era was the original dawn of computing power, which then distributed out to client-server models before consolidating back toward the cloud and bringing all the business logic into more centralized postures.
“Now we’re seeing this explosion of all the different sources of data, the different ways to process that data, the different kinds of sensor actuator cycles that can really add a lot of value to customer experiences and industrial efficiency,” Shacochis said. “All these different kinds of business outcomes from the many different ways to leverage the edge. So, those industrial cycles occurring across decades, the computing cycles occurring across even smaller periods of years, have really led us to this exciting time in the industry.” The Fourth Industrial Revolution Examining the Fourth Industrial Revolution era from a hardware perspective, O’Brien said HTC started off as an original design manufacturer (ODM) company. He said HTC was making motherboards and chipsets for other companies and other products and PCs, using immersive silicon. He added that the company moved very quickly into application-specific integrated circuit (ASIC) chips and GPUs that evolved to smartphone technology.
O’Brien noted that “many people don’t realize that was the dawn of what we see today in the extended reality [XR] world, building these new types of immersive products. It actually evolved from so much of the chipsets and evolved so much from the smartphones. What’s in modern virtual reality [VR] headsets and displays is a smartphone panel that was powered by the need to have higher visual quality and fidelity inside of a smartphone.” “Now we’re seeing where we need even more processing power,” he continued, “We need even more visual quality and performance inside of VR headsets for an XR headset and an augmented reality [AR] type of solution. We’re seeing this increase in terms of the demand and the overall performance needs. The additional products require large PCs and GPUs to make this stuff all work. Now, we’re actually moving all of this into a cloud environment.” He added that there’s also now artificial intelligence (AI) and machine learning (ML) that will optimize the processes for all the virtual contents and interactions.
Additionally, Lievano said the cloud really has changed everything, and the edge is an extension of the cloud. He noted that at Microsoft, they talk quite a bit about this notion of the intelligent cloud and intelligent edge, which he believes is a way to deliver applications across the entire computing canvas to where they’re needed.
“As a developer, you like to think that you can build an app once,” Lievano said. “You want to use the latest technologies, which right now is cloud-native principles, and you want to be able to deploy it anywhere, whether it’s a public cloud off in the sky, or an edge location. So, this vision that we have of intelligent cloud and intelligent edge is largely dependent on our telco partners because at the end of the day, they provide that connectivity — the connective tissue that’s required for this vision to become a reality. But the cloud needs to connect to the edge. And without telcos like Lumen, there’s no intelligent edge.” According to Lievano, this is unlike the move from mainframe to client server, where each mainframe and cloud server had their own unique developed models that had their own governance. The cloud-native capabilities are the same, whether they’re available in Azure or in the cloud, he said.
He also noted that on the edge, you may have a subset of those cloud capabilities because of scale, but the programming model, devops model, management portals, management interfaces, and APIs are all the same. He also said the edge becomes another cloud region for a developer to deploy their applications to, and that’s a huge difference from the mainframe and client-server models.
“Again, as a developer, I’m amazed at the advances and tooling, especially in the last few years,” Lievano said. “AI, for example, has had an incredible influence not only in the applications that we create as developers, but also in the applications that we write and how we develop those applications. So, the cloud gives you limitless compute capabilities [that are] really at your fingertips. Again, scale is not an issue, but features like serverless computing, for example, enable you to take your applications to the next level. In science, you will be able to create and deploy complex applications using microservices.” Evolution of IoT From a solutions and service provider perspective, Shacochis said the cloud and some of its tools make some things easier, but the opportunities and customer expectations make things more complex. However, Swisher, speaking from his specialty around IoT, said while some say IoT is a new concept, in reality, it’s been around for more than 20 years. It’s a concept that explains the ability to take data off machines and devices and do certain operations with it, Swisher said.
“I’ve experienced the wave of what I call IoT 2.0, where you may have had, on a factory floor, a localized production line control machine that was doing processing there locally,” Swisher noted. “Then we saw the advent of moving that out to the cloud, and different stovepipe cloud providers providing centralized end-to-end solutions in that space. Now we’re really seeing the need for integration on the IoT 2.0, where we’re starting to see cross-stovepipe use cases, having data coming from multiple different IoT infrastructures and IoT paradigms and being able to bring all that data together into a single view.” Swisher added that machine learning is the next evolution of having full visibility across everything that’s going on across the city, plant, warehouse and distribution to bring data together.
He noted that IoT 2.0 “creates new challenges both from a compute standpoint and network integration and services standpoint, where there’s a need to compute even closer to those aspects because building all those things together, we really need the ability to have that happen even more in real time to be able to adjust as we need it. The concept of using compute on a premise, you know, compute in a metro edge or a near-edge capability, as well as the cloud, and being able to have all those out there to bring all of those pieces together and be able to move, compute around those different locations really has become critical.” Don’t miss the full discussion of how edge computing is transforming use cases and strategies for some of the real giants of the industry, across hardware, software and service provider domains.
"
|
3,425 | 2,022 |
"How AI is improving warehouse performance and easing supply chain disruptions | VentureBeat"
|
"https://venturebeat.com/ai/how-ai-is-improving-warehouse-performance-and-easing-supply-chain-disruptions"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How AI is improving warehouse performance and easing supply chain disruptions Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Unlocking greater performance gains in warehouses using artificial intelligence (AI) and machine learning (ML) helps make supply chains more resilient and capable of bouncing back faster from disruptions. Unfortunately, the severity and frequency of supply chain disruptions are increasing, with McKinsey finding that, on average, companies experience a disruption of one to two months in duration every 3.7 years.
Over a decade, the financial fallout of supply chain disruptions in the consumer goods sector can equal 30% of a year’s earnings before interest, taxes, depreciation and amortization (EBITDA). However, Fortune 500 companies with resilient supply chains achieved a 7% premium on their stock price and market capitalization.
Resilient supply chains are the shock absorbers that keep ecommerce, retail, grocery, and post and parcel businesses running despite the quickening pace of disruptions. Hardening supply chains to make them more resilient pays.
Closing warehouse gaps strengthens supply chains Unexpected delays and undiscovered warehouse mistakes cost the most to fix and wreak havoc across supply chains. Warehouse managers, planners and expeditors rely on decades-old processes based on Microsoft Excel spreadsheets. But, with increasing costs, pace and severity of disruptions, warehouses can’t react fast enough with these manual systems. As a result, “Operations managers are spending hours collecting data and entering it manually into Excel spreadsheets, taking valuable time away from managing and optimizing warehouse operations,” Akash Jain, Honeywell connected enterprise general manager for connected warehouse, told VentureBeat.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Warehouse accuracy and performance further slow down because decisions made on the warehouse floor that impact margins, costs and revenue trade-offs often don’t make it to the top floor. Senior executives need to know how split-second decisions on which orders to ship impact inventory carrying costs and total inventory value. Runaway inflation makes inventory valuation one of the most expensive risks to manage today.
Stress-testing supply chains often uncovers the largest and most costly gaps in warehouse performance down to the asset level. Asset performance management (APM) must be a core part of managing a warehouse, so the cost, risk and machinery used can be optimized with real-time data.
For warehouses to absorb disruptions and keep working, the managers running them need a continual stream of near real-time data from supervised ML algorithms to optimize their operations’ many constraints. “Many distribution businesses were caught completely by surprise when ecommerce demand took off at the start of the pandemic. Many were running multiple shifts to keep up with demand, with little to no time to keep machinery and warehouse assets maintained so they wouldn’t break down,” Jain told VentureBeat.
How AI is closing warehouse gaps The more fragile supply chains become, the more important it is to find where warehouse gaps are and close them. By using supervised ML algorithms and convolutional neural networks, it is possible to use the real-time data streams generated from warehouses to pinpoint where gaps are. However, identifying just how wide these gaps are, their impact on daily warehouse operations and their financial impact on a business has proven elusive.
Cloud-based enterprise performance management (EPM) platforms are taking on that challenge. They’re combining APM with site operations applications to identify how warehouse sites perform against plan, helping managers identify bottlenecks and solve them before they impact performance. Leading EPM providers rely on APIs to integrate with current and legacy warehouse management systems, differentiating themselves by functional area and vertical market. Oracle, SAP, IBM, Anaplan, OneStream Software and Honeywell Connected Warehouse offer EPM platforms today.
Of the many approaches enterprise software vendors are taking today, Honeywell’s Connected Warehouse platform strategy and use of AI and machine learning are noteworthy. It leads the EPM platform market in using advanced ML techniques and constraint modeling to identify warehouse and logistics bottlenecks.
AI and ML are designed into the foundation of Honeywell’s Forge platform and portfolio of products. The company has more than 150 AI and data science experts on staff, concentrating on the Honeywell Forge roadmap, future innovations and new patent opportunities.
All these AI and ML investments translate into continual improvement in providing real-time insights and contextual intelligence that improves warehouse and supply chain performance. The goal is to provide distribution businesses with a real-time system of record they can use to identify gaps in warehouse performance and better manage machinery and assets, said Jain.
Honeywell’s Connected Warehouse uses ML to analyze real-time data and make recommendations based on constraints while monitoring machinery to see how its performance can be optimized. The platform’s outbound-operations dashboard combines real-time updates, tracking current progress on packed and shipped cartons against the plan.
Real-time data, analyzed using analytics and ML algorithms, keeps the dashboard current. Constraint-based ML algorithms also calculate planned performance in real time and are used for tracking asset downtime. In addition, Honeywell recently introduced an APM that predicts when warehouse machinery needs preventative maintenance and updates.
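Honeywell does not disclose how Forge's APM models work, but the underlying idea of watching a machine's sensor stream and flagging readings that drift far from the recent baseline, so maintenance can be scheduled before a failure, can be illustrated with a simple rolling z-score. The vibration readings, window size and threshold below are invented.

# Simple illustration of condition-based maintenance flagging:
# raise an alert when a reading drifts beyond 3 standard deviations
# of the machine's recent baseline. Readings and window size are invented.
from statistics import mean, stdev

vibration_mm_s = [2.1, 2.0, 2.2, 2.1, 2.3, 2.2, 2.1, 2.2, 3.9, 4.4]
WINDOW = 6

for i in range(WINDOW, len(vibration_mm_s)):
    baseline = vibration_mm_s[i - WINDOW:i]
    mu, sigma = mean(baseline), stdev(baseline)
    reading = vibration_mm_s[i]
    if sigma > 0 and abs(reading - mu) > 3 * sigma:
        print(f"reading {i}: {reading} mm/s deviates from baseline "
              f"({mu:.2f} +/- {sigma:.2f}) -> schedule preventative maintenance")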
Anticipate more supply chain disruptions Stress-testing supply chains needs to start in the warehouse, where small process improvements made at scale can make a difference in keeping distribution centers and networks running efficiently. What’s been missing is a 360-degree view of warehouse performance that can identify how fast bottlenecks are growing and their financial impact. Combining AI, ML, and real-time OT and IT data, cloud-based EPM platforms are taking on this challenge.
It’s a certainty that more supply chain disruptions are on their way. Using AI and machine learning to optimize warehouse operations will help absorb those shocks. AI- and ML-based warehouse management is a necessity today for high-velocity distribution businesses, including ecommerce, retail, grocery, and post and parcel, to reduce the impact of supply chain disruptions.
"
|
3,426 | 2,022 |
"Vulnerability management: All you need to know | VentureBeat"
|
"https://venturebeat.com/security/vulnerability-management-all-you-need-to-know"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Vulnerability management: All you need to know Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Table of contents What is vulnerability management? Vulnerability management lifecycle: Key processes Top 8 best practices for vulnerability management policy in 2022 Be wiser than the attackers Vulnerability management is an important part of any cybersecurity strategy. It involves proactive assessment, prioritization and treatment, as well as a comprehensive report of vulnerabilities within IT systems. This article explains vulnerability management in reasonable detail, as well as its key processes and the best practices for 2022.
The internet is a vital worldwide resource that many organizations utilize. However, connecting to the internet can expose organizations’ networks to security risks. Cybercriminals get into networks, sneak malware into computers, steal confidential information and can shut down organizations’ IT systems.
As a result of the pandemic, there has been an increase in remote work , which has raised security risks even higher, leading any organization to be the target of a data leak or malware attack.
According to the Allianz Risk Barometer, cyberthreats will be the biggest concern for organizations globally in 2022.
“Before 2025, about 30% of critical infrastructure organizations will experience a security breach that will shut down operations in the organizations,” Gartner predicts.
This is why, for both large and small organizations, proactively detecting security issues and closing loopholes is a must. This is where vulnerability management comes in.
What is vulnerability management? Vulnerability management is an important part of cybersecurity strategy. It involves proactive assessment, prioritization and treatment, as well as a comprehensive report of vulnerabilities within IT systems.
A vulnerability is a “condition of being open to harm or attack” in any system. In this age of information technology, organizations frequently store, share and secure information. These necessary activities expose the organizations’ systems to a slew of risks, due to open communication ports, insecure application setups and exploitable holes in the system and its surroundings.
Vulnerability management identifies IT assets and compares them to a constantly updated vulnerability database to spot threats, misconfigurations and weaknesses. Vulnerability management should be done regularly to avoid cybercriminals exploiting vulnerabilities in IT systems, which could lead to service interruptions and costly data breaches.
While the term “vulnerability management” is often used interchangeably with “patch management,” they are not the same thing. Vulnerability management involves a holistic view to making informed decisions about which vulnerabilities demand urgent attention and how to patch them.
[ Related: Why edge and endpoint security matter in a zero-trust world ] Vulnerability management lifecycle: Key processes Vulnerability management is a multistep process that must be completed to remain effective. It usually evolves in tandem with the expansion of organizations’ networks. The vulnerability management process lifecycle is designed to help organizations assess their systems to detect threats, prioritize assets, remedy the threats and document a report to show the threats have been fixed. The following sections go into greater detail about each of the processes.
1.
Assess and identify vulnerability Vulnerability assessment is a crucial aspect of vulnerability management as it aids in the detection of vulnerabilities in your network, computer or other IT asset. It then suggests mitigation or remediation if and when necessary. Vulnerability assessment includes using vulnerability scanners, firewall logs and penetration test results to identify security flaws that could lead to malware attacks or other malicious events.
Vulnerability assessment determines if a vulnerability in your system or network is a false positive or true positive. It tells you how long the vulnerability has been on your system and what impact it would have on your organization if it were exploited.
An effective vulnerability assessment performs unauthenticated and authenticated vulnerability scans to find multiple vulnerabilities, such as missing patches and configuration issues. When identifying vulnerabilities, however, extra caution should be taken to avoid going beyond the scope of the allowed targets; if those targets are not accurately mapped, other parts of your system may be disrupted.
2.
Prioritize vulnerability Once vulnerabilities have been identified, they must be prioritized, so the risks posed can be neutralized properly. The efficacy of vulnerability prioritization is directly tied to its ability to focus on the vulnerabilities that pose the greatest risk to your organization’s systems. It also aids the identification of high-value assets that contain sensitive data, such as personally identifiable information (PII), customer data or protected health information (PHI).
With your assets already prioritized, you need to gauge the threat exposure of each asset. This will need some inquiry and research to assess the amount of danger for each one. Anything less may be too vague to be relevant to your IT remediation teams, causing them to waste time remediating low- or no-risk vulnerabilities.
Most organizations today prioritize vulnerabilities using one of two methods. They use the Common Vulnerability Scoring System (CVSS) to identify which vulnerabilities should be addressed first — or they accept the prioritization offered by their vulnerability scanning solution. It is imperative to remember that prioritization methods and the data that support them must be re-assessed regularly.
Prioritization is necessary because the average company has millions of cyber vulnerabilities, yet even the most well-equipped teams can only fix roughly 10% of them. A report from VMware states that “50% of cyberattacks today not only target a network, but also those connected via a supply chain.” So, prioritize vulnerabilities reactively and proactively.
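A minimal sketch of that kind of prioritization, weighting the CVSS base score by the business criticality of the affected asset rather than using CVSS alone, is shown below. The vulnerability entries and criticality weights are invented; a real program would also factor in exploit availability and exposure.

# Rank vulnerabilities by CVSS score weighted by asset criticality.
# All entries and the criticality weights are invented for illustration.
vulnerabilities = [
    {"id": "CVE-A", "cvss": 9.8, "asset": "public web server", "criticality": 1.0},
    {"id": "CVE-B", "cvss": 7.5, "asset": "HR database",       "criticality": 0.9},
    {"id": "CVE-C", "cvss": 8.1, "asset": "lab test machine",  "criticality": 0.3},
]

def priority(vuln):
    """Higher score = fix sooner; criticality scales the raw CVSS value."""
    return vuln["cvss"] * vuln["criticality"]

for v in sorted(vulnerabilities, key=priority, reverse=True):
    print(f"{v['id']} on {v['asset']}: priority {priority(v):.1f}")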
3.
Patch/treat vulnerability What do you do with the information you gathered at the prioritization stage? Of course, you’ll devise a solution for treating or patching the detected flaws in the order of their severity. There are a variety of solutions to treat or patch vulnerabilities to make the workflow easier: Acceptance: You can accept the risk of the vulnerable asset to your system. For noncritical vulnerabilities, this is the most likely solution. When the cost of fixing the vulnerability is much higher than the costs of exploiting it, acceptance may be the best alternative.
Mitigation: You can reduce the risk of a cyberattack by devising a solution that makes it tough for an attacker to exploit your system. When adequate patches or treatments for identified vulnerabilities aren’t yet available, you can use this solution. This will buy you time by preventing breaches until you can remediate the vulnerability.
Remediation: You can remediate a vulnerability by creating a solution that will fully patch or treat it, such that cyberattackers cannot exploit it. If the vulnerability is known to be high risk and/or affects a key system or asset in your organization, this is the recommended solution. Before it becomes a point of attack, patch or upgrade the asset.
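The cost-versus-risk logic behind those three options can be made concrete in a small helper function. The thresholds are arbitrary placeholders for illustration, not a recommended policy; a real decision would follow the organization's own risk appetite.

# Toy decision helper mapping a vulnerability's risk and fix cost to a treatment.
# Thresholds are arbitrary placeholders, not a recommended policy.
def choose_treatment(priority_score: float, fix_cost: float, expected_loss: float,
                     patch_available: bool) -> str:
    if priority_score < 3.0 and fix_cost > expected_loss:
        return "accept"        # low risk, fixing costs more than the exposure
    if not patch_available:
        return "mitigate"      # buy time with compensating controls
    return "remediate"         # patch or upgrade before it becomes a point of attack

print(choose_treatment(priority_score=9.8, fix_cost=2_000, expected_loss=500_000,
                       patch_available=True))   # -> remediate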
4.
Verify vulnerability Make time to double-check your work after you’ve fixed any vulnerabilities. Verifying vulnerabilities will reveal whether the steps made were successful and whether new issues have arisen concerning the same assets. Verification adds value to a vulnerability management plan and improves its efficiency. This allows you to double-check your work, mark issues off your to-do list and add new ones if necessary.
Verifying vulnerabilities provides you with evidence that a specific vulnerability is persistent, which informs your proactive approach to strengthen your system against malicious attacks. Verifying vulnerabilities not only gives you a better understanding of how to remedy any vulnerability promptly but also allows you to track vulnerability patterns over time in different portions of your network. The verification stage prepares the ground for reporting, which is the next stage.
5.
Report vulnerability Finally, your IT team, executives, and other employees must be aware of the current risk level associated with vulnerabilities. IT must provide tactical reporting on detected and remedied vulnerabilities (by comparing the most recent scan with the previous one). The executives require an overview of the present status of exposure (think red/yellow/green reporting). Other employees must likewise be aware of how their internet activity may harm the company’s infrastructure.
To be prepared for future threats, your organization must constantly learn from past dangers. Reports make this idea a reality and reinforce the ability of your IT team to address emerging vulnerabilities as they come up. Additionally, consistent reporting can assist your security team in meeting risk management KPIs, as well as regulatory requirements.
[Related: Everything you need to know about zero-trust architecture ] Top 8 best practices for vulnerability management policy in 2022 Vulnerability management protects your network from attacks, but only if you use it to its full potential and follow industry best practices. You can improve your company’s security and get the most out of your vulnerability management policy by following these top eight best practices for vulnerability management policy in 2022.
1.
Map out and account for all networks and IT assets Your accessible assets and potentially vulnerable entry points expand as your company grows. It’s critical to be aware of any assets in your current software systems, such as individual terminals, internet-connected portals, accounts and so on. One piece of long-forgotten hardware or software could be your undoing. They can appear harmless, sitting in the corner with little or no use, but these obsolete assets are frequently vulnerable points in your security infrastructure that potential cyberattackers are eager to exploit.
When you know about everything that is connected to a specific system, you will keep an eye out for any potential flaws. It’s a good idea to search for new assets regularly to ensure that everything is protected within your broader security covering. Make sure you keep track of all of your assets, whether they are software or hardware, as it is difficult to protect assets that you’ve forgotten about. Always keep in mind that the security posture of your organization is only as strong as the weakest places in your network.
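A lightweight way to keep forgotten assets visible is to record when each known asset was last seen by a scan and flag anything that has gone quiet. The inventory and the 30-day cutoff below are invented; in practice the register would be populated from discovery tooling.

# Flag assets that have not been seen by a scan recently, since these are
# often the forgotten, unpatched corners of a network. Values are invented.
from datetime import datetime, timedelta

inventory = {
    "web-01":         datetime(2022, 7, 20),
    "legacy-printer": datetime(2022, 3, 2),   # long-forgotten hardware
    "vpn-gateway":    datetime(2022, 7, 22),
}

cutoff = datetime(2022, 7, 25) - timedelta(days=30)
for asset, last_seen in inventory.items():
    if last_seen < cutoff:
        print(f"{asset}: last seen {last_seen:%Y-%m-%d} -> verify, scan or decommission")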
2.
Train and involve everyone (security is everyone’s business) While your organization’s IT specialists will handle the majority of the work when it comes to vulnerability management, your entire organization should be involved. Employees need to be well-informed on how their online activities can jeopardize the organization’s systems. The majority of cyberattacks are a result of employees’ improper usage of the organization’s systems. Though it’s always unintentional, employees that are less knowledgeable about cybersecurity should be informed and updated so that they are aware of common blunders that could allow hackers to gain access to sensitive data.
Due to the increase in remote work occasioned by the pandemic, there’s been a major rise in cybercrime and phishing attacks. Most remote jobs have insufficient security protocols, and many employees that now work remotely have little or no knowledge about cyberattacks. In addition to regular training sessions to keep your IT teams up to date, other employees need to know best practices for creating passwords and how to secure their Wi-Fi at home, so they can prevent hacking while working remotely.
3. Deploy the right vulnerability management solutions Vulnerability scanning solutions come in a variety of forms, typically combining a management console with one or more scanning engines, but some are better than others. The ideal scanning solutions should be simple to use so that everyone on your team can use them without extensive training. When the repetitive stages are automated, users can focus on more complicated activities.
Also, look into the false-positive rates of the solutions you are considering. The ones that prompt false alarms might cost you money and time because your security teams will have to eventually execute manual scanning. Your scanning program should also allow you to create detailed reports that include data and vulnerabilities. If the scanning solutions you’re using can’t share information with you, you may have to select one that can.
4.
Scan frequently The efficiency of vulnerability management is often determined by how frequently you perform vulnerability scanning. Regular scanning is the most effective technique to detect new vulnerabilities as they emerge, whether as a result of unanticipated issues or as a result of new vulnerabilities introduced during updates or program modifications.
Moreover, vulnerability management software can automate scans to run regularly and during low-traffic times. Even if you don’t have vulnerability management software, it’s probably still good to have one of your IT team members run manual scans regularly to be cautious.
Adopting a culture of frequent infrastructure scanning helps bridge the gap that can leave your system at risk to new vulnerabilities at a time when attackers are continually refining their methods. Scanning your devices on a weekly, monthly or quarterly basis can help you stay on top of system weak points and add value to your company.
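As a rough illustration of what "automate scans during low-traffic times" can look like, the following standard-library sketch waits until a quiet weekly window and then calls a placeholder run_scan() function. The scanner command shown is hypothetical and would be replaced by whatever product you actually use, or by a cron job or the scanner's own scheduler.

```python
# Sketch of running a recurring scan during a low-traffic window using only
# the standard library. run_scan() is a placeholder: swap in a call to the
# scanner your organization actually uses.
import subprocess
import time
from datetime import datetime, timedelta

def run_scan() -> None:
    # Placeholder: invoke your scanner's CLI here; this echo is illustrative only.
    subprocess.run(["echo", "running weekly vulnerability scan"], check=True)

def seconds_until(weekday: int, hour: int) -> float:
    """Seconds until the next given weekday (0 = Monday) at the given hour."""
    now = datetime.now()
    days_ahead = (weekday - now.weekday()) % 7
    target = (now + timedelta(days=days_ahead)).replace(
        hour=hour, minute=0, second=0, microsecond=0)
    if target <= now:
        target += timedelta(days=7)
    return (target - now).total_seconds()

while True:
    time.sleep(seconds_until(weekday=6, hour=2))  # Sundays at 02:00 local time
    run_scan()
```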
5.
Prioritize scanning hosts Your cybersecurity teams must rank vulnerabilities according to the level of threats they pose to your organization’s assets. Prioritizing allows IT professionals to focus on patching the assets that offer the greatest risk to your organization, such as all internet-connected devices in your organization’s systems.
Similarly, using both automated and manual asset assessments can help you prioritize the frequency and scope of assessments that are required, based on the risk value assigned to each of them. A broad assessment and manual expert security testing can be assigned to a high-risk asset, while a low-risk asset merely requires a general vulnerability scan.
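A hedged sketch of what risk-based prioritization might look like in code: findings are ranked by CVSS score weighted by exposure. The sample findings, CVE identifiers and the doubling weight for internet-facing assets are all illustrative assumptions, not a standard formula.

```python
# Rank findings by severity (CVSS) weighted by asset exposure.
# The sample data and the weighting are illustrative only.
findings = [
    {"asset": "vpn-gateway", "cve": "CVE-2024-0001", "cvss": 9.8, "internet_facing": True},
    {"asset": "hr-laptop-17", "cve": "CVE-2023-1111", "cvss": 7.5, "internet_facing": False},
    {"asset": "intranet-wiki", "cve": "CVE-2022-2222", "cvss": 5.3, "internet_facing": False},
]

def risk_score(f: dict) -> float:
    # Double the weight of anything reachable from the internet.
    return f["cvss"] * (2.0 if f["internet_facing"] else 1.0)

for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{risk_score(f):5.1f}  {f['asset']:<15} {f['cve']}")
```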
6.
Document all the scans and their results Even if no vulnerabilities are discovered, the results of your scanning must be documented regularly. This creates a digital trail of scan results, which might aid your IT team in identifying scan flaws later on if a potential vulnerability is exploited without the scan recognizing it. It’s the most effective technique to ensure that future scans are as accurate and efficient as possible.
However, always make sure that the reports are written in a way that is understandable not just by the organization’s IT teams, but also by the nontechnical management and executives.
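One lightweight way to build that digital trail, sketched below under the assumption that scan output can be reduced to a list of findings, is to append every run to a JSON Lines file with a timestamp and a plain-language summary.

```python
# Append each scan's results to a JSON Lines audit trail so later
# investigations can compare what past scans did and did not see.
import json
from datetime import datetime, timezone

def record_scan(scanner: str, findings: list[dict], path: str = "scan_history.jsonl") -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "scanner": scanner,
        "finding_count": len(findings),
        "findings": findings,                     # raw detail for technical readers
        "summary": f"{len(findings)} findings",   # short line for non-technical readers
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

record_scan("example-scanner", [{"asset": "web-01", "issue": "TLS 1.0 enabled"}])
```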
7.
Do more than patching In the vulnerability management process, remediation must take shape in the context of a world where patching isn’t the only option. Configuration management and compensating controls, such as shutting down a process, session or module, are other remediation options. From vulnerability to vulnerability, the best remediation method (or a mix of methods) will vary.
To achieve this best practice, the organization’s cumulative vulnerability management expertise should be used to maintain an understanding of how to match the optimal remediation solution to a vulnerability. It’s also reasonable to use third-party knowledge bases that draw on large volumes of vulnerability data.
8.
Maintain a single source of truth When it comes to remediating vulnerabilities, most organizations have multiple teams involved. For instance, the security team is responsible for detecting vulnerabilities, but it is the IT or devops team that is expected to remediate. Effective collaboration is essential to create a closed detection-remediation loop.
If you are asked how many endpoints or devices are on your network right now, will you be confident that you know the answer? Even if you do, will other people in your organization give the same answer? It’s vital to have visibility and know what assets are on your network, but it’s also critical to have a single source of truth for that data so that everyone in the company can make decisions based on the same information. This best practice can be implemented in-house or via third-party solutions.
Be wiser than the attackers As you continually change the cloud services, mobile devices, apps and networks in your organization, you give threats and cyberattacks the opportunity to expand. With each change, there’s a chance that a new vulnerability in your network will emerge, allowing attackers to sneak in and steal your vital information.
When you bring on a new affiliate partner, employee, client or customer, you’re exposing your company to new prospects as well as new threats. To protect your company from these threats, you’ll need a vulnerability management system that can keep up with and respond to all of these developments. Attackers will always be one step ahead if this isn’t done.
Read next: Malware and best practices for malware removal
"
|
3,427 | 2,022 |
"Report: Only 8 ransomware groups have attacked over 500 organizations | VentureBeat"
|
"https://venturebeat.com/security/report-only-8-ransomware-groups-have-attacked-over-500-organizations"
|
"Report: Only 8 ransomware groups have attacked over 500 organizations
Kaspersky’s threat intelligence team has conducted analysis into the most common tactics, techniques, and procedures (TTPs) used by 8 of the most prolific ransomware groups during their attacks. The research revealed that different groups share more than half of the cyber kill chain and execute the core stages of an attack identically.
The researchers looked at the activity of Conti/Ryuk, Pysa, Clop (TA505), Hive, Lockbit2.0, RagnarLocker, BlackByte and BlackCat. These groups have been active in the United States, Great Britain and Germany, and have targeted over 500 organizations within industries such as manufacturing, software development and small business, between March 2021 and March 2022.
The observed attacks were often predictable, following a pattern that includes compromising the corporate network or victim’s computer, delivering malware , further discovery, credential access , deleting shadow copies, removing backups and finally achieving their objectives.
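One of those stages, deleting shadow copies and backups, often shows up as recognizable command lines such as vssadmin delete shadows. The sketch below is a simplified illustration of flagging those commands in a log of process command lines; the log format is an assumption, and in practice this kind of detection belongs in EDR or SIEM rules rather than a standalone script.

```python
# Flag commands commonly associated with shadow-copy deletion and backup
# tampering. The log format (one command line per line) and the pattern list
# are assumptions; this is an illustration, not a production detection.
import re

SUSPICIOUS = [
    r"vssadmin\s+delete\s+shadows",
    r"wmic\s+shadowcopy\s+delete",
    r"wbadmin\s+delete\s+catalog",
    r"bcdedit\s+/set\s+\{default\}\s+recoveryenabled\s+no",
]
PATTERNS = [re.compile(p, re.IGNORECASE) for p in SUSPICIOUS]

def suspicious_lines(log_path: str):
    with open(log_path) as f:
        for line_no, line in enumerate(f, 1):
            if any(p.search(line) for p in PATTERNS):
                yield line_no, line.strip()

for no, line in suspicious_lines("process_command_lines.log"):  # placeholder path
    print(f"ALERT line {no}: {line}")
```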
The emergence of a phenomenon called ransomware-as-a-service (RaaS) has helped lead to the similarities in behavior. Under this model, ransomware groups do not deliver malware by themselves, but only provide the data encryption services. Since the people who deliver malicious files also want to simplify their lives, they use template delivery methods or automation tools to gain access.
The researchers also noted that different groups have been reusing old and similar tools to make life easier for attackers and reduce the time it takes to prepare an attack. Although it is possible to detect recycled techniques, it’s hard to do so preventively across all possible threat vectors. Organizations can make themselves targets with slow installation of updates and patches.
Read the full report by Kaspersky.
"
|
3,428 | 2,022 |
"Immue discovers new exploitation of Apple’s private relay | VentureBeat"
|
"https://venturebeat.com/security/immue-discovers-new-vulnerability-in-apples-private-relay"
|
"Immue discovers new exploitation of Apple’s private relay
Immue, an Israel-based cybersecurity company providing holistic anti-bot and anti-fraud defense solutions, claims it’s found concerning vulnerabilities in one of Apple’s latest privacy features — the iCloud Private Relay.
While helping organizations across multiple industries stop cyber fraud and bot attacks targeted at their companies, Immue said it detected many of these attacks coming from IP addresses associated with Apple and its two supporting providers, Akamai and Cloudflare.
In an exclusive interview with VentureBeat at the ongoing CyberWeek Tel Aviv, cofounders Amit Yossi Siva Levi (CTO) and Shira Itzhaki (CEO) confirmed that threat actors take advantage of the anonymity and web browsing privacy features of Apple’s technology to mask their IPs and launch multiple untraceable attacks.
How Apple’s private relay works In June of 2021, Apple hosted its annual Worldwide Developers Conference to showcase its latest technologies. Among the technologies launched, the most significant and controversial was the private relay technology which would form part of the iCloud+ subscription. With this service, users on iOS 15, iPadOS 15 and macOS Monterey can browse securely without worrying about having their browsing activities tracked and sold to the highest bidder.
By enabling this feature on an upgraded Apple device, users’ browsing activities on Safari are routed through two separate internet “relays” using a sophisticated multi-hop architecture. This rerouting guarantees that no single party — including Apple — can track the exact origin of the request, making it impossible for websites to create a detailed profile of users. Some experts have even called it “internet privacy on steroids.” The exploitation How private data is managed and shared has always been a concern for the average internet user. McKinsey reports that internet users are becoming increasingly intentional about the kind of data they share online and with whom, as no industry reached a 50% trust rating. With multiple data breaches springing up globally, many providers and even the government have made efforts towards curbing the menace — so much so that Gartner predicts the personal data of over 75% of the global population will be protected by new privacy regulations by 2025.
The McKinsey report also revealed that these breaches have made users turn to tools that give them more control over their data and its privacy — like the private relay. However, in solving this problem, Apple has inadvertently created an opening for cyberattackers to thrive.
In what Levi described as “a new kind of attack,” he explained that masking IP addresses with proxies, VPNs or the Tor network to avoid IP-based detection (like rate limiting or IP reputation scoring) is the single most important rule of a cyberattack. He added that in the last two months, Immue has seen attackers abuse Apple’s new feature to mask their IPs and send thousands of bots to attack their customers. These private relay IPs are also whitelisted by Apple, giving adversaries uninhibited access to any website. Immue reports the attackers used 192 different IPs to generate three attacks with a volume of up to 50,000 bot requests each time.
Although Apple said the private relay technology was fitted with anti-fraud and anti-abuse systems like rate-limiting, single-use authentication tokens and consistent IP address per browsing session, it advised that fraud detection systems relying only on IP addresses should be updated to control the situation.
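For teams that want to treat Private Relay traffic differently rather than block it, one approach is to tag requests whose source IP falls inside Apple's published egress ranges and route them to additional, non-IP-based checks. The sketch below assumes the ranges are available as a CSV with the CIDR in the first column at the URL shown; verify both against Apple's current documentation before relying on them.

```python
# Tag requests originating from iCloud Private Relay egress ranges so they
# can be scored with signals other than IP reputation alone.
# The download URL and CSV layout are assumptions to verify.
import csv
import ipaddress
import urllib.request

EGRESS_URL = "https://mask-api.icloud.com/egress-ip-ranges.csv"  # assumed URL

def load_relay_networks():
    with urllib.request.urlopen(EGRESS_URL) as resp:
        text = resp.read().decode("utf-8")
    nets = []
    for row in csv.reader(text.splitlines()):
        if not row:
            continue
        try:
            nets.append(ipaddress.ip_network(row[0], strict=False))
        except ValueError:
            continue  # skip headers or malformed rows
    return nets

RELAY_NETS = load_relay_networks()

def is_private_relay(client_ip: str) -> bool:
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in RELAY_NETS)

# Requests from relay ranges shouldn't be blocked outright; instead, route
# them to behavioral or device-level checks rather than IP-based limits alone.
print(is_private_relay("203.0.113.10"))
```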
Founded in January of 2021, Immue claims its offering is helping different organizations across multiple industries like travel, finance, ecommerce, cryptocurrency and more — to outwit the most experienced human fraudsters and undetectable bots. The company says it offers powerful anti-bot and anti-fraud defense in one holistic solution that mitigates the impact of cyberattacks on businesses.
Immue’s unique value proposition, according to its cofounders, is its ability to detect cyber threats that no one knows exist. The company does this by monitoring and gathering data about the latest fraud mechanisms, tools and strategies, and using that information to detect, prevent or stop cyberattacks before they even materialize.
"
|
3,429 | 2,022 |
"Google announces big update to Password Manager | VentureBeat"
|
"https://venturebeat.com/security/google-password-manager"
|
"Google announces big update to Password Manager
Today, Google released a blog post announcing some key changes to Password Manager. The new changes will enable users who have multiple passwords for the same sites or apps to automatically group them on Chrome and Android devices.
At the same time, when entering passwords into online accounts, a feature called Password Checkup will warn users about compromised credentials and weak or reused passwords, and give them an option to change them.
This means that users will have the opportunity to automatically secure weak passwords that put them at risk of being hacked.
In addition, the changes will not only enable users to create strong unique passwords across platforms, but users on Android will be able to create a shortcut to Password Manager on their home screen so they can access their passwords with a single tap.
An intermediate step to passwordless authentication The announcement comes as panic over compromised credentials has reached a breaking point with Verizon’s 2022 Data Breach Investigations Report highlighting that compromised credentials accounted for almost 50% of data breaches.
With the FIDO-passwordless movement fully underway, and providers like Microsoft, Google and Apple committing to deploy passwordless authentication options, these changes to Password Manager provide a welcome intermediary step for users that are still dependent on password-based security.
The new additions to Password Manager will help to flag insecure passwords and provide automatic support to change them so there’s less chance of them being compromised by unscrupulous attackers.
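Google has not detailed Password Checkup's internals here, but breached-password checks of this kind are commonly built on a k-anonymity lookup, where only a short hash prefix ever leaves the device. As an illustration of the general idea (not Google's implementation), here is a sketch against the public Pwned Passwords range API.

```python
# k-anonymity breached-password check: only the first five characters of the
# password's SHA-1 hash are sent to the service. Illustrative only; this is
# not Google's Password Checkup implementation.
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if breach_count("password123") > 0:
    print("This password has appeared in known breaches; change it.")
```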
As Google researchers wrote in the company’s announcement blog post, “strong, unique passwords are key to helping keep your personal information secure online. That’s why Google Password Manager can help you create, remember and autofill passwords on your computer or phone, on the web in Chrome, and in your favorite Android and iOS apps.” “Google Password Manager can create unique, strong passwords for you across platforms, and helps ensure your passwords aren’t compromised as you browse the web,” the blog post said.
The global password management market Google is one of the key players in the password management market , which researchers valued at $1.25 billion in 2020 and expect will reach a value of $3.07 billion by 2026 as more organizations and users look for solutions to manage the passwords for their online accounts.
The organization is competing against a range of established providers including 1Password , a password manager that enables users to generate unique passwords for their online accounts with a single click across platforms including Windows, macOS, Linux, iOS and Android.
At the start of this year, 1Password raised $620 million in funding and achieved a $6.8 billion valuation.
Another prominent competitor is LastPass , which enables users to generate passwords with a random generator, and save passwords in the LastPass vault. The vault enables users to log in via the LastPass Authenticator without the need for a password.
Private equity firms acquired LastPass’s parent company for $4.3 billion in December of 2019.
Of course, the key differentiator for Google Password Manager is its direct link to Chrome and Android users and the wider Google product ecosystem, to ensure these users can seamlessly manage and update passwords to protect themselves from credential-based attacks.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
3,430 | 2,022 |
"Research shows data security tools fail against ransomware 60% of the time | VentureBeat"
|
"https://venturebeat.com/security/data-security-ransomware"
|
"Research shows data security tools fail against ransomware 60% of the time
Today, data security provider Titaniam Inc. released the State of Data Exfiltration & Extortion Report, which revealed that while over 70% of organizations have an existing set of prevention, detection and backup solutions, nearly 40% have been hit with ransomware attacks in the last year.
The findings suggest that traditional data security tools, like secure backup and recovery tools, solutions that offer encryption at rest and in transit, tokenization and data masking, are failing to protect enterprises’ data against ransomware threats 60% of the time.
Above all, the research highlights that organizations cannot afford to be reliant on traditional data security tools alone to defend against data exfiltration and double extortion ransomware attacks; they need to be able to encrypt data-in-use to stop malicious actors in their tracks.
The problem with traditional data security tools The problem with traditional data security tools isn’t that they don’t have robust security measures, but that attackers can sidestep these controls by stealing credentials to achieve privileged access to critical data assets.
“These traditional tools are ineffective against ransomware and extortion because the most common attacks aren’t about attackers “hacking” in but rather attackers “logging in” using stolen credentials.
When this happens, traditional security tools view attackers like they would valid users,” said founder and CEO of Titaniam, Arti Raman.
“In this scenario, as attackers move through the network, they can use their credentials to decrypt, detokenize and unmask data like a legitimate user or administrator would as they went about their day-to-day work. Once the data has been decrypted, attackers exfiltrate it and use it as leverage for extortion,” Raman said.
Raman notes that the shift toward exfiltration occurred around mid- to late-2020, when cybercriminals started incorporating data exfiltration to gain more leverage over victims who could otherwise recover using backup and recovery solutions.
The only way to defend against the intrusions typical of modern ransomware attacks is for organizations to deploy data security solutions with encryption-in-use. Encryption-in-use can help obscure data so that it can’t be exfiltrated by attackers who’ve obtained privileged access to enterprise resources.
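Titaniam does not describe its implementation in this report, and true encryption-in-use preserves search and analytics in ways that simple field encryption does not. Still, as a basic illustration of why ciphertext at the application layer blunts exfiltration, here is a sketch using the cryptography package's Fernet to encrypt a sensitive field before storage.

```python
# Illustration only: application-layer field encryption with Fernet
# (pip install cryptography). This is NOT encryption-in-use; plain field
# encryption like this gives up search and analytics on the field, which is
# the trade-off encryption-in-use products aim to avoid.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, fetch keys from a KMS/HSM
fernet = Fernet(key)

record = {"customer": "Alice Example", "ssn": "123-45-6789"}  # sample data
record["ssn"] = fernet.encrypt(record["ssn"].encode("utf-8"))

# Anyone who dumps the datastore sees only ciphertext for the field...
print(record["ssn"])
# ...and decryption requires the key, not just database credentials.
print(fernet.decrypt(record["ssn"]).decode("utf-8"))
```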
The data encryption market The need for enhanced data protection has contributed to a significant growth in the data encryption market , which researchers valued at $9.43 billion in 2020 and anticipate will reach a value of $42.3 billion by 2030, as more organizations seek to keep out unauthorized users.
Gartner [subscription required] also anticipates that data encryption will grow more popular in the future, suggesting that by 2023, 40% of organizations will have a multisite, hybrid and multicloud data encryption strategy, up from less than 5% in 2020.
Titaniam is one of the latest entrants to the market, providing enterprises with a data security platform with encryption-in-use to protect it from unauthorized users who’ve gained privileged access, and raising $6 million as part of a seed funding round at the start of this year.
It’s competing against providers like IBM Security Guardium Data Encryption, which offers enterprises encryption, tokenization, data masking and key management capabilities to protect data in cloud, virtual and on-premise environments. IBM recently reported raising fourth quarter revenue of $16.7 billion.
Likewise, Fortanix occupies a significant position in the market with its Runtime Encryption platform that uses encryption to protect data from being exposed in plaintext. Fortanix most recently raised $23 million as part of a series B funding round in 2019.
The main differentiator between Titaniam and other data encryption providers is that it doesn’t rely on tokenization. This means encryption-in-use doesn’t disrupt full-feature search and analytics applications, providing an answer that balances greater security controls without impeding the user experience.
"
|
3,431 | 2,022 |
"Cyberthreat analyst: Key job skills and expected salary | VentureBeat"
|
"https://venturebeat.com/security/cyber-threat-analyst-key-job-skills-and-expected-salary"
|
"Cyberthreat analyst: Key job skills and expected salary
Cyber threat analysts are professional intelligence experts. They use scientific and technical skills to analyze and address cyber threats for defensive and remedial purposes. This article discusses the role of a cyber threat analyst, the expected salary and scales and some essential skills for a successful cyber threat analyst career.
The evolving cyberworld needs the human factor. This is because products and technologies for cyber defense have a limited scope of functionality. An understanding of the inner workings and motivations of hackers is only achievable when people who possess emotional intelligence and technical skills are involved. They can conduct research that reveals the causes of cyberattacks and helps organizations better guard against and combat them.
Cyber threats are not static. Millions are created annually, with increasing potency, some of which include social engineering attacks, malware, distributed denial of service (DDoS) attacks, advanced persistent threats (APTs), trojans, wiper attacks, data destruction, manipulation and so on. According to Cisco, total DDoS or social engineering attacks are expected to reach 14.5 million in 2022.
Not only does a cyberattack sabotage normal operations, but it may also inflict damage to important IT assets and infrastructure that can be impossible to recover from, especially without sufficient resources. To this end, cyber threat analysts are essential in any organization with digital affiliations.
Who is a cyberthreat analyst? Cyber threat analysts are professional intelligence experts. They protect an organization from digital threats and actively develop programs used to respond to and subdue cyberattacks. Cyber threat analysts protect organizational infrastructure, such as networks, and relevant software or hardware components, like servers or workstations, from cybercriminals and hackers intending to cause damage or steal sensitive information.
As trained professionals, cyber threat analysts specialize in network and IT infrastructure security. They also comprehensively understand cyberattacks, malware and the nature of cybercriminals, and actively strive to anticipate and hinder these attacks.
Also known as threat intelligence analysts, cyber threat analysts analyze digital threats and give clear reports on any indicator of compromise (IoC) discovered and, based on their assessment, they take action to secure assets that are vulnerable to cyberattacks. The work of a cyber threat analyst requires meticulous attention to detail, research and technical skills, and creativity.
Also read: Report: Orgs with zero-trust segmentation avoid 5 major cyberattacks annually Role of a cyberthreat analyst Cyber threat analysts play a crucial role in protecting sensitive information. They work across departments and processes to spot and fix defects in an organization’s security systems and programs and recommend efficient strategies to improve the general security status of an organization.
In the world of cybersecurity, advanced persistent threats (APTs) and defenders are always trying to outwit each other. Information on a hacker’s idiosyncrasies is crucial to proactively tailor cyber defenses and forestall future attacks.
A cybersecurity analyst is tasked with protecting an organization’s hardware, software and networks from theft, loss or unauthorized access. At a small organization, they might be required to perform a variety of cybersecurity tasks, but at larger organizations, there is room for specialization as one part of a larger security team.
More streamlined duties of cyber threat analysts include: 1. Investigating security breaches Data breaches can be overwhelming for an organization. When these events occur, they can jeopardize public and consumer trust; in the worst cases, they can cost thousands or millions of dollars and result in credit card fraud, identity theft or other terrible financial losses. The Identity Theft Resource Center reported that more than 90% of data breaches are cyberattack-related, indicating a quick start to data breaches in 2022 after a record-setting 2021. The repercussions of these breaches may include database destruction, the theft of intellectual property, the leakage of secret information and regulatory responsibilities to notify and possibly compensate people impacted.
It is the responsibility of the cyber threat analyst to scrutinize security breaches to identify hostile hackers and strengthen the organization’s security. Also, cybersecurity analysts are responsible for performing digital forensics at a digital crime scene. They determine whether a real or attempted breach occurred, look for surviving security flaws or malware left behind and try to restore data.
2. Locating vulnerabilities One of the most important aspects of the role of cybersecurity analysts is finding vulnerabilities so they can be rectified before a breach occurs. You could carry out a vulnerability assessment to detect potential threats to organizational security. During this assessment, the analyst highlights the data and assets at risk and details the possible reasons for a future breach.
A vital part of this task is teamwork — not only with other members of the IT team, but also with the other nontechnical staff members whose jobs might be impacted by a setback in security. Cybersecurity analysts need to sustain open communication lines so that they can teach their non-cyber colleagues what they need to know about updated cybersecurity procedures and how to protect themselves from external attacks.
3. Performing ethical hacking Ethical hacking is another important role of security analysts. By applying this practice, cybersecurity professionals do not aim to crack security to steal data themselves; instead, they seek to discover security back doors and block them before malicious hackers leverage them. For people who love problem-solving and unraveling challenging security matters, penetration testing can be one of the most exciting parts of the job. They might use software applications or manual coding skills to virtually hack and exploit their system, so that they may determine how to fix it.
Tools like Kali Linux and Metasploit, robust software programs that disclose and probe vulnerabilities across operating systems, come in handy for penetration testing. Kali Linux and Metasploit are both used by cybercriminals and ethical hackers to point out the same weaknesses but with different motives. While the cybercriminal aims to attack, the security professional seeks to defend.
4. Developing and implementing organization-wide security protocols As a cybersecurity analyst, you will be required to develop security programs for an entire organization and its digital ecosystem. Because security affects everyone, regardless of how nontechnical their professions are, everyone in an organization must be aware of and follow security policies. As a cybersecurity analyst, you’ll have to create these standards while keeping in mind that a network’s security is only as strong as its weakest link.
5. Installing and operating security software Managing, installing and utilizing security software is a critical part of the cyber threat analyst’s role. As a cybersecurity analyst, you might install system-wide software for better email or login security, prevent malware from traveling into a network from an individual computer, improve security for mobile devices or bolster your network’s shield against unwelcome infiltrations. You may also use specialized software to boost your penetration testing, shield your organization’s website and guard network traffic.
Also, a cyber threat analyst needs to ensure that only the people who should be privy to delicate data have access to those systems. One of the most critical facets of this task is identity and access management (IAM). When executing IAM, analysts ensure that each user on the network is properly identified and that their network access levels are what they need to be while restricting the system’s vulnerability to security problems.
A recent World Economic Forum report revealed that 95% of cybersecurity breaches are caused by human error. Some of the most severe ransomware incidents of recent years were caused when non-tech-savvy employees unknowingly downloaded malware. Proper IAM can attenuate this risk when properly implemented.
Other responsibilities of cyberthreat analysts include: Developing security strategies to protect data systems from potential threats Analyzing security breaches and assessing damage extent Keeping abreast of current digital security trends to recommend best practices on security enhancement for an organization Fixing detected vulnerabilities, to maintain very low to nonexistent penetration risk Responding to cyberattacks quickly and efficiently so that minimal damage is incurred Spearheading cybersecurity training to ensure that all departments in an organization maintain high security standards Liaising with stakeholders about cybersecurity issues and providing future recommendations Also read: Cybersecurity landscape: The state of managed security services, 2022 Expected salary and scale Since everything from our social lives to important corporate data is moving online, cybersecurity has indeed become a significant priority for just about every organization. Consequently, cybersecurity analysts are often well-compensated for their skills.
According to the Bureau of Labor Statistics (BLS) , the average annual salary of a cybersecurity analyst is $103,590. Salaries vary depending on factors including skills, experience and qualifications, location and sector. This means that the longer you are in this field, the more you can make. Also, if you have a good degree and a specialized skill set, you may be able to make more. There will be different pay rates for various titles as well.
The average entry-level cybersecurity analyst salary in the United States is $72,215, but the salary range typically falls between $65,820 and $79,148. However, based on skill level, location, and years of experience, the salary expectation can rise to $111,432 annually.
Top 10 must-have skills for a successful career as a cyber threat analyst The job of a cybersecurity analyst is a specialized position that requires a unique set of skills. Cyber threat analysts use a combination of technical and workplace skills to examine vulnerabilities and respond to security incidents. Some of the top-tier skills of a cybersecurity analyst are: 1. Intrusion detection The major duty of a cyber threat analyst involves monitoring network activity for possible intrusions. Knowing how to use intrusion detection software, including security information and event management (SIEM) products, intrusion detection systems (IDS) and intrusion prevention systems (IPS) ensures that they can quickly spot suspicious activity or security infringements.
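As a small, self-contained example of the kind of signal an analyst looks for, the sketch below counts failed SSH logins per source IP in a Linux auth log and flags anything over a threshold. The log path, line format and threshold are assumptions, and production detection would live in an IDS or SIEM.

```python
# Count failed SSH logins per source IP in an auth log and flag heavy hitters.
# Path, log format and threshold are assumptions for illustration.
import re
from collections import Counter

FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 10

counts = Counter()
with open("/var/log/auth.log") as f:
    for line in f:
        m = FAILED.search(line)
        if m:
            counts[m.group(1)] += 1

for ip, n in counts.most_common():
    if n >= THRESHOLD:
        print(f"Possible brute force: {ip} had {n} failed logins")
```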
2. Incident response While prevention is the main goal of cybersecurity, quickly responding when security incidents occur is essential to keep damage and loss to a minimum. Effective incident handling requires an understanding of an organization’s incident response plan, as well as skills in digital forensics and malware analysis.
3. Cyber threat intelligence Cyber threat intelligence is a body of information that enables an organization to understand the threats that have targeted, will target, or are currently targeting the organization. Threat intelligence ensures anticipation, prevention and identification of cyber threats that try to take control of valuable resources.
Threat intelligence teams gather raw data about emerging or existing threats from several sources. This data is then analyzed and probed to produce threat intelligence feeds and management reports that contain information that can be used by automated security control solutions. You can be a more valuable cybersecurity analyst by keeping up with trends in the threat landscape.
4. Knowledge of regulatory guidelines Cybersecurity is meant to protect an organization from attack, theft and loss, as well as adhere to industry regulations. As a cyber threat analyst working for a company that does business around the globe, familiarity with General Data Protection Regulation (GDPR) could be beneficial. Data privacy is becoming an integral part of security and adherence for businesses. Cybersecurity analysts should know the rudiments of data privacy as well as the regulations around them such as GDPR, Health Insurance Portability and Accountability Act (HIPAA), and Children’s Online Privacy Protection Act (COPPA).
5. Operating systems knowledge Security hazards exist in all operating systems, both on computers and other portable devices. Building a deep familiarity with macOS, Windows and Linux, as well as their command-line interfaces, sets you up for success as a cyber threat analyst. It might also be beneficial to study the threats and weaknesses associated with mobile operating systems, like iOS and Android.
6. Network security control Many cyber attacks happen across a network of connected devices. The very same applications that allow businesses to collaborate can also cause security vulnerabilities. To keep an organization secure, security analysts need knowledge of wired and wireless networks, and how to safeguard them.
7. Controls and frameworks A cybersecurity framework provides a compilation of the best strategies, programs, tools and security procedures designed to help secure an organization’s data and business processes. A control is a measure that a company applies to protect itself from vulnerabilities and invasions. The chosen framework will vary depending on the company and industry. It might be helpful to get familiarized with some of the most common cybersecurity frameworks.
8. Endpoint management As more people find themselves working from home, organizations need security professionals who know how to protect numerous endpoints, such as computers, phones and internet of things (IoT) devices. Common tools that help with this include firewalls, antivirus software, network access controls, and virtual private networks (VPNs).
9. Data security Data is a valuable asset for many organizations. Knowing how to protect it involves understanding encryption, access management, transmission control protocol and internet protocol (TCP/IP), and the CIA triad: confidentiality, integrity and availability.
10. Programming Although advances in technology are enabling cyber threat analysts to perform their work without having to write code, a fundamental understanding of languages like JavaScript, Python and C/C++ could give you a competitive advantage.
Developing your skill set Even as cybersecurity analysis is a technical role with some job-specific capabilities, you’ll also want to develop your workplace skills. This is because technical skills alone are not enough if you intend to work in an organization or communicate with clients and peers.
Some vital workplace skills include: Communication : You may understand the threats to your company’s systems, but you need to be able to describe them in simple terms to others. You will have to communicate with others often and work with a team that is responsible for security. When security events occur, you will need to liaise with your security team and catalog the process of investigation and retrieval. You may also be tasked with training your colleagues in the best security practices.
Strong attention to detail : You need to be detail-oriented to do well in this role, paying rapt attention to the smallest adjustments and modifications in your organization’s network, because noticing a small irregularity could mean saving your company from a big data loss.
Critical thinking : Whether you’re reacting to a threat, repairing a vulnerability, or proposing new security protocols, critical thinking skills empower you to make data-driven decisions as a cyber threat analyst.
Read next: Top 20 cybersecurity interview questions to know in 2022
"
|
3,432 | 2,022 |
"Dark data: Managing the data you can’t see | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/dark-data-managing-the-data-you-cant-see"
|
"Dark data: Managing the data you can’t see
In today’s era of seemingly infinite data volume and complexity, many enterprises are unintentionally neglecting an entire category of data that is critical to their data protection and management practices. On average, more than 50% of a company’s data is “dark” – information held up in data repositories with no attached or determined value. In addition to costing an average $26 million in storage expenses per year, dark data poses significant risks to an enterprise’s security and compliance efforts, making it more important than ever to address the foundational issues that cause it.
Dark data threatens protection Most businesses lack clarity around the data they need to protect. Because dark data is often out of sight and out of mind for many enterprises, dark data reservoirs – holding sensitive and valuable data – become an enticing target for cybercriminals and ransomware attacks.
Additionally, nearly half of senior IT decision makers cannot confidently and accurately state the exact number of cloud services that their company is currently using, even as enterprises implement a multicloud approach with both on-premises and public cloud resources as part of their data infrastructure.
If an organization fails to shine a light on dark data, especially dark data stored in the cloud, multicloud approaches can further widen the door to cyberattacks and recovery at scale cannot be ensured.
Surviving any kind of ransomware attack requires an understanding of what and where your data is, as well as what it’s worth. The more organizations know about the data they hold, the more effective they will be in understanding how to protect it from risk and how to recover after an attack.
Dark data threatens compliance Untagged and unstructured data also poses challenges in meeting regulatory requirements that are constantly evolving. For example, the California Consumer Privacy Act – or CCPA – which is currently limited in scope but will become fully operative by January 2023, will require businesses – including data brokers – to give consumers notices explaining their privacy practices.
While we don’t yet have a federal data compliance law, states are following California’s lead. With data privacy laws expanding into Virginia, Colorado, Massachusetts and New York, companies that identify and catalog their most critical information, remove information that contains no value and ensure compliance with all local regulations are best suited to proactively manage information risk and eliminate gaps in data governance.
Tactically, enterprises may implement data capture, archiving and surveillance capabilities to follow data compliance requirements. Better management of dark data will help companies comply with stringent regulations and implement retention policies across their entire data estate.
Dark data and sustainability What’s more, dark data plays a significant role in an enterprise’s environmental compliance – another set of increasing regulations. As enterprises work to develop sustainability programs to meet carbon reduction standards, the environmental cost of dark data must be a priority. Dark data storage was estimated to emit 6.4 million tons of carbon dioxide into the atmosphere in 2020. And the future outlook is even worse – analysts predict dark data will grow to 91 ZB by 2025 (over four times the volume in 2020). This means dark data will continue to emit carbon into the atmosphere at alarming rates.
To protect the planet from dark data’s waste, businesses must review their data management strategies, identify valuable data and rid their data centers and clouds of unnecessary data. By properly managing dark data, there is significant opportunity for enterprises to reduce their carbon footprint, comply with industry environmental regulations and meet sustainability goals that are increasingly important to a wide range of stakeholders.
Managing and protecting dark data It’s clear that dark data poses threats to an enterprise’s security and compliance. So how can data managers better identify, manage and protect dark data within their company? First, data officers must develop and act from a proactive data management frame of mind, which allows organizations to gain visibility into their data, take control of data-associated risks and make informed decisions on which data to keep versus delete before a critical security event takes place.
Some tactics data managers should implement to establish a proactive mindset are data mapping, used to discover all sources and locations of collected and stored data, and data minimization, used to reduce the amount of data being stored and confirm that retained data is directly related to the purpose in which it was collected.
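A minimal data-mapping sketch along those lines: walk a file share and flag files untouched for several years as candidates for review, archiving or deletion. The root path and age threshold are placeholders.

```python
# Walk a file share and list files untouched for a given number of days as
# candidates for review, archive or deletion. Path and threshold are placeholders.
import os
import time

ROOT = "/mnt/shared"          # placeholder path
MAX_AGE_DAYS = 3 * 365        # flag anything untouched for roughly three years
cutoff = time.time() - MAX_AGE_DAYS * 86400

stale_bytes = 0
for dirpath, _, filenames in os.walk(ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            st = os.stat(path)
        except OSError:
            continue
        if st.st_mtime < cutoff:
            stale_bytes += st.st_size
            print(f"stale: {path} ({st.st_size} bytes)")

print(f"Total candidate dark data: {stale_bytes / 1e9:.2f} GB")
```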
Second, enterprises should also use technology advancements to their advantage. Artificial intelligence (AI) and machine learning (ML) offer significant opportunities to effectively identify, manage and protect large pools of untagged, unstructured data and play a vital role in data management processes.
The ultimate goal is to manage the information, not just the data, at the source (edge) by quickly scanning, tagging and classifying information to ensure that sensitive or risky data is properly managed and protected, regardless of where it lives. As such, transparent AI and ML policies help businesses gain full visibility into their data by sourcing vulnerabilities and securing risks. That’s the next frontier.
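Before any ML model enters the picture, even simple rule-based tagging can surface obviously sensitive files. The sketch below scans text files for a few illustrative patterns; a production classifier would use far richer rules, validation and, eventually, trained models.

```python
# Rule-based tagging as a first pass before heavier ML classification:
# scan text files for patterns that suggest sensitive data.
# The patterns and root path are illustrative, not a production ruleset.
import pathlib
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
}

def tag_file(path: pathlib.Path) -> set[str]:
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return set()
    return {tag for tag, pattern in PATTERNS.items() if pattern.search(text)}

for path in pathlib.Path("/mnt/shared").rglob("*.txt"):   # placeholder root
    tags = tag_file(path)
    if tags:
        print(f"{path}: {', '.join(sorted(tags))}")
```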
Properly managed dark data offers a more secure and compliant future for organizations, lowers costs and enables actions via previously untapped intelligence, opening possibilities for organizational optimization and innovation within any company.
Ajay Bhatia is vice president and general manager, data compliance and governance at Veritas Technologies.
"
|
3,433 | 2,022 |
"Cut through the red tape by empowering citizen data analysts | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/cut-through-the-red-tape-by-empowering-citizen-data-analysts"
|
"Cut through the red tape by empowering citizen data analysts
Whether you’re stuck in line at the DMV or in limbo getting a project approved, red tape is the enemy of progress — and it plagues enterprise data.
The growth of enterprise data has overwhelmed IT departments, which face a severe backlog that’s delayed the average company’s IT projects by three months to a year.
Employees are stuck waiting on an open IT ticket before they can access vital datasets, making it difficult to reap the benefits of next-generation data analytics.
Enter citizen data analysts: nontechnical business users who can access data, create their own data solutions and use analytics to inform and improve their work. By empowering employees outside the IT department to take ownership of their own data needs, enterprises unlock the full potential of data analytics — without the wait.
Take a load off IT’s plate Data touches every part of the modern enterprise, with data volumes projected to grow tenfold by 2025. From sales to finance to HR, every department can employ data to make sound decisions and track goals along the way. In fact, the average company uses five internal applications to support their decision-making — and one out of five uses more than 20.
Despite recognizing the value of data, however, 86% of businesses aren’t yet prepared to handle more data.
For many organizations, data governance functions as a bureaucracy. Business users rely on IT teams to handle an assortment of data processes: setting up server databases, managing data warehouses , integrating data into legacy systems and the growing array of cloud-based applications … the list of processes is never-ending. And when the time comes for end-of-quarter reports or updated metrics, IT teams provision access on a case-by-case basis, often building custom one-off integrations to connect platforms.
As organizations transition to hybrid work models and more data moves to the cloud, technical teams have far too much on their plates. Don’t assume your IT talent is safeguarded from this trend — 97% of data teams are currently at or over capacity and that number likely includes yours.
Democratizing datasets Nontechnical employees don’t need to be experts in configuring multiserver systems — that’s what your IT department is for. But they do benefit from quick, seamless data access that allows individual business users across departments to make data-driven decisions in real time.
The concept is similar to low-code/no-code tools, which enable citizen developers to create their own digital solutions with minimal IT intervention. Likewise, data-connectivity and integration-technology tools put data into the hands of business users on their own terms — and the opportunities are endless when that occurs.
Democratized datasets allow business users to access and analyze their data in the moment — enabling teams to make more informed, agile decisions aligned with fast-changing trends and on-the-ground developments. You can’t wait until the next quarterly review to change course.
For example, a demand-generation team analyzes software downloads to determine when customers are most active, while supply chain analysts track spikes in purchase orders during a busy month to increase manufacturing more rapidly.
IT still plays a role in these processes, but it no longer involves time-consuming, tedious tasks like replicating massive amounts of data and integrating platforms with the latest version of Salesforce or Paylocity.
Reduced burden on technical teams allows IT staffers to focus on what they do best: establishing proper business credentials and bolstering data security — ensuring that your technology infrastructure is helping your business instead of hindering it.
How to empower your citizen data analysts Opening opportunities for citizen analysts speeds up your operations and unlocks newfound business value.
More than two-thirds of organizations say data analytics help them make better strategic decisions, while half say it leads to enhanced operational process control, a better understanding of consumers and cost reductions.
Are you tired of waiting months or even years to put your data to use? Here are three considerations to empower and support a cadre of citizen data analysts at your organization: Put IT resources toward access permission Account executives need access to last week’s sales data, demand generation marketers need the latest campaign metrics, and logistics managers need today’s purchase orders and invoices.
Only the department working with the data knows what works for that team, so your IT department should focus on setting up the right controls and user permissions to open up data access to lines of business without compromising security. No matter what software or digital solutions your company turns to, this is one area where your IT team is needed.
Democratizing data access across your organization does not mean opening up the floodgates for anyone to see and manipulate secure data. By setting up the right user credentials within your data ecosystem, IT administrators can empower lines of business to easily work with the data that matters to them, and only that data.
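To make those IT-managed permissions concrete, here is a minimal sketch of least-privilege, read-only grants in a Postgres-style SQL dialect, generated from Python. The role and schema names are hypothetical examples, not a prescription for any particular warehouse, and the generated statements would be run through your own SQL client or migration tooling.

```python
# Sketch: generate least-privilege, read-only grants per line of business.
# Role and schema names are hypothetical; adapt to your warehouse's SQL dialect.

DEPARTMENT_SCHEMAS = {
    "marketing_analysts": ["campaign_metrics", "web_events"],
    "finance_analysts": ["invoices", "purchase_orders"],
}

def read_only_grants(role: str, schemas: list[str]) -> list[str]:
    """Build GRANT statements giving a role SELECT-only access to its schemas."""
    statements = []
    for schema in schemas:
        statements.append(f"GRANT USAGE ON SCHEMA {schema} TO {role};")
        statements.append(f"GRANT SELECT ON ALL TABLES IN SCHEMA {schema} TO {role};")
    return statements

if __name__ == "__main__":
    for role, schemas in DEPARTMENT_SCHEMAS.items():
        for stmt in read_only_grants(role, schemas):
            print(stmt)  # execute via your warehouse's SQL client or migration tool
```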
Work with partners to achieve universal connectivity Workplace apps have skyrocketed in popularity, with the number of public APIs growing by the thousands annually.
Data is not only created in more places today; it also needs to connect with more platforms than ever before.
If you’re looking to foster cross-departmental collaboration, it doesn’t help if your datasets work with Hubspot but don’t mesh with Salesforce. You want to avoid a scenario in which departments each own a piece of the puzzle, but aren’t able to effectively work together to achieve big-picture goals.
Fortunately, software solutions on the market can help create universal connections. To find the right data connectivity solution for your organization, prioritize tools that offer seamless and straightforward connections to both your legacy system and cloud platforms.
Strive for real-time analytics Data has an expiration date. By the time enterprise data lands in the right hands by way of IT, the information is often obsolete.
Take advantage of real-time data platforms that move datasets directly from storage to analytics platforms without replicating entire datasets. Real-time connectivity allows business teams to build reports leveraging live data from an ever-expanding catalog of applications. That could entail a logistics team connecting their operations platform to a dashboard in their warehouses to give workers context on real-time inventory and shipment numbers.
Whether it’s the most up-to-date sales figures or minute-by-minute updates on marketing campaigns, real-time analytics provide accurate, timely insight into your business.
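As a minimal illustration of querying live data in place rather than replicating it, the sketch below runs an aggregate directly against the source each time the report refreshes. Here `sqlite3` merely stands in for whatever live operational store your connectivity tool exposes, and the table and column names are invented for illustration.

```python
import sqlite3

# sqlite3 stands in here for a live operational store exposed by a
# data-connectivity tool; in practice the connection would point at the
# source system rather than at a replicated copy of it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE purchase_orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO purchase_orders VALUES (?, ?)",
    [("north", 1200.0), ("north", 800.0), ("south", 450.0)],
)

def live_report(connection: sqlite3.Connection) -> list[tuple[str, float]]:
    """Aggregate current order totals at query time -- no batch export, no copy."""
    cursor = connection.execute(
        "SELECT region, SUM(amount) FROM purchase_orders GROUP BY region"
    )
    return cursor.fetchall()

print(live_report(conn))  # re-run whenever the dashboard refreshes
```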
Data has the power to transform your business. But not if it’s stuck waiting on overburdened IT teams: when every request needs an open ticket, valuable insights and metrics are left to sit in some dark, lonely warehouse instead of being put to use in a sales dashboard or upcoming strategic plan.
Bypassing the IT jam and equipping all employees with powerful data and analytic solutions opens the door to new insights and innovations across your organization. If given the right tools, citizen data analysts will lead the way. How are you helping them pave the path forward at your business? Amit Sharma is CEO of CData.
"
|
3,434 | 2,022 |
"Buy now, pay later: How nextgen financing platforms can survive the new frontier of fraud | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/buy-now-pay-later-how-nextgen-financing-platforms-can-survive-the-new-frontier-of-fraud"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Buy now, pay later: How nextgen financing platforms can survive the new frontier of fraud Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Buy now, pay later (BNPL) financing reached record heights in 2021. A September study by Accenture revealed some 45 million BNPL users in the U.S. alone, representing more than 300% year-over-year growth since 2018, according to the Pew Trusts.
Ecommerce merchants are going all-in on partnering with BNPL platforms to offer short-term financing to those clamoring for it – and clearly many are.
But, as with any new financial product, caution is prudent. BNPL schemes are no doubt proving lucrative for fraudsters as a new payment fraud avenue. As BNPL attracts regulatory scrutiny, these platforms are at a crossroads. Could a well-devised risk strategy, equipped with the appropriate analytic guardrails, help secure the future of this payment modality? Nextgen layaway: BNPL defined BNPL is a short-term line of credit offered at the point of sale, either in person or online. Amounts generally range from less than $100 up to $10,000. Unlike traditional credit lines, BNPL accounts don’t require a full credit check, making it easier for people with no credit, or even bad credit, to get approved.
Consider BNPL a modern-day layaway program. Like the layaway plans first popularized during the Great Depression, the consumer pays in predetermined installments, instead of paying the full amount upfront. The big difference? This layaway reboot offers instant gratification : the buyer receives the purchase immediately without having to wait until it’s paid off.
Merchants offer BNPL to attract new customers, particularly the kind less likely to abandon their carts and more willing to drop cash on higher-ticket items. Many BNPL service providers don’t even charge interest or fees, provided payments are made on time and the remaining balance is paid in full. Instead, the merchant pays a percentage fee on each transaction.
This flexible payment model has proven particularly appealing not only to millennials but also to Gen Z, the most rapidly growing consumer segment. Gen Z will number an estimated 78 million people in the U.S. by 2034 , comprising the nation’s largest generation ever. Put in perspective, among the one in five holiday shoppers who used a BNPL service in 2020, 22% of them were Zoomers.
Without regulation, a Wild West of fraud Currently, most BNPL products are unregulated by the Truth in Lending Act, so the BNPL landscape goes largely unchecked. Financial losses are absorbed by the merchant or the platforms themselves. Because most BNPL platforms are VC funded, backed by deep-pocket investors with hearty risk appetites, fraud safeguards have tended to take a backseat in the rush to seize market share.
Without the appropriate antifraud defenses, however, BNPL merchants and vendors risk death by a thousand cuts – absorbing $100 charges here, $500 charges there, increasing en masse. Behind this potential “slow bleed” scenario lurk two prevalent fraud types: Synthetic identity fraud.
BNPL providers are particularly vulnerable to synthetic ID fraud. This is because many BNPL shoppers have little to no credit history, and most BNPL services only make a “soft pull” on the consumer’s credit. Put plainly, bogus shoppers with thin credit profiles are relatively easy for fraudsters to fabricate.
Account takeover (ATO).
A fraudster gains access to a customer’s BNPL account and goes on a spending spree, often with the intent of reselling the stolen goods. ATO victims often detect the fraud too late, because the account holder isn’t immediately billed.
Fraudsters are already exploiting the gaps between the merchants’ fraud prevention controls and those of their BNPL providers. For example, some merchants sidestep their own fraud prevention systems altogether, because the BNPL provider assumes the fraud chargeback liability. Savvy fraudsters quickly find – and profit greatly from – such vulnerabilities in the BNPL ecosystem.
The cumulative fraud losses may one day force BNPL providers to pass some of these costs onto the retailers or, ultimately, the account holders themselves in the form of fees and higher rates. Either outcome could cause the payment option to lose its luster – assuming the losses themselves don’t cripple or spell the end of some BNPL platforms.
Retailers and BNPL platforms alike must also consider reputational tarnish of exposing customers to potential fraud.
Fighting back A robust fraud defense is the best offense against BNPL fraudsters. At a minimum, BNPL service providers should incorporate digital/biometric identity verification and transaction monitoring capabilities into their platforms.
Identity verification is particularly important during the application process. Most BNPL systems make the sign-up process effortless, with the user uploading documents to prove their identity. By adding both digital/biometric authentication and liveness checks, BNPL vendors can make their verification processes more resilient without introducing too much friction.
After the new account verification process, BNPL services must protect against account takeover using layered strategies, such as: Implementing a risk-based, multifactor authentication and verification process.
Establishing appropriate limits and adding real-time account freeze capabilities.
Collecting and analyzing information like user device data, geolocation and time of day in concert with other authentication factors.
Applying machine learning (ML) models to event and transactional data to ensure accurate anomaly detection (a minimal sketch of this approach follows this list).
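As a minimal sketch of that last point, assuming scikit-learn is available, an unsupervised model such as an isolation forest can flag unusual BNPL transactions from simple features like amount, hour of day and account age. The features, data and threshold here are illustrative only, not a production fraud model.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative features per transaction: [amount, hour_of_day, account_age_days].
# Real systems would add device, geolocation and velocity features.
rng = np.random.default_rng(42)
normal = np.column_stack([
    rng.normal(120, 40, 500),      # typical purchase amounts
    rng.integers(8, 22, 500),      # daytime shopping hours
    rng.integers(30, 900, 500),    # established accounts
])
suspicious = np.array([[950, 3, 2], [700, 4, 1]])  # large, late-night, brand-new accounts

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
scores = model.decision_function(suspicious)  # lower scores = more anomalous
flags = model.predict(suspicious)             # -1 marks an outlier

for row, score, flag in zip(suspicious, scores, flags):
    print(row, round(float(score), 3), "FLAG" if flag == -1 else "ok")
```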
On the merchant side, retailers should take a keen interest in their BNPL providers’ antifraud capabilities. Moreover, they should augment them with their own automated fraud detection tools as an added layer of protection when possible. Optimizing the customer experience is paramount on both sides. Importantly, this includes shielding their mutual customers from online grifters.
The future of BNPL Together with the help of their retail partners, BNPL vendors can continue to grow demand for flexible financing options while protecting consumers – and their platforms – from fraud activities. They need only apply the same powerful data analysis tools that have proven effective by more mature financial services players.
Ultimately, a robust antifraud posture can do much more than protect customers and the BNPL platform’s bottom line. In an increasingly crowded space, embedding potent ML/AI models and layered fraud defenses could easily help differentiate a BNPL platform from its competitors.
As the federal government signals greater regulatory scrutiny and emphasis on consumer protections, each BNPL service provider and retailer has a vested interest in proactively cultivating a trustworthy reputation. What they do in response to growing fraud risks will help shape this industry – and perhaps determine if BNPL survives and thrives or goes the way of layaway.
Thomas French is a senior fraud and security intelligence advisor at SAS.
"
|
3,435 | 2,022 |
"Adapting industrial control system (ICS) security to the new normal | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/adapting-industrial-control-system-ics-security-to-the-new-normal"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Adapting industrial control system (ICS) security to the new normal Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Despite the number of high-profile attacks in the second half of 2021 slightly declining from earlier in the year, the impact of these attacks has not. With cyber-physical assets remaining highly connected, security measures for critical industrial, healthcare and enterprise ICS devices have taken the front seat. A recent report found that 34% of vulnerabilities disclosed in the second half of 2021 were among cyber-physical systems in the internet of things (IoT), information technology (IT) and internet of medical things (IoMT) verticals, proving the need for said security measures to encompass the entire extended internet of things (XIoT), not just operational technology (OT).
Tardigrade malware Spreading throughout several biomanufacturing facilities, the Tardigrade malware was responsible for at least two attacks in April and October on the healthcare sector that allowed bad actors to obtain sensitive company information and deploy malware.
A polymorphic malware, Tardigrade changes properties based on the different environments it finds itself in, making it hard to predict and protect against. BioBright researchers compared the Tardigrade malware to Smoke Loader and, more specifically, described it as having the functionality of a trojan, meaning that once installed on a victim network it searches for stored passwords, deploys a keylogger, starts exfiltrating data and establishes a backdoor for attackers to choose their own adventure.
In response to the known attacks, healthcare companies that could be at risk were warned to scan their biomanufacturing networks for any potential signs of an attack. In an advisory, the Bioeconomy Information Sharing and Analysis Center (BIO-ISAC), the nonprofit that initially published the Tardigrade research, recommended treating networks as if they were already compromised or soon would be, and reviewing and adjusting cybersecurity measures as needed.
Log4j Another major vulnerability discovered in the second half of 2021, the Log4Shell vulnerability is a zero-day that was first uncovered in December and was found to be impacting the popular Java-based library for logging error messages, Log4j.
Exploitable by remote, unauthenticated users, the flaw affected more than 100 known vendors, according to a list published by CISA, of which more than 20 are ICS vendors.
Since the software was widely used in OT environments, those environments were equally exploitable, and the ability to attack remotely made exploitation easy. In response to the vulnerability’s discovery, Cybersecurity and Infrastructure Security Agency (CISA) director Jen Easterly noted that it presented an urgent challenge to network defenders, given its broad use. End users are reliant on their vendors, and the vendor community was asked to immediately identify, mitigate and patch the wide array of products using this software. Vendors were also advised to communicate with their customers to ensure end users knew that their product contained this vulnerability and should prioritize software updates.
New Cooperative ransomware attack A uniquely vulnerable industry, food and beverage manufacturers have seen a growing focus on their operations due to the devastation that a disruption in their production efforts could cause. Similar to the JBS Foods attack earlier in 2021, NEW Cooperative, an Iowa-based farmer cooperative that is part of the state’s agricultural supply chain, suffered a ransomware attack in September, carried out by BlackMatter.
Similar to food processor JBS Foods, NEW Cooperative quickly and proactively took its systems offline to contain the attack and limit damage. With 40% of grain production running on its software and 11 million animals’ feed schedules relying on it, a successful attack would have quickly and negatively affected the food supply chain.
Recommendations for ICS security From the last six months of 2021, and after studying three different major attacks, security professionals can implement many different measures to fully protect the XIoT moving forward. ICS security measures include network segmentation, phishing and spam protection, and protecting remote-access connections.
This year brought awareness to the fact that network segmentation is key to protecting remotely accessible, internet-connected industrial devices. To best protect against these kinds of attacks, network administrators should ensure that their networks are segmented virtually and set up in such a way that they can be managed and controlled remotely.
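One lightweight way to sanity-check that segmentation, assuming you have authorization to scan from an IT-side host, is to verify that well-known industrial protocol ports are unreachable from corporate subnets. The sketch below attempts plain TCP connections; the host addresses are hypothetical, and a real audit would use proper, change-controlled scanning tools.

```python
import socket

# Well-known industrial protocol ports: Modbus/TCP, EtherNet/IP, Siemens S7comm.
ICS_PORTS = {502: "Modbus/TCP", 44818: "EtherNet/IP", 102: "S7comm"}
OT_HOSTS = ["10.20.0.5", "10.20.0.6"]  # hypothetical PLC/HMI addresses

def reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection from this (IT-side) host succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host in OT_HOSTS:
        for port, proto in ICS_PORTS.items():
            status = "REACHABLE - check segmentation!" if reachable(host, port) else "blocked"
            print(f"{host}:{port} ({proto}) -> {status}")
```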
In addition, phishing attempts have increased as a result of remote work and can be protected against by, among other things, not clicking links from unknown senders, not sharing passwords and enforcing multifactor authentication.
Remote-access connections must also be protected as they’re a critically important aspect of the OT and industrial environments in the new normal. To do so, security professionals in these industries should verify that VPN vulnerabilities are patched, monitor any and all remote connections and enforce permissions and administrative controls related to user access.
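To illustrate the monitoring piece, a simple first pass is to flag remote-access logins from addresses a user has never connected from before. The log entries below are hypothetical, and real deployments would feed VPN or jump-host logs into a SIEM rather than a script.

```python
from collections import defaultdict

# Hypothetical remote-access log entries: (timestamp, username, source_ip).
LOGINS = [
    ("2021-12-01T08:02", "operator1", "198.51.100.10"),
    ("2021-12-02T08:05", "operator1", "198.51.100.10"),
    ("2021-12-03T02:47", "operator1", "203.0.113.77"),   # new IP, odd hour
    ("2021-12-03T09:15", "engineer2", "198.51.100.22"),
]

def flag_new_sources(logins):
    """Yield logins from a source IP not previously seen for that user
    (each user's first-ever login is treated as baseline, not an alert)."""
    seen = defaultdict(set)
    for ts, user, ip in logins:
        if ip not in seen[user] and seen[user]:
            yield ts, user, ip
        seen[user].add(ip)

for ts, user, ip in flag_new_sources(LOGINS):
    print(f"ALERT: {user} connected from new address {ip} at {ts}")
```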
Chen Fradkin is a data scientist at Claroty.
"
|
3,436 | 2,022 |
"Snowplow offers a platform to help enterprises ‘create’ data for AI and analytics | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/snowplow-raises-40m-to-help-enterprises-create-data-for-ai-and-analytics"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Snowplow offers a platform to help enterprises ‘create’ data for AI and analytics Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Most organizations working on artificial intelligence (AI) and advanced analytics projects tend to use data from existing systems like Google Analytics and CRMs. These sources offer plenty of information to work with, but they are also disparate in nature, which means the data they provide comes with varying structures (imagine different field types) and different levels of granularity, quality and completeness.
This makes it difficult for the organization to use the data as-is and adds the technically challenging and time-consuming element of data wrangling to the process – where teams have to work to clean, organize and transform the data into a standardized format for use. Plus, it also creates compliance issues since it is very difficult to track data lineage from a collection of black-box SaaS applications.
Creating data with Snowplow To solve the problem, London-based Snowplow Analytics is offering enterprises a platform to generate structured behavioral data assets (describing the behavior of customers, the actions and decisions they make, and the context of those actions and decisions) that are customized to suit specific AI and BI applications and remain fully compliant at the same time. The company today announced it has raised $40 million in a series B round of funding.
“Behavioral data generation is about connecting the events together that a customer, machine, or application might witness throughout time. This allows the behavior to be analyzed in a highly accurate and secure manner, including being compliant with European third-party privacy rules,” Alex Dean, the cofounder and CEO of Snowplow, told VentureBeat.
The platform delivers AI/BI-ready data directly to the data warehouse or lakehouse of the customer, complete with a common schema that can be used to train models, streamed for real-time applications or enriched with third-party data and systems to meet future use cases. This means no more hefty investments in finding, cleaning and preparing data for analysis.
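For a sense of what AI/BI-ready behavioral data with a common schema can look like, here is an illustrative event record. The field names are generic examples chosen for this sketch, not Snowplow’s actual event model.

```python
from datetime import datetime, timezone
import json

# Illustrative behavioral event in a single, consistent schema.
# Field names are generic examples, not Snowplow's actual event model.
event = {
    "event_id": "7f9c2b1e-0000-4000-8000-000000000000",
    "event_name": "add_to_cart",
    "collector_tstamp": datetime.now(timezone.utc).isoformat(),
    "user": {"domain_userid": "anon-123", "session_index": 4},
    "context": {"page_url": "https://example.com/product/42", "device": "mobile"},
    "properties": {"sku": "SKU-42", "quantity": 1, "price": 19.99},
}

print(json.dumps(event, indent=2))  # lands as one row/record in the warehouse or lakehouse
```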
Users handle every aspect of the platform through a dedicated console, including defining policies on how to create this data in the first place and enabling its sharing and management. According to the company, more than 10,000 enterprises, including Strava, CNN and Software.com, are already using Snowplow to create data for various AI and analytics applications.
“Snowplow is unique in the way that it solves the problem with informative, accurate data. Other companies create behavioral data (e.g., from web and mobile) but typically to power their own applications in their own schema — examples include digital analytics solutions (e.g., Google Analytics) and CDPs (e.g., Segment, mParticle ). However, unlike these solutions, Snowplow technology is focused on delivering the best AI and BI-ready data directly into the data warehouse (or lakehouse) to power data applications in a universal data language — this is not an export of a dataset that is powered to do something different,” Dean added.
Plan ahead With this round of funding, which was led by global venture capital firm NEA, Snowplow will focus on growing its footprint, both in its home market and abroad. As part of this, the company plans to expand its team and bring support for ever-increasing data types.
“There are a number of unique industry use cases that benefit from this type of data approach … We’ll be making further announcements on the roadmap in the fall,” the CEO said.
"
|
3,437 | 2,022 |
"Databricks summit: New integrations announced for enterprise users | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/databricks-summit-new-integrations-announced-for-enterprise-users"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Databricks summit: New integrations announced for enterprise users Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
After Snowflake and MongoDB’s product fireworks a couple of weeks ago, Databricks joined the party. At its ongoing Data and AI summit , the San Francisco-headquartered data lakehouse company has made a number of notable announcements , starting from Project Lightspeed aimed at improving streaming data processing to a more open Delta Lake and improved MLFlow.
However, the summit hasn’t just been about platform improvements from Databricks. Multiple players forming a part of the modern data stack have also announced new and improved integrations to help their customers get the most out of their lakehouse investment.
Below is a rundown of key new integrations.
Monte Carlo Data observability provider Monte Carlo first announced quick, no-code integrations to help enterprise users get end-to-end data observability for Databricks data pipelines. The company said it will let enterprises plug Monte Carlo into Databricks meta-stores, unity catalog or delta lake and use them to gain out-of-the-box visibility into data freshness, volume, distribution, schema and lineage – and the anomalies associated with them. This way, teams will be able to quickly detect structured and unstructured data incidents, starting from ingestion in Databricks down to the business intelligence (BI) layer, and resolve them well before they affect downstream users.
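As a rough illustration of the kind of freshness and volume checks an observability integration automates (a generic sketch, not Monte Carlo’s product), the thresholds and table metadata below are hypothetical:

```python
from datetime import datetime, timedelta, timezone

# Generic sketch of two table-level health checks an observability tool automates.
# Thresholds and table metadata here are hypothetical.
def check_freshness(last_loaded_at: datetime, max_lag: timedelta) -> bool:
    """Alert if the table has not received new data within the expected window."""
    return datetime.now(timezone.utc) - last_loaded_at > max_lag

def check_volume(todays_rows: int, trailing_avg: float, tolerance: float = 0.5) -> bool:
    """Alert if today's row count deviates more than `tolerance` from the trailing average."""
    return abs(todays_rows - trailing_avg) > tolerance * trailing_avg

last_load = datetime.now(timezone.utc) - timedelta(hours=7)
if check_freshness(last_load, max_lag=timedelta(hours=6)):
    print("freshness alert: table has not loaded in over 6 hours")
if check_volume(todays_rows=12_000, trailing_avg=48_000.0):
    print("volume alert: row count far from the trailing average")
```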
Acceldata Acceldata, Monte Carlo’s competitor in the data observability space, also announced an integration for end-to-end data pipeline visibility. This solution will track pipeline quality inside and outside Databricks to flag incidents and also include performance optimization capabilities such as automated stability tracking and cost intelligence.
“Data observability offers visibility into the entire data pipeline to help customers observe the overall quality and health of their data end-to-end to help predict potential issues and prevent costly data disasters,” Rohit Choudhary, founder and CEO of Acceldata, said. “With this integration, Acceldata data observability cloud (also) offers customers an added layer of cost intelligence to help detect and decrease inefficiencies to optimize performance and maximize their Databricks investment.” Decodable Data engineering company Decodable debuted a new delta lake connector to enable the ingestion of streaming data into Databricks in a simple and cost-effective way.
The current process of ingesting streaming data involves batching and is cost-prohibitive and complex. However, the new connector, which is now in general availability, ingests from any source in any cloud at the bronze and silver stages of the Databricks medallion data layer architecture , enabling application developers and data engineers to quickly connect streaming data. It is available for use with a free Decodable developer account and can unlock a host of powerful AI and analytics capabilities on Databricks.
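For readers unfamiliar with the bronze stage of the medallion architecture, the generic PySpark sketch below shows streaming records landing in a raw Delta table. It illustrates the pattern on a Delta-enabled Spark environment such as Databricks, not Decodable’s connector; the source and paths are placeholders.

```python
from pyspark.sql import SparkSession

# Generic sketch of streaming ingestion into a "bronze" (raw) Delta table.
# The rate source is a stand-in for a real stream; paths are placeholders, and
# the "delta" format assumes Delta Lake is available (as it is on Databricks).
spark = SparkSession.builder.appName("bronze-ingest").getOrCreate()

events = (
    spark.readStream
    .format("rate")               # placeholder source emitting (timestamp, value) rows
    .option("rowsPerSecond", 10)
    .load()
)

query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/bronze_events")
    .outputMode("append")
    .start("/tmp/delta/bronze_events")   # bronze layer: data as it arrived
)

query.awaitTermination(30)  # run briefly for the sketch, then stop
query.stop()
```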
Sigma Computing San Francisco-based Sigma Computing announced an integration under which its no-code spreadsheet-like analytics interface will be available on Databricks. This will enable any business user, who is working with a company leveraging Databricks, to analyze cloud-scale, live data at a granular level. The integration just requires one-time deployment and allows users to build sophisticated pivot tables, create and iterate dashboards, and aggregate or free-drill into dynamic, live data in the data lakehouse.
The Databricks Data + AI Summit concludes today, June 30.
"
|
3,438 | 2,022 |
"Stake – Cash Back and Banking Services for Renters – Raises $12 Million in Series A Funding | VentureBeat"
|
"https://venturebeat.com/business/stake-cash-back-and-banking-services-for-renters-raises-12-million-in-series-a-funding"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release Stake – Cash Back and Banking Services for Renters – Raises $12 Million in Series A Funding Share on Facebook Share on X Share on LinkedIn RET Ventures’ first ESG fund, together with a coalition of mission and impact investors, leads investment to align owner and renter incentives.
NEW YORK–(BUSINESS WIRE)–June 30, 2022– Stake , which provides Cash Back and banking services to renters, announced today the completion of its $12 million Series A financing round. With Stake, renters earn Cash Back when they take positive actions, like signing a lease and paying rent. Owners save money with every renter action.
The round was led by RET Ventures , which selected Stake as one of the first investments for the new RET Ventures ESG Fund (the “Housing Impact Fund”). Participation also included: Enterprise Community Partners , which, since 1982, has helped create or preserve 873,000 homes; Hometeam Ventures ; Operator Stack ; and Second Century Ventures , the investment arm of the National Association of Realtors. Existing investors Shadow Ventures and Olive Tree Ventures also participated in the round.
Today more than 44 million American households pay rent every month, and from 1985 to 2020, median rent prices increased by nearly 150% despite income growing just 35%. Leveraging behavioral science, Stake was founded in 2018 to empower renters by providing them with Cash Back on their rent as well as no-fee banking services to build savings. Stake also mitigates pain points for building owners, increasing lease-ups, reducing economic vacancy, improving maintenance, and increasing ancillary revenue.
Using Stake, property managers receive a 130% return on every dollar spent. Renters earn an average of 4% Cash Back on their rent each month. Across the $385 million in annual leases connected to the platform, 65% of renters have more money in their Stake account than any other banking account. In the past year, the number of residences that offer Cash Back with Stake has grown by 10x.
“Renters don’t need more debt or loans,” noted Rowland Hobbs, Co-Founder and CEO of Stake. “What renters need is money to help with everyday essentials and to establish long-term savings. With Stake, we have reimagined the classic ‘rainy day fund’ for renters to build the sort of wealth traditionally associated with home ownership. Now, their largest expense is also their largest source of savings.” The new funding round will enable Stake to continue building out its financial infrastructure and suite of solutions that address difficult issues for renters and property owners alike.
“Stake’s approach to housing affordability is perfectly aligned with the mission of our ESG-centric fund,” said John Helm, partner at RET Ventures, who will join Stake’s board. “While a slew of platforms offer renters innovative payment options, they are all credit or debt-based. They ultimately encourage dangerous behaviors as part of their proposed solution. Stake flips the script on this model by offering a risk-free, renter-centric, efficient, and easy-to-use pathway toward building wealth.” “Unlike homeowners, renters rarely reap financial benefits from paying for their homes – and families who rent tell us they could use a little extra cash each month. This is why Stake’s goal of empowering more economically resilient renters through cash back and no-fee banking services resonated with us,” said Enterprise Community Partners President and CEO Priscilla Almodovar. “It’s not just a good deal for renters. It makes sense for landlords, too, who are more likely to retain residents, which in turn strengthens communities.” About Stake Stake is building the financial infrastructure for the next generation of rentals. Stake aligns incentives between renters, operators, owners, and investors, so everyone earns the Return on Rent™ they deserve. Stake’s revenue management tools outperform, returning 130% on every dollar spent. These savings return millions of dollars to renters each year in the Stake app. Thousands of renters use Stake to earn Cash Back, grow their savings, and access free and equitable banking services. Headquartered in New York City and Seattle, Stake is on a mission to empower wealthier, happier, and more resilient renters. For more information, please visit https://www.stake.rent/ About RET Ventures A leading real estate technology investment firm, RET Ventures is the first industry-backed, early-stage venture fund strategically focused on building cutting-edge “rent tech” – technology for multifamily and single-family rental real estate. RET invests out of core venture funds and a Housing Impact Fund, backing companies that address a range of pain points for real estate operators. Through its deep expertise and connections, RET provides solutions to issues ranging from housing affordability and sustainability to risk management and operational efficiency. The firm’s Strategic Investors include some of the largest REITs and private real estate owner-operators and managers, who control approximately 2.4 million rental units worth $600 billion. For more information, please visit www.ret.vc About Enterprise Community Partners Enterprise is a national nonprofit that exists to make a good home possible for the millions of families without one. We support community development organizations on the ground, aggregate and invest capital for impact, advance housing policy at every level of government, and build and manage communities ourselves. Since 1982, we have invested $54 billion and created 873,000 homes across all 50 states, the District of Columbia and Puerto Rico – all to make home and community places of pride, power and belonging. Join us at enterprisecommunity.org.
*Stake is a financial technology company and is not a bank. Banking services provided by Blue Ridge Bank N.A; Member FDIC. The Stake Visa® Debit Card is issued by Blue Ridge Bank N.A. pursuant to a license from Visa U.S.A. Inc. and may be used everywhere Visa debit cards are accepted.
View source version on businesswire.com: https://www.businesswire.com/news/home/20220629006001/en/ MEDIA Rachel Sales Enunciate [email protected]
"
|
3,439 | 2,022 |
"SPiCE VC Announces the Addition of Rico Pang to its Board of Directors in an Advisory Role | VentureBeat"
|
"https://venturebeat.com/business/spice-vc-announces-the-addition-of-rico-pang-to-its-board-of-directors-in-an-advisory-role"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release SPiCE VC Announces the Addition of Rico Pang to its Board of Directors in an Advisory Role Share on Facebook Share on X Share on LinkedIn The Crypto and Blockchain Entrepreneur and Venture Builder Will Employ His Deep Domain Expertise and Help SPiCE II Build Global Connections as the VC Identifies and Invests In the Next Generation of Unicorns and Decacorns MIAMI & SINGAPORE & DUBAI, United Arab Emirates–(BUSINESS WIRE)–June 30, 2022– SPiCE VC , the leading venture capital (VC) firm in the Blockchain & Tokenization ecosystem, announced today that Rico Pang, serial entrepreneur and co-founder and CEO of Sanctum Global Ventures, has joined the SPiCE VC team as an advisor to its Board of Directors. Mr. Pang’s intimate knowledge of the Blockchain & Tokenization ecosystem as part of the greater global digital economy, coupled with his passion for philanthropy and corporate social responsibility, makes him a valuable addition to an already stellar team that is both diverse in geography and in thought.
“We couldn’t be more thrilled to welcome Rico as a valued advisor to our growing board. From the second I was introduced to Rico it became immediately clear that we are tightly aligned on our appreciation for the critical role the Blockchain & Tokenization ecosystem will continue to play in all aspects of our lives. From finance to education, Blockchain is disrupting everything and is on a rapid pace to becoming a multi-trillion-dollar market,” said Tal Elyashiv, co-founder & managing partner of SPiCE VC. “SPiCE VC’s role in identifying and investing in the innovative companies building the ecosystem of the digital future in a Web 3 world cannot be overstated. Rico is well-positioned to help us achieve our immediate and long-term goals.” Mr. Pang has over 20 years of experience in emerging markets across supply chain management to fintech and e-commerce. He is founder and Secretariat Chairman of the Deep Tech Forum and World Blockchain Centre, while also continuing his critical role of co-founder of Sanctum Global Ventures (SGV) together with Dunstan Teo – a known Bitcoin pioneer. SGV is also an incubator and accelerator that actively participates in the development of the global digital economy, Web3 and metaverse.
As an avid tech investor, Mr. Pang believes in the power of tech as a way to develop talent, reduce inequalities, and subsequently help reduce poverty.
“I am honored to join SPiCE’s outstanding team and use my expertise to support their mission and vision for growth,” said Rico Pang, co-founder & CEO of Sanctum Global Ventures and now SPiCE VC Board of Directors advisor. “I plan to work hand-in-hand with the blockchain & tokenization pioneers at SPiCE VC to not only expedite diversified access into the digital ecosystem globally, but to also help others use that access for the betterment of our local and global communities.” SPiCE VC welcomes Rico Pang as a strategic advisor just weeks after the firm officially closed its first and notably successful fund, SPiCE I, and introduced SPiCE II, a new fund offering investors exposure to the unprecedented growth opportunities within the expanding digital economy. Following the proven approach of SPiCE I, SPiCE II’s focus is identifying innovative companies that stand to benefit the most from the mass proliferation of Blockchain technologies across many industries. A tokenized version of SPiCE II is expected to launch in a few months.
To learn more about SPiCE VC, visit https://spicevc.com/.
ABOUT SPiCE VC: SPiCE VC is a Venture Capital firm providing investors exposure to the massive growth of the Blockchain/Tokenization ecosystem. SPiCE invests globally in platforms and ecosystem providers enabling access to capital markets, banking, real estate, and other industries enhanced through Blockchain technologies. SPiCE focuses on companies who stand to benefit the most from the massive growth of the industry. Combining institutional know-how, hands-on management, entrepreneurial innovation and professional investment experience SPiCE’s management team has been involved in hundreds of tech funding rounds totaling billions of dollars; as entrepreneurs, investors, and executives. SPiCE is located in the US, Switzerland, Singapore and Israel. To learn more about SPiCE VC visit www.spicevc.com or email Tal Elyashiv, Founder and Managing Partner, at [email protected].
View source version on businesswire.com: https://www.businesswire.com/news/home/20220630005309/en/ Liz Whelan [email protected] (312) 315-0160
"
|
3,440 | 2,022 |
"Bushel® Launches Digital Payment Network for US Agriculture | VentureBeat"
|
"https://venturebeat.com/business/bushel-launches-digital-payment-network-for-us-agriculture"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release Bushel® Launches Digital Payment Network for US Agriculture Share on Facebook Share on X Share on LinkedIn New products are added to the Bushel platform to address the issues in the industry of payment by paper check.
Bushel’s network of more than 40% of grain origination volume in the United States will be able to quickly provide digital payments to their customers.
Additional products available for those not on the Bushel network wishing to add payments capabilities to their existing systems, including ag input providers and any companies taking payments from farmers.
Enabling digital payments is the next phase of building the digital infrastructure for agriculture and aligns with Bushel’s mission to strengthen relationships between agribusiness and farmers.
FARGO, N.D.–(BUSINESS WIRE)–June 29, 2022– Bushel , an independently-owned software technology company focused on developing digital tools for the agricultural supply chain, announced the availability of a payment and money facilitation network. The product launch, announced during Bushel’s first-ever customer conference, will address the challenge the industry faces of facilitating payments for nearly 90% of a $200 billion industry through paper checks today.
This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20220629005046/en/ Bushel, an independently-owned software technology company for growers, grain buyers, ag retailers, protein producers and food companies, announces the availability of a payment and money facilitation network. (Photo: Business Wire) This addition to the Bushel platform allows agricultural producers and agribusinesses to conveniently move money in real time across the agriculture supply chain. Through facilitating digital transactions, Bushel continues to deliver on its promise of strengthening relationships by building and standardizing the digital infrastructure for the agriculture industry. Through its network of grain companies using the Bushel platform, Bushel can safely and efficiently deploy this payment engine for 40% of grain origination in the U.S.
Bushel’s new product suite is created specifically for the complexity and scale of agribusiness.
○ Bushel Payments™ is the preferred money movement facilitator between growers and agribusinesses already on the Bushel platform.
○ Bushel Wallet™ is the first-ever digital wallet created specifically for the complexity and scale of agribusiness and available for farmers.
○ Bushel Wallet Link is an API that allows any agribusiness to embed payments in their application or web environment and connect to Bushel Wallet’s network to move money (a hypothetical sketch of such an embedded call follows this list).
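To show what embedding payments via an API can mean in practice, here is a purely hypothetical sketch using Python’s requests library. The endpoint, payload fields and token are invented for illustration only and are not Bushel’s actual API.

```python
import requests

# Hypothetical embedded-payment request from an agribusiness back end.
# Endpoint, payload fields and auth scheme are invented for illustration only.
API_BASE = "https://api.example-wallet.test/v1"
payload = {
    "payer_account_id": "grower-001",
    "payee_account_id": "elevator-042",
    "amount_cents": 125_000,           # e.g., a $1,250 grain settlement
    "memo": "Settlement for ticket #8841",
}

response = requests.post(
    f"{API_BASE}/payments",
    json=payload,
    headers={"Authorization": "Bearer <token>"},
    timeout=10,
)
response.raise_for_status()
print(response.json())  # would include a payment id and status to surface in the app
```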
One of the first agribusinesses to pilot Bushel Wallet Link will be Consolidated Grain and Barge Co. (CGB) who will be embedding a payments feature into its user experience to allow for simple and easy electronic settlements to their growers.
“To keep up with consumer demands for a more interactive experience, we knew we needed to provide a way to more easily transact with our customer base in a seamless and timely manner,” Charlie Laird, Director of IT Strategy of CGB said. “Bushel has shown it has the forward-thinking mindset to be progressive in the digital transformation of agriculture here in the U.S. and abroad.” Additionally, Ag Valley Cooperative, based in Nebraska, will be piloting Bushel Payments for its growers. Their farmers can access Payments through Ag Valley’s app powered by Bushel, complete the signup and verification process, and immediately begin to transact.
“As we see more opportunities to engage and communicate with our farmers, it became clear that we needed a way to streamline the payment process,” Jeff Krejdl, CEO of Ag Valley Cooperative said. “The onboarding was easy. Once we start working with growers, this is going to be a very cool option to offer to them. Harvest starts in mid-July, so we will start looking at using it for settlement payments then.” Bushel has spent the past five years focusing on building agriculture’s digital infrastructure. With the connections and integrations in place, Bushel is evolving from simply just moving information into safely and securely allowing the facilitation of digital payments, specifically in the agriculture industry. This continues Bushel’s mission to enable, rather than disrupt, grain companies, input providers, farmers, and the supply chain, to be more efficient and effective businesses into the future.
“From early on in our work to digitize agriculture, it became apparent a big piece of this was how farmers and agribusinesses did business with each other. They need the tools to become more financially efficient and clearer on the state of their operations,” Jake Joraanstad, CEO of Bushel said. “The feedback from our customers will be crucial as we continue to innovate our FinTech products for agriculture.” What This Means to Farmers Instead of waiting for a check in the mail and having to turn around directly and deposit the check into their account, farmers can see the money directly into their account.
Farmers can link up to six U.S. bank accounts for transfer purposes or immediately access their Bushel Wallet balance with a Business Debit Card wherever Visa ® is accepted.
Farmers can instantaneously send or request payments for grain deliveries or input purchases within the Wallet ecosystem anytime, anywhere from their mobile device. The onboarding process to register and create a Bushel Wallet account takes less than three minutes.
What This Means to Agribusinesses Agribusinesses can eliminate the payment onboarding process and maintain ACH payment instructions by leveraging the Bushel Payments network, where growers are already enrolled. They also see reduced receivables with faster time to receive funds: their customers have a concise view of what’s due and can pay invoices any time – day or night – via their Bushel-powered app.
With in-app notification of payments due and made, agribusinesses can gently remind growers of amounts due rather than mailing paper statements that are inefficient and take time and money.
Agribusinesses can embed Bushel Wallet Link into other software applications they use with their customer base beyond grain merchandising. Bushel Wallet Link can embed into existing customer flows via an API connection, bringing the power of payments while maintaining their brand end-to-end.
To ensure financial security for all parties, Bushel Wallet has a fully routable account and is FDIC insured. Bushel complies with SOC 2 standards and processes debit card transactions over PCI-certified networks. Currently, the Bushel digital payment products are available only in the United States.
To learn more, visit bushelwallet.com. About Bushel Bushel is an independently owned software company and leading provider of software technology solutions for growers, grain buyers, ag retailers, protein producers and food companies, headquartered in Fargo, N.D. Since launching its platform in 2017, Bushel has grown rapidly, now powering nearly 2,000 grain facilities across the U.S. and Canada with real-time business information for their producers. Bushel's platform now reaches more than 40% of grain origination in the United States, resulting in inarguably the largest technology network effect among growers and grain buyers in the U.S. today. Bushel's product suite includes its flagship mobile app, websites, trading tools, digital payments and money facilitation, market feeds, API services, FarmLogs and a custom software division focused on agriculture. Bushel has been focused on building software since the company was founded in 2011. Data privacy is a cornerstone of Bushel's philosophy.
Read Bushel's Data Ethos here.
View source version on businesswire.com: https://www.businesswire.com/news/home/20220629005046/en/ Julia Eberhart, Public Relations & Communications Manager, [email protected], 605.690.1418
"
|
3,441 | 2,022 |
"Report: 81% of IT teams plan to invest in RPA next year | VentureBeat"
|
"https://venturebeat.com/automation/report-81-of-it-teams-plan-to-invest-in-rpa-next-year"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: 81% of IT teams plan to invest in RPA next year Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
A recent survey conducted by Robocorp gathered the perspectives of RPA users, specifically IT teams, to better understand current RPA (robotic process automation) solutions, existing roadblocks RPA users face, the ways in which individuals currently use their RPA technology, how much they’ve invested and where there is room for growth.
First and foremost, Robocorp found strong growth within the RPA industry: 67% of respondents invested in RPA technology within the past year, and 81% plan to invest in RPA in the next year. While this growth in the use of RPA technology is exciting, the survey also uncovered an apparent dissatisfaction with existing technology. Not only do 69% of respondents experience broken bots with their current RPA at least once per week, but 65% of respondents agree that they would benefit from usage-based pricing.
Adding to that dissatisfaction, the survey also revealed that many respondents aren't using the RPA technology best suited to their needs: 34% of respondents say the primary value of RPA is the ability to adapt current tech to meet evolving needs, and 22% say it is the ability to scale to meet evolving processing needs. That makes 56% of respondents who would benefit from open-source technology, yet 81% of respondents say 50% or less of their RPA tech is open source.
The results of the survey clearly highlight RPA’s increasing popularity across industries, particularly to drive company growth and success. However, the results from the survey also show that the solutions individuals are using are clearly not working. It’s time to progress to stronger, more reliable and fairly priced technology that meets consumer standards – Gen2 RPA technology.
Read the full report by Robocorp.
"
|
3,442 | 2,022 |
"Wells Fargo CIO: AI and machine learning will move financial services industry forward | VentureBeat"
|
"https://venturebeat.com/ai/wells-fargo-cio-ai-and-machine-learning-will-move-financial-services-industry-forward%ef%bf%bc"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Wells Fargo CIO: AI and machine learning will move financial services industry forward Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
It's simple: In financial services, customer data is what enables the most relevant services and advice.
But, oftentimes, people use different financial institutions based on their needs – their mortgage with one; their credit card with another; their investments, savings and checking accounts with yet another.
And in the financial industry more so than others, institutions are notoriously siloed. Largely because the industry is so competitive and highly regulated, there hasn’t been much incentive for institutions to share data, collaborate or cooperate in an ecosystem.
Customer data is deterministic (that is, relying on first-person sources), so with customers “living across multiple parties,” financial institutions aren’t able to form a precise picture of their needs, said Chintan Mehta, CIO and head of digital technology and innovation at Wells Fargo.
"Fragmented data is actually detrimental," he said. "How do we solve that as an industry as a whole?" While advocating for ways to help solve this customer data challenge, Mehta and his team also consistently incorporate artificial intelligence (AI) and machine learning (ML) initiatives to accelerate operations, streamline services, and enhance customer experiences.
“It’s not rocket science here, but the hard part is getting a good picture of a customer’s needs,” Mehta said. “How do we actually get a full customer profile?” A range of AI initiatives for financial services As the 170-year-old multinational financial services giant competes in an estimated $22.5 trillion industry representing roughly a quarter of the world economy, Mehta’s team advances efforts around smart content management, robotics and intelligent automation, distributed ledger technology, advanced AI, and quantum computing.
Mehta also leads Wells Fargo’s academia and industry research partnerships, including with the Stanford Institute for Human-Centered Artificial Intelligence (HAI), the Stanford Platform Lab, and the MIT-IBM Watson Artificial Intelligence Lab.
In its work, Mehta’s team relies on a range of AI and ML tools: traditional statistical models, deep learning networks, and logistic regression testing (used for classification and predictive analytics). They apply a variety of cloud native platforms including Google and Azure, as well as homegrown systems (based on data locality).
One technique they apply, Mehta said, is long short-term memory. This recurrent neural network uses feedback connections that can process single data points and entire sequences of data. His team applies long short-term memory in natural language processing (NLP) and spoken language understanding to extract intent from phrasing. One example is in complaints management, extracting “specific targeted summaries” from complaints to determine the best courses of action and move quickly on them, Mehta explained. NLP techniques are also applied to website form requests that have more context than those in dropdown menu suggestions.
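To make that concrete, here is a minimal sketch of what an LSTM-based intent classifier can look like in PyTorch. The architecture, vocabulary size and number of intent labels are illustrative assumptions for this article, not Wells Fargo's actual model.
```python
# Minimal sketch of an LSTM-based intent classifier (illustrative only;
# the architecture, label set and sizes are assumptions, not Wells Fargo's model).
import torch
import torch.nn as nn

class IntentClassifier(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, hidden_dim=256, num_intents=8):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_intents)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)          # (batch, seq_len, embed_dim)
        _, (last_hidden, _) = self.lstm(embedded)     # last_hidden: (1, batch, hidden_dim)
        return self.head(last_hidden.squeeze(0))      # (batch, num_intents) logits

# Toy usage: a batch of two tokenized complaints, padded to length 12.
model = IntentClassifier()
batch = torch.randint(1, 20000, (2, 12))
intent_logits = model(batch)
print(intent_logits.shape)  # torch.Size([2, 8])
```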
Traditional deep learning techniques like feedforward neural networks – where information moves in only one direction, with no feedback loops – are applied for basic image and character recognition. Meanwhile, deep learning techniques such as convolutional neural networks – specifically designed to process pixel data – are utilized to analyze documents, Mehta said.
The latter helps verify certain aspects of submitted scanned documents and analyze images in those documents to ensure that they're complete and contain expected attributes, contents and comments. (For example, in a specific type of document such as a checking account statement, six attributes are expected based on provided inputs, but only four are detected, flagging the document for attention.) All told, this helps to streamline and accelerate various processes, Mehta said.
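The completeness check itself can be as simple as comparing the attributes a vision model detected against the set expected for that document type. The sketch below uses hypothetical document types and attribute names; it is not Wells Fargo's schema.
```python
# Illustrative sketch of the completeness check described above: compare the
# attributes a vision model detected against those expected for a document type.
# Names and attribute lists are hypothetical, not Wells Fargo's schema.
EXPECTED_ATTRIBUTES = {
    "checking_statement": {"account_number", "statement_period", "opening_balance",
                           "closing_balance", "transaction_table", "bank_logo"},
}

def review_document(doc_type: str, detected: set[str]) -> dict:
    expected = EXPECTED_ATTRIBUTES[doc_type]
    missing = expected - detected
    return {"complete": not missing, "missing": sorted(missing)}

result = review_document(
    "checking_statement",
    {"account_number", "statement_period", "transaction_table", "bank_logo"},
)
print(result)  # {'complete': False, 'missing': ['closing_balance', 'opening_balance']}
```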
For upcoming initiatives, the team is also leveraging cloud-native and serverless components, and applying transformer neural network models – which are used to process sequential data including natural language text, genome sequences, sound signals and time series data. Mehta also plans to increasingly incorporate random forest ML pipelines, a supervised learning technique that uses multiple decision trees for classification, regression, and other tasks.
“This is an area that will forward most of the financial institutions,” Mehta said.
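As a rough illustration of the random forest pipelines mentioned above, here is a generic scikit-learn sketch; the synthetic data and parameters are placeholders rather than anything Wells Fargo uses.
```python
# A generic scikit-learn sketch of a random forest pipeline of the kind the
# article mentions; the features and data here are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([
    ("scale", StandardScaler()),   # not required for trees, but common in shared feature pipelines
    ("forest", RandomForestClassifier(n_estimators=200, random_state=0)),
])
pipeline.fit(X_train, y_train)
print(f"holdout accuracy: {pipeline.score(X_test, y_test):.3f}")
```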
Optimizing, accelerating, amidst regulation One significant challenge Mehta and his team face is accelerating the deployment of AI and ML in a highly regulated industry.
“If you’re in a nonregulated industry, the time it takes to have a data set of features and then build a model on top of it, and deploy it into production is pretty short, relatively speaking,” Mehta said.
Whereas in a regulated industry, every stage requires assessment of external risks and internal validation.
“We lean more towards statistical models when we can,” Mehta said, “and when we build out large neural network-based solutions, it goes through a significant amount of scrutiny.” He said that three independent groups review models and challenge them – a frontline independent risk group, a model risk governance group, and an audit group. These groups build separate models to create independent sources of data; apply post hoc processes to analyze the results of experimental data; validate that data sets and models are at “the right range”; and apply techniques to challenge them.
On average, Mehta’s team deploys 50 to 60 models a year, always observing the champion-challenger framework. This involves continuously monitoring and comparing multiple competing strategies in a production environment and evaluating their performance over time. The technique helps to determine which model produces the best results (the “champion”) and the runner-up option (the “challenger”).
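Conceptually, champion-challenger evaluation means shadow-scoring the same traffic with both models and comparing their metrics over time. The toy example below is purely illustrative; the models, data and accuracy metric are invented, not Wells Fargo code.
```python
# Generic sketch of champion-challenger evaluation: both models score the same
# production traffic, only the champion's decision is acted on, and their
# metrics are compared over time. Entirely illustrative.
import random

def champion(x): return x > 0.5          # stand-in for the incumbent model
def challenger(x): return x > 0.45       # stand-in for the candidate model

def evaluate(stream):
    scores = {"champion": 0, "challenger": 0}
    for features, outcome in stream:
        if champion(features) == outcome:
            scores["champion"] += 1
        if challenger(features) == outcome:   # shadow-scored, never acted on
            scores["challenger"] += 1
    return scores

stream = [(x := random.random(), x > 0.48) for _ in range(10_000)]
print(evaluate(stream))
```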
The company always has something in production, Mehta said, but the goal is to continuously reduce production time. His department has already made strides in that respect, having reduced the AI modeling process – discovery to market – from 50-plus weeks to 20 weeks.
It’s a question of “How can you optimize that whole end to end flow and automate as much as possible?” Mehta said. “It’s not about a specific AI model. It’s generally speaking, ‘How much muscle memory do we have to bring these things to market and add value?’” He added that “the value of ML specifically is going to be around use cases that we haven’t even thought of yet.” Encouraging financial services industry dialogue As a whole, the industry will also greatly benefit by bridging the digital expanse among players big and small. Collaboration, Mehta said, can help foster “intelligent insights” and bring the industry to its next level of interaction with customers.
This can be achieved, Mehta said, through such capabilities as secure multiparty computation and zero-knowledge proof platforms, which don't exist in the industry today.
Secure multiparty computation is a cryptographic technique that distributes a computation across multiple parties while keeping each party's inputs private, so no individual party can see another party's data. Similarly, a zero-knowledge proof is a cryptographic method by which one party can prove to another that a given statement is true without revealing any additional (potentially sensitive) information.
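A toy way to see the multiparty idea is additive secret sharing: each institution splits its private value into random shares, and only the aggregate is ever reconstructed. The sketch below is a teaching example under simplified assumptions, not production cryptography and not any specific platform's API.
```python
# Toy additive secret sharing: each bank splits its value into random shares,
# parties sum the shares they hold, and only the aggregate is revealed.
# Teaching example only; not production-grade secure multiparty computation.
import secrets

MODULUS = 2**61 - 1

def share(value, n_parties=3):
    shares = [secrets.randbelow(MODULUS) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares

bank_totals = [1200, 875, 2430]                 # each bank's private value
all_shares = [share(v) for v in bank_totals]

# Each party sums the shares it received; no party sees any bank's raw value.
partial_sums = [sum(col) % MODULUS for col in zip(*all_shares)]
print(sum(partial_sums) % MODULUS)              # 4505, the joint total
```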
Building out such capabilities will enable institutions to collaborate and share information safely without having privacy or data loss issues, while at the same time competing in an ecosystem appropriately, Mehta said.
Within five years or so, he predicted, the industry will have a firmer hypothesis around collaboration and the use of such advanced tools.
Similarly, Wells Fargo maintains an ongoing dialogue with regulators. As a positive sign, Mehta has recently received external requests from regulators around AI/ML processes and techniques – something that rarely, if ever, occurred in the past. This could be critical, as institutions are “pretty heterogenous” in their use of tools for building models, and the process “could be more industrialized,” Mehta pointed out.
"I think there's a lot more incentive, interest and appetite on the part of regulators to understand this a little better so that they can think through this and engage with it more," Mehta said. "This is evolving fast, and they need to evolve along with it."
"
|
3,443 | 2,022 |
"Siemens and Nvidia partner to enable digital twin for the industrial metaverse | VentureBeat"
|
"https://venturebeat.com/ai/siemens-and-nvidia-partner-to-enable-digital-twin-for-the-industrial-metaverse"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Siemens and Nvidia partner to enable digital twin for the industrial metaverse Share on Facebook Share on X Share on LinkedIn To support the transformation to software-defined vehicles, players from the tech industry and the auto industry must collaborate.
The basic idea behind digital twins is to model aspects of the physical world in software.
According to a forecast from Fortune Business Insights, the market for digital twin technology and services will generate an estimated $8.9 billion in revenue in 2022, growing to $96 billion by 2029.
Industrial technology giant Siemens has long been modeling different elements of the real world in software, and it is now looking to advance its approach to enabling an industrial metaverse. To support those efforts, Siemens today detailed an extended partnership with Nvidia to enable artificial intelligence (AI) digital twin capabilities.
The partnership will see Siemens industrial design and development technology integrated with the Nvidia Omniverse platform , which enables users to create photorealistic virtual simulations.
"The digital twin is the virtual representation of the real product and the value of that digital twin is how closely we can bring the virtual world and the real world together," Tony Hemmelgarn, president and CEO of Siemens, explained in a press briefing.
Nvidia Omniverse will help Siemens support the industrial metaverse Hemmelgarn said that bringing Siemens technology together with Nvidia Omniverse will allow industrial organizations to make decisions faster.
One area where Nvidia Omniverse and Siemens will be able to help accelerate the decision-making process of industrial companies is with the elimination of physical prototypes. Hemmelgarn noted that in the past, automotive manufacturers often had to build costly prototypes in order to develop new vehicles.
In recent years, there has been a movement toward virtualization for automotive design, though it has typically involved very specialized technology running in a specific location, often referred to as a 'cave.' Hemmelgarn said that with Nvidia Omniverse, instead of automotive vendors needing a cave, the capability to visualize a new design can be opened up to a much wider audience. Omniverse doesn't require a cave and can run in any number of different locations, enabling a manufacturer to collaborate more quickly on an industrial effort.
“Siemens is number one in industrial automation and industrial software and because of this leadership position, we’re able to provide our customers with the most accurate, complete digital twin,” Hemmelgarn said. “However, with Nvidia, we can create this industrial Metaverse jointly taking the manufacturing process and the industrial automation process to a much more realistic level, leveraging AI capabilities.” Nvidia is no stranger to partnership and actively works with vendors across multiple sectors.
Rev Lebaredian , vice president of the Omniverse and simulation technology at Nvidia, commented during the press briefing that he’s particularly excited about the Siemens partnership.
"Siemens excels in the intersection of information technology and operational technology, and that's something that we don't do," Lebaredian said. "There are things that we do, especially in the AI realm, and for real-time, that nobody else can do, and so the combination of these is truly unique." Photorealism is the key to bringing digital twins to life Hemmelgarn noted that the digital twin is not a new concept, but it has changed in recent years. In his view, what has changed with digital twin technology is the comprehensive nature of the data that the digital twin encompasses and provides.
“The value of the digital twin is how closely your virtual world can represent your physical world,” Hemmelgarn said.
While Siemens had been building its own digital twins, Nvidia’s Omniverse takes the concept to a different level, thanks in no small part to its photorealism for images. For Hemmelgarn, integrating with Nvidia is all about making digital twins more lifelike, with real-time capabilities.
The idea of photorealism in the metaverse should not be relegated to superficial uses like entertainment, according to Lebaredian. He noted that in the modern era of AI, photorealism is critical for serious applications.
"One of the things that's clear to us is that in order to build and create AI models we need to supply them with data, that's essentially an encoding of the experience of the world around them," Lebaredian said. "The only way we're going to create truly intelligent AI is by first creating data that matches our real-world accurately, and a big part of that is how that world looks."
"
|
3,444 | 2,022 |
"Report: 37% of ML leaders say they don't have the data needed to improve model performance | VentureBeat"
|
"https://venturebeat.com/ai/report-37-of-ml-leaders-say-they-dont-have-the-data-needed-to-improve-model-performance"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: 37% of ML leaders say they don’t have the data needed to improve model performance Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
A new report by Scale AI uncovers what’s working and what’s not working with AI implementation, and the best practices for ML teams to move from just testing to real-world deployment. The report explores every stage of the ML lifecycle – from data collection and annotation to model development, deployment, and monitoring – in order to understand where AI innovation is being bottlenecked, where breakdowns occur, and what approaches are helping companies find success.
The report’s goal is to continue to shed light on the realities of what it takes to unlock the full potential of AI for every business and help empower organizations and ML practitioners to clear their current hurdles, learn and implement best practices, and ultimately use AI as a strategic advantage.
For ML practitioners, data quality is one of the most important factors in their success, and according to respondents, it's also the most difficult challenge to overcome. In this study, more than one-third (37%) of all respondents said they do not have the variety of data they need to improve model performance. Variety is not the only gap; quality is also an issue: only 9% of respondents indicated their training data is free from noise, bias and gaps.
Most teams, regardless of industry or level of AI advancement, face similar challenges with data quality and variety. Scale’s data suggests that working closely with annotation partners can help ML teams overcome challenges in data curation and annotation quality, accelerating model deployment.
ML teams that are not at all engaged with annotation partners are the most likely to take greater than three months to get annotated data.
This survey was conducted online within the United States by Scale AI from March 31, 2022, to April 12, 2022. More than 1,300 ML practitioners, including those from Meta, Amazon, Spotify and more, were surveyed for the report.
Read the full report by Scale AI.
"
|
3,445 | 2,022 |
"How John Deere grew data seeds into an AI powerhouse | VentureBeat"
|
"https://venturebeat.com/ai/how-john-deere-grew-data-seeds-into-an-ai-powerhouse"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How John Deere grew data seeds into an AI powerhouse Share on Facebook Share on X Share on LinkedIn photo courtesy of John Deere Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
During CES 2022 in January, John Deere debuted a fully autonomous tractor, powered by artificial intelligence, that is ready for large-scale production.
According to a press release, the tractor has six pairs of stereo cameras that capture images and pass them through a deep neural network, which classifies each pixel in approximately 100 milliseconds and determines whether the machine continues to move or stops, depending on whether an obstacle is detected.
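Conceptually, per-pixel classification feeding a stop-or-continue decision can be sketched in a few lines of PyTorch. The tiny network, class labels and threshold below are invented for illustration and bear no relation to Deere's actual perception stack.
```python
# Toy illustration of per-pixel classification driving a stop/continue decision.
# The network, classes and threshold are invented and are not John Deere's system.
import torch
import torch.nn as nn

segmenter = nn.Sequential(                      # tiny fully convolutional net
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 2, 1),                        # 2 classes: 0 = clear, 1 = obstacle
)

frame = torch.rand(1, 3, 120, 160)              # one RGB frame from a stereo pair
with torch.no_grad():
    per_pixel_class = segmenter(frame).argmax(dim=1)   # (1, 120, 160) class labels

obstacle_fraction = (per_pixel_class == 1).float().mean().item()
print("STOP" if obstacle_fraction > 0.02 else "CONTINUE", obstacle_fraction)
```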
And in March, the Iowa-based company launched See & Spray Ultimate , a precision-targeted herbicide spray technology designed by John Deere’s fully owned subsidiary Blue River Technology. Cameras and processors use computer vision and machine learning to detect weeds from crop plants. There is one camera mounted every one meter across the width of a 120-foot carbon-fiber truss-style boom, or 36 cameras scanning more than 2,100 square feet at once.
But John Deere's status as a leader in AI innovation did not come out of nowhere. In fact, the agricultural machinery company has been planting and growing data seeds for over two decades. Over the past 10 to 15 years, John Deere has invested heavily in developing a data platform and machine connectivity, as well as GPS-based guidance, said Julian Sanchez, director of emerging technology at John Deere.
"Those three pieces are important to the AI conversation, because implementing real AI solutions is in large part a data game," he said. "How do you collect the data? How do you transfer the data? How do you train the data? How do you deploy the data?" These days, the company has been enjoying the fruit of its AI labors, with more harvests to come.
John Deere’s long journey towards AI John Deere’s efforts in developing artificial intelligence solutions are part of a larger trend across the agricultural landscape.
Spending on agricultural AI technology and solutions is predicted to grow from $1 billion in 2020 to $4 billion in 2026, according to Markets&Markets.
The company’s journey towards AI began in the mid-nineties, when a small group of innovative engineers were split off from John Deere’s product lines, such as the harvesting combine group or the tractor group, and were told to move to Des Moines, Iowa to work on a coming new wave of technology around GPS.
According to Sanchez, a GPS-based steering system, released in 1999, was a turning point for tractor accuracy at John Deere. “The economics of that accuracy are easy to understand because you overlap less,” he said. “What sold the farmers on it, though, is that they could monitor other parts of the job rather than whether they stayed in a straight line – that was the big unlock. We’ve been building on that ever since.” Moving towards AI opportunities The next “aha” moment, Sanchez explained, was when John Deere tagged a geospatial location to every sensor on its vehicles. “Every kind of agronomic work, whether it’s putting a seed in the ground or harvesting a plant or applying herbicides, has a sensor associated with it, so we know what is working well in the field and what is not,” he explained.
That opened up the whole idea of geospatial maps, which John Deere immediately started developing in the early 2000s. But the data transfer was clunky, Sanchez said: “They’re recorded on the machines, and then you had to go in with a USB drive and gather all of those and take them back to the farm and upload them on a PC.” As a result, in 2010, John Deere realized that every large agricultural vehicle out of the factory should come with a cellular-enabled telematics box. “We started removing that friction of having to move the data from the vehicle to somewhere else to make sure the data continually moves off the vehicle,” he said.
The 2010s brought the mobile and cloud revolutions, which accelerated the ability to innovate on digital tools. By 2016, Moore’s Law (the principle that the speed and capability of computers can be expected to double every two years) brought a resurgence in the opportunities of what could be done with AI. At the time, John Deere had several small teams that had already been working on robotics concepts for at least 10 years. “We had been working with some of the top robotics universities in the country,” Sanchez said. “So we could essentially pour gasoline on our evolution to build on AI.” Building AI capabilities In 2017, John Deere acquired machine learning company Blue River Technologies, which has become one of the key parts of the company’s innovation efforts on AI and deep learning – looking at applications for AI on machines and other domains, including construction. “That immediately doubled or tripled the number of people working on AI,” he said. “That was a pivot point.” However, there is also a John Deere data science team, which numbers in the hundreds, that is looking at a variety of problems, he said, including “how we build models to analyze the data that has come off the machines and provide more valuable insights back to growers.” All AI initiatives at John Deere fall under the chief technology officer’s umbrella, Sanchez said, including an organization focused on autonomy and automation solutions. “That group has the largest concentration of AI talent and includes the Blue River organization,” he added. There is also an organization that manages all of the development of the company’s digital tools – cloud, front-end mobile applications, point web solutions – with a sizable data science team. “They’re the ones curating all of the data, making sure we’re looking at all that data with the intent of generating as many possible insights for growers as possible,” he explained.
Today, John Deere is “pretty laser-focused” on a half-dozen to a dozen solutions the organization believes are most important to continue to develop and eventually deliver to market, Sanchez said. Some of them already exist, like the new autonomous tractor.
But the company’s goal goes beyond one machine. “Our goal is by 2030, we want to have a fully-autonomous production system, meaning we want an autonomous combine and sprayer and tractor planter,” he said. Today, the company offers a fully autonomous tillage solution, which is one of four steps in the production cycle that allows farmers to prepare the land before planting. Over the next eight years, Sanchez says John Deere will be able to do that for planting, spraying and harvesting.
“That’s a big deal given the labor pressures in agriculture,” he said. “For decades, there have been fewer people wanting to live in rural areas, so that’s what AI helps unlock.” He added that this commitment to AI investments comes directly from John Deere’s current CEO, who was previously in charge of the big tech area of the company. “He understood the value,” he explained.
Searching for AI-driven precision at scale The agricultural industry has reached an “asymptote of value you can add by going bigger and faster,” Sanchez continued. “The opportunity for value has really pivoted to being very precise – you have to be able to see what you’re doing, whether you’re placing a seed in the ground, harvesting a kernel of corn or applying herbicides.” For example, if you planted four or five corn seeds, you would want to understand something about the current moisture of the soil, because the perfect moisture would give them the best chance to emerge from the ground as a plant in as few days as possible. You would also want to analyze the quality of the soil and put the seeds in a spot where there are more nutrients. And you’d want to make sure the seeds aren’t too close to one another, because if you do, then they start competing for those nutrients. But if you put them too far apart from each other, then you’re not optimizing the little piece of ground to plant the seeds.
“Now imagine doing that at scale, when you have to plant a hundred thousand acres over the span of two weeks,” said Sanchez. “That’s why AI already has had an impact in agriculture. That’s why we see that runway of opportunity there. Agriculture has all kinds of these perfect examples that are prime for AI, as opposed to broader, more generalized applications.” John Deere’s ‘holy grail’ AI quest John Deere remains on a quest to tackle a couple of big ‘holy grail’ ideas around AI. One of them goes back to autonomy. “To imagine a fully autonomous production system, you have to imagine a whole system where not only can these machines do the jobs in the field, but they also can figure out what field they should move to next,” Sanchez said. “And we have to figure out how they move from field to field without significant human labor.” The second is around the tremendous opportunity both for profitability as well as sustainability in agriculture, in terms of truly understanding the health of every inch of soil that is being used for agriculture. “So there’s a bigger game here, which is if you can farm in such a way that every year your soil gets healthier, then over time that allows you to really achieve that objective of doing more with less,” he explained.
But, he added, it’s really hard to measure things like nitrogen, potassium or sodium in real time in a reliable way. Today, someone goes out to the field, sticks a tube in the ground, takes a core sample, sends it to a lab and six weeks later you get a result.
“It’s sort of like the cutting edge of R&D right now – how do we measure these soil nutrient qualities in real time?” he said. “It’s really hard, no one’s cracked it. And there’s a lot of people working on it.” Key AI enablers still to come While some have criticized John Deere’s AI efforts , questioning whether its AI-powered machinery is too expensive or too complex to use, who owns the data gathered and whether workers will be replaced, Sanchez said that the reality is that finding good, dependable, skilled labor is one of the biggest challenges facing farmers today. Employment of agriculture workers, he added, is projected to only grow 1% from 2019-2029, slower than average for all occupations, while work on the farm can be very demanding during critical times of the year, requiring labor for up to 18 hours a day.
“Deere’s autonomous tractor and other advanced technology provides farmers with the flexibility to manage pressing tasks within their operation at those critical times, because the tractor can handle some of the work that they don’t have time for, or the labor to do, while they focus on jobs that still need their attention,” he said. “Farmers own their data and control who they share it with and when.” In any case, Sanchez maintains that John Deere is still only “in the second or third inning” of implementing and commercializing AI-driven solutions.
“Right now in the market we have three or four meaningful solutions that have what you could really truly call powered by AI with all of the sensing technology, delivering significant value for hundreds and thousands of customers,” he said. “But I think there are dozens and dozens more that are opportunities.” He added that what’s “fun to think about” is that two of the limiting factors to scaling AI are having reliable training data sets and having readily available computing power. The more cameras and the more sensors you have because you have AI solutions, the more data you’re collecting. “So it’s sort of a network effect where the more you grow, the more opportunity there is with your dataset,” he explained.
Whether it is 5G or the next level of connectivity, Sanchez added that latency levels may finally allow John Deere to leverage the power of cloud computing in a way that’s truly real-time – “which for us is less than half a second,” he said, adding that would take the company’s AI efforts to yet another level.
“So, not only are we at the beginning of this, but there are a couple of key, massive enablers here that I think could potentially make this a lot more exciting,” he said.
"
|
3,446 | 2,022 |
"GitHub Copilot is now public — here’s what you need to know | VentureBeat"
|
"https://venturebeat.com/ai/github-copilot-is-now-public-heres-what-you-need-to-know"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages GitHub Copilot is now public — here’s what you need to know Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
GitHub announced last week that it will be releasing Copilot, its “AI pair programmer” tool, to the public.
Copilot uses AI to provide a range of support functions, including autocompleting instructions, generating entire functions, and transforming docstrings and descriptions into functional source code.
Copilot launched as a technical preview in 2021.
Now all developers can apply for Copilot, which installs as an extension in integrated development environments (IDE) such as Visual Studio, VS Code, Neovim and JetBrains IDEs.
At the time of Copilot’s release, there was a lot of excitement around its stunning coding capabilities. But there were also concerns about how far its abilities can be trusted and whether it has a real impact on the productivity of developers. After a year and billions of lines of code, Copilot is finally ready to be in the hands of every developer.
Here’s what we know about Copilot’s effect on real programming tasks, told by its creators and developers who have used it in their day-to-day work.
How much code is written with Copilot? Behind Copilot is the transformer architecture, the kind of deep learning model used in large language models such as GPT-3 and LaMDA.
Transformers are especially good at processing sequential data such as text, software code and protein sequences. Given a prompt, a transformer model can predict the next elements of the sequence, whether it is words or computer instructions. Copilot is built on OpenAI’s Codex , a transformer that has been trained on tens of millions of code repositories. Once installed on your IDE, Copilot provides suggestions based on the existing code in your file as well as cues, such as the names of your functions and classes and the comments in your code.
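The underlying mechanic can be demonstrated with any open causal code model. The sketch below uses the Hugging Face transformers pipeline with a publicly available code model as a stand-in, since the Codex model behind Copilot is not itself downloadable; the model name is an example, not Copilot's backend.
```python
# Sketch of the general idea behind Copilot-style completion: a causal
# transformer predicts the next tokens of a code prompt. Codex itself is not
# publicly downloadable, so this uses an open model from the Hugging Face Hub;
# the model name is just an example and any causal code LM would do.
from transformers import pipeline

generator = pipeline("text-generation", model="Salesforce/codegen-350M-mono")

prompt = (
    "def is_palindrome(s: str) -> bool:\n"
    "    \"\"\"Return True if s reads the same forwards and backwards.\"\"\"\n"
)
completion = generator(prompt, max_new_tokens=48, do_sample=False)
print(completion[0]["generated_text"])
```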
It is worth noting that Copilot does not think and code like a programmer.
But since it has been exposed to huge amounts of source code, it can provide very good code suggestions, especially on standard and repetitive tasks that occur frequently in writing software.
According to GitHub, more than 1.2 million developers used Copilot’s technical preview in the past 12 months. In files where Copilot is enabled, it accounts for nearly 40% of code in popular programming languages like Python.
Ryan J. Salva, VP of product at GitHub, told VentureBeat that while it is hard to say how much of the coding done with Copilot is real software development as opposed to exploring the tool, the 40% ratio seemed to hold as developers used Copilot over several days.
GitHub Copilot reduces distractions "Developers often talk about the value of staying 'in the flow' and the positive impact [Copilot] has on their productivity, creativity and overall happiness," Salva said.
According to Salva, in a survey distributed to 17,000 developers during Copilot’s technical preview, over 75% of developers self-reported that when using Copilot they “spend less mental effort on repetitive programming tasks,” “focus on more satisfying work” and “stay in the flow.” “By minimizing distractions and creating focus time, we not only get work done, we create better and less stressful days,” Salva said. “Anecdotally, we’ve heard stories of developers using Copilot to learn new coding languages, quickly generate boilerplate code for common tasks, write regular expressions or simply recall the syntax for an API without needing to consult documentation.” Developers VentureBeat spoke to confirm some of these points. Abhishek Thakur, a machine learning engineer at Hugging Face, has been using Copilot since June 2021. He has used it in developing AutoTrain, a no-code tool for training state-of-the-art machine learning models. He also uses it for machine learning competitions on Kaggle, making tutorials and participating in hackathons.
“When I am coding, I want the least distractions. In that way, Copilot has been a huge help. It has reduced the time I might spend looking for solutions on the web and instead have them at my fingertips in my favorite IDE,” Thakur said.
Many developers search for solutions to small problems on search engines and StackOverflow, a web forum where developers share code snippets for specific tasks.
“After using Copilot, I rarely visit these websites and can rather focus on coding,” Thakur said.
“If I am in doubt, I try to write comments and let Copilot help me finish the code chunks,” Thakur said. “It might not always be perfect, but it gives a good idea of how the code can be written. The rest is up to the developer to modify and reuse. Same is true for StackOverflow: It doesn’t always have the answer but it does have a lot of good answers which might suit your use case, and you can modify and reuse.” Tackling repetitive tasks with Copilot Louis Castricato, a research intern at Hugging Face and previously at EleutherAI, has used Copilot for scientific computing, where functions are often cumbersome and hard to use. One of these functions is PyTorch’s einsum, which requires unwieldy parameters about the dimensions of tensors that you want to compute.
“Einsum is very nonintuitive to people who have never used it before, and it requires you to pay close attention to the shape that your tensors are taking at every instruction within a call to your model,” Castricato said. “Copilot is particularly strong at inferring the shape of a tensor and automatically writing einsum operations, as well as writing comments explaining the choices it made in writing the einsum expression.” Snir Shechter, R&D team lead at Aporia, has also been using Copilot for nearly a year. “When developing our main product at Aporia, Copilot helps me with writing the easy code,” he said. “Given good naming conventions, Copilot is able to complete the whole function/next block of code. After that, I just need to review to see that all is good (and possibly add more specific logic). It’s really good with completing generic/repetitive code and it figures it out based on the context.” Copilot’s performance particularly stands out for lengthy and repetitive tasks. An example is launching an HTTP server, which usually requires several lines of code and adjustments, depending on the language it is being written in. In one study, GitHub required half of the participants to write the HTTP server code manually and the other half to complete the task using Copilot.
“Preliminary data suggests that developers are not only more likely to complete their task when using Copilot, but they also do it in roughly half the time,” Salva said.
Pushing developers to better document code Copilot works better when programmers provide it with more detailed descriptions. Interestingly, using it has pushed developers to better document their code.
“In the first few months of the technical preview, we’ve seen Copilot change people’s behavior when writing code – namely by writing better, more verbose comments,” Salva said. “This is not only so that Copilot suggestions improve, but it makes it easier to read for others.” Copilot has also become a good tool for documenting software code, a task that is often overlooked, especially when programmers are chasing deadlines.
Castricato uses Copilot to document his code, autocompleting docstrings and type suggestions in Python. This improves the readability of the code and makes it easier for himself and other developers to manage the code later.
“Copilot has increased the amount of documentation I write for my code by at least 2x or 3x,” he said.
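The kind of documentation in question is ordinary type hints and docstrings, which Copilot can draft from surrounding context. The function below is a made-up example of that target style, not output captured from Copilot.
```python
# A made-up example of the kind of documentation Copilot can autocomplete:
# type hints plus a structured docstring on an otherwise ordinary function.
def moving_average(values: list[float], window: int) -> list[float]:
    """Return the simple moving average of `values`.

    Args:
        values: Ordered numeric samples.
        window: Number of trailing samples to average; must be >= 1.

    Returns:
        One averaged value per position where a full window is available.
    """
    if window < 1:
        raise ValueError("window must be >= 1")
    return [sum(values[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(values))]

print(moving_average([1.0, 2.0, 3.0, 4.0], window=2))  # [1.5, 2.5, 3.5]
```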
The limits of Copilot “[Copilot] is often quite poor at implementing entire algorithms,” Castricato said. “For instance, when I first got Copilot, I wanted to see if it could implement basic forms of dynamic programming without significant guidance. It failed miserably, and I very quickly realized that in order to use Copilot to its fullest ability, you need to explain (through comments) in detail the steps that Copilot needs to take to implement a certain algorithm.” Beyond basic tasks, Copilot will need ample comments to function properly. And in some cases, it will need a fully structured code file to provide useful suggestions.
“In this regard, Copilot is very far away from replacing even the most rudimentary of software engineers,” Castricato said.
Salva acknowledged that Copilot is still a work in progress and a new developer experience. The product team continues to learn lessons from how developers use it and are adjusting the AI model that powers it.
“Copilot tries to understand your intent and to generate the best code it can, but the code it suggests may not always work or even make sense,” Salva said. “While we are working hard and seeing progress in Copilot generating better code, suggestions should be carefully tested, reviewed and vetted, like any other code. We are collecting telemetry data to make the model better, which we prompt users with in the UI.” Is Copilot worth the price? For the moment, Copilot will be offered at $10 per month, or $100 per year, with a 60-day free trial, which seems to be a bargain for software developers. In addition, the tool will be free for students and maintainers of popular open-source projects.
“I think it’s fully worth the price tag,” Thakur said. “As a machine learning engineer, I know that a lot goes into building products like these, especially Copilot, which provides suggestions with sub-milliseconds latency. To build an infrastructure that serves these kinds of models for free is not feasible in the real world for a longer period of time.” Thakur also noted that as the costs of AI infrastructure continue to reduce, the price of Copilot may get lower in the future.
“But at this point, in my opinion, it’s totally worth the price, especially for someone like me who has been using Copilot almost every day since the preview,” he said.
Castricato said that Copilot has saved him multiple hours per week, and sometimes even a day or two of troubleshooting per week.
“Copilot allows me to rapidly test many experiments — often without having to spend extensive time debugging. A set of experiments that conventionally would take me days to implement instead takes me one day,” he said. “As a professional tool, it is well worth its price. It certainly makes me more than $10 a month. I can easily see any large company justifying a Copilot license for all of their technical staff. It’s almost a trivial expense.” Salva believes that this is just the beginning of AI-augmented programming and sees Copilot as the next step in a long line of developer tools.
“As we saw with the compiler, higher-level programming languages and open source itself, tooling advancements have amplified the impact developers have in our world,” he said. “At the same time, those same tools are no substitute for a developer’s experience, skill and creativity.” With better tools, he added, industry demand for developers has steadily increased. “We’re optimistic that GitHub Copilot will have similar effects, complementing the labor of developers and empowering them to write code more easily with greater focus and creativity,” he said.
"
|
3,447 | 2,022 |
"Seemplicity raises $32M to launch productivity platform for security teams | VentureBeat"
|
"https://venturebeat.com/security/seemplicity-security-automation"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Seemplicity raises $32M to launch productivity platform for security teams Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Today, Seemplicity launched a new risk reduction and productivity platform for security teams, backed by $32 million in funding, $26 million of which was provided as part of a series A round led by Glilot Capital Partners.
Seemplicity has attracted a lot of investor attention due to its ability to collect, aggregate and normalize data from third-party security tools – including vulnerability management, appsec, penetration testing, API security and software-as-a-service (SaaS) security tools – in a single location to build automated risk reduction workflows.
For security teams, this approach has the potential to minimize time-consuming remediation actions that security analysts need to take when managing security findings in their environments.
Automating SOC operations The release comes as more and more security teams struggle to keep up with the demands of modern security monitoring; research shows that more than 70% of SOC analysts experience burnout and 64% say manual work takes up more than half their time.
One of the core reasons for this burnout is that most security teams rely on unscalable approaches to security with many manual processes, such as triaging alerts and manually investigating incidents, that take up hours of analysts’ time.
“When it comes to risk reduction, security teams are fighting the perfect storm — untangling thousands of inconsistent security findings from siloed tools on the one hand, while working with multidisciplinary remediation teams on a constantly evolving technology stack on the other,” said Yoran Sirkis, cofounder and CEO of Seemplicity. “The manual, fragmented, and non-continuous operational methodologies in place today that aim to bridge between security findings and remediation teams prevent security teams from scaling and significantly extend the overall time to remediation.” Seemplicity aims to provide security teams with a unified platform where they can integrate data from other existing security tools and build automated workflows to manage the findings of multiple security tools in one place.
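To make the aggregation idea concrete, here is a minimal sketch of the general pattern: findings from different scanners arrive in different shapes, get normalized into one schema and are then routed by simple rules. This is not Seemplicity’s code or API; the tool names, fields and routing rules are invented for illustration.

```python
# Hypothetical sketch: normalize security findings from multiple tools into
# one schema and route them to remediation queues. Not Seemplicity's API.
from dataclasses import dataclass

@dataclass
class Finding:
    source: str      # which scanner reported it
    asset: str       # affected host, repo or API
    title: str
    severity: str    # normalized to low/medium/high/critical

# Each raw feed uses its own field names; map them to the common schema.
def normalize(source: str, raw: dict) -> Finding:
    if source == "vuln_scanner":
        return Finding(source, raw["host"], raw["plugin_name"], raw["risk"].lower())
    if source == "appsec_scanner":
        sev = {1: "low", 2: "medium", 3: "high", 4: "critical"}[raw["level"]]
        return Finding(source, raw["repo"], raw["rule"], sev)
    raise ValueError(f"unknown source: {source}")

def route(finding: Finding) -> str:
    """Pick a remediation queue based on simple, illustrative rules."""
    if finding.severity in ("high", "critical"):
        return "open_ticket_now"
    return "weekly_backlog"

raw_feeds = [
    ("vuln_scanner", {"host": "db-01", "plugin_name": "OpenSSL out of date", "risk": "High"}),
    ("appsec_scanner", {"repo": "payments-api", "rule": "SQL injection", "level": 4}),
]

for source, raw in raw_feeds:
    f = normalize(source, raw)
    print(f.asset, f.title, "->", route(f))
```

In a real deployment the routing step would open tickets in a work-management system rather than print, but the normalize-then-route structure is the core of this class of tooling.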
The organization claims this approach to automation reduces the amount of time security analysts spend on manual operations by 80% and increases remediation throughput sixfold.
A look at the risk-based vulnerability management market As a risk reduction provider, Seemplicity falls within the security and vulnerability management market, which researchers valued at $13.8 billion in 2021, and anticipate will reach a value of $18.7 billion by 2026.
More broadly, the company is competing against risk-based vulnerability management (RBVM) tools and application security orchestration and correlation (ASOC) tools.
One of these competitors is Tenable, which recently announced annual revenue of $541.1 million and offers an RBVM tool that uses machine learning to analyze more than 20 trillion threat, vulnerability and asset data points to help prioritize the most significant vulnerabilities.
Another competing RBVM provider is Balbix, whose Balbix Security Cloud continuously analyzes hundreds of billions of time-varying signals to identify and prioritize vulnerabilities so that security teams can address the most significant ones first.
Balbix recently raised $70 million as part of a series C funding round.
However, Sirkis argues that Seemplicity goes beyond consolidating and prioritizing findings, and instead, “focuses on the outcome rather than the output — reducing time-to-remediation,” so that security teams don’t need to manually operate the entire remediation lifecycle.
"
|
3,448 | 2,022 |
"Microsoft announces new 'family' of identity and access management tools | VentureBeat"
|
"https://venturebeat.com/security/microsoft-entra"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft announces new ‘family’ of identity and access management tools Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Today, Microsoft announced the launch of Entra, a new product family of identity and access management solutions. The family includes existing tools like Azure AD alongside two new product categories: Cloud Infrastructure Entitlement Management (CIEM) and Decentralized Identity.
For users, the Entra product family is designed to protect access to any app or resource by enabling security teams to discover and manage permissions in multicloud environments, so they can secure digital identities end to end.
A look at the Microsoft Entra product family Microsoft’s new CIEM solution, Entra Permissions Management, goes live in July and is designed to provide visibility into permissions for user and workload identities.
Entra Permissions Management provides security teams with a way to detect unused and excessive permissions, so they can more effectively enforce the principle of least privilege and maintain a top-down view of identities across all cloud services, including Microsoft Azure, Amazon Web Services and Google Cloud Platform.
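The core comparison behind this kind of CIEM analysis (granted permissions versus permissions actually exercised) can be sketched in a few lines. The permission names, identities and activity data below are invented, and this is not how Entra Permissions Management is implemented; it simply illustrates the least-privilege gap such products are designed to surface.

```python
# Illustrative sketch: flag unused permissions by comparing what an identity
# is granted against what it has actually used. Data is hypothetical.
granted = {
    "ci-pipeline": {"storage.read", "storage.write", "vm.delete", "keyvault.read"},
    "report-bot":  {"storage.read", "db.read"},
}

# Permissions observed in (hypothetical) activity logs over the review window.
used = {
    "ci-pipeline": {"storage.read", "storage.write"},
    "report-bot":  {"db.read"},
}

for identity, perms in granted.items():
    unused = perms - used.get(identity, set())
    if unused:
        # Candidates for revocation under the principle of least privilege.
        print(f"{identity}: consider revoking {sorted(unused)}")
```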
The organization’s new decentralized identity offering, Verified ID, will be available in early August and will enable users to decide what information they share, when and with whom they share it, and to revoke access when necessary.
Verified ID also provides teams with a method to verify the credentials of users and organizations. For instance, users can store their education and certification credentials and share them with other users on request to verify their identity.
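Conceptually, a verifiable credential is a set of claims signed by an issuer, which anyone holding the issuer’s public key can check without contacting the issuer again. The sketch below uses a generic Ed25519 signature from the Python cryptography package to illustrate that idea; it is not Microsoft’s Verified ID implementation, and the claim fields are invented.

```python
# Conceptual sketch of issue-and-verify for a credential: an issuer signs
# claims, a verifier checks the signature with the issuer's public key.
# Not Microsoft's Verified ID API; claims and flow are simplified.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

issuer_key = Ed25519PrivateKey.generate()      # held by the issuing institution
issuer_public_key = issuer_key.public_key()    # published for verifiers

claims = {"subject": "did:example:alice", "degree": "BSc Computer Science"}
payload = json.dumps(claims, sort_keys=True).encode()
signature = issuer_key.sign(payload)           # the "credential" = claims + signature

# The holder presents (claims, signature); the verifier checks it offline.
try:
    issuer_public_key.verify(signature, payload)
    print("credential verified")
except InvalidSignature:
    print("credential rejected")
```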
Protecting digital identities as part of the attack surface The launch of these new solutions comes as digital identities have become a critical part of most organizations’ attack surfaces, with cybercriminals using identity-based attacks, such as credential theft and social engineering, to gain access to enterprise environments and steal sensitive information.
In fact, research shows that identity-driven techniques accounted for three out of the five top attacks targeting organizations in 2021.
As Vasu Jakkal, CVP Microsoft, Security, Compliance, Identity and Privacy explains, while the digital universe starts with your digital identity, it’s an “attack vector that is getting easily exploited.” Identity “is the battle of security attacks right now and there are 921 attacks per second,” Jakkal said.
“This has almost doubled. There were 579 attacks per second just a few months back, so the escalation of attacks continues to increase and that puts people at extreme risk when it comes to their own personal security.” Jakkal says that this challenge is augmented due to a “dangerous mismatch” between what defenders can protect with existing solutions and the volume of these attacks.
Emerging identity and access management solutions Entra’s launch announcement comes as the global identity and access management market is in a state of growth, with researchers estimating the market at $12.26 billion in 2020 and anticipating it will reach a value of $34.52 billion in 2028 as more organizations attempt to grapple with identity-based attacks and compliance concerns.
As the market grows, Microsoft isn’t the only big tech vendor looking to redefine identity management to better protect digital identities. Just recently, Google announced the launch of passwordless authentication options for users on Chrome and Android to better defend against credential-based attacks.
Likewise, last year, Apple announced the launch of passkeys stored in the iCloud Keychain, which enable users to sign in to websites and applications without using passwords.
With more providers maturing their approach to identity management and security, Microsoft is aiming to differentiate itself from other vendors by building an end-to-end solution to identity protection across employees, partners and customers.
"
|
3,449 | 2,022 |
"What to consider before adopting an intelligent virtual assistant | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/what-to-consider-before-adopting-an-intelligent-virtual-assistant"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community What to consider before adopting an intelligent virtual assistant Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Contact centers have evolved to be dynamic communications hubs that have been put to the test the past two years.
Companies have begun to invest in intelligent virtual assistants (IVAs) because they are effective in improving contact center productivity and the customer experience. However, to get the best return from these virtual assistants, you need to know your strategy. Without clear direction, you ultimately jeopardize customer experience.
Here are questions to ask and challenges to consider before expanding your IVA strategy. Checking these boxes will help ensure the IVA meets your business needs and customer communications preferences.
Question: What level of complexity will the IVA support? As I noted above, one of the first and most important questions you should ask is, “What is the general strategy for the IVA?” Is the IVA going to supplement your agents so they can focus on more complex tasks? Or is the IVA going to focus on one or a few very specific use cases (e.g., password reset, bill payments or two-factor authentication)? When diving into your IVA strategy, it’s really about knowing the complexity you want the IVA to handle and how many of those inquiries you wish to keep from being escalated to live agents. A clear strategy and knowing the complexities that could lie ahead are critical to successful integration.
Challenge: Understanding the technology Understanding the technology is central to designing IVAs that will support the required complexity. Knowing the differences between IVAs and other contact center solutions such as chatbots, voicebots and interactive voice response, known as IVR, will help you guarantee your IVA can effectively support specific use cases, regardless of complexity. Below are different contact center technologies and their key differences.
Chatbot: A chatbot is a program that can automatically communicate with a user without a human agent’s help. They have limited capabilities and typically interact via text. Chatbots are rule-based and task-specific, which allows them to pose questions based on predetermined options. They lack sophistication and will not make any inferences from previous interactions with customers. Chatbots are best suited to question-and-answer use cases.
Voicebot: Voicebots and chatbots have similar functionality. The main difference between a chatbot and a voicebot is the channel. Voicebots involve more complexity as they incorporate speech-to-text, which allows callers to speak to the bot. These solutions use IVR software.
IVR: Briefly mentioned above, IVR software is an automated phone system technology that interacts with callers and gathers information based on how the caller navigates a call menu. It does not use AI. Callers move through menu options through spoken responses or by pressing numbers on their phones. IVR software routes the caller to specific departments or specialists. Some may consider an IVR to be a simple voicebot.
IVA: An intelligent virtual assistant is the most sophisticated of the options and you can use it across various channels. IVAs process natural language requests using natural language understanding or natural language processing and understand situational context, allowing them to handle a more complex range of questions and interactions. These tools closely resemble human speech and can understand queries with spelling and grammatical errors, slang or other potentially confusing language, much like a human agent.
You’re better equipped to advance existing contact center communications strategies when you understand IVAs, the full volume of capabilities they offer and how they differ from other AI-enabled solutions.
Question: What persona should the intelligent virtual assistant represent? For an IVA to be effective, you must understand the persona you want the virtual assistant to represent. This persona will inform how you design your virtual assistant to act based on your company’s brand. To know the persona, you need to know how your customers engage with the contact center and the complexity of the skills that the assistants — live and virtual — need to be able to manage.
Based on these defining characteristics, you can set business rules for the IVA. These rules then create the standard for how to design the IVA. Key questions to answer to uncover persona include: Should the voice be female? Male? Should it have an accent? How many languages should it be able to speak? Will it need to be familiar with jargon from a particular industry? Should it have a casual tone and follow a more informal language model? Or should it be formal and professional? How will customers speak to the IVA? Answering these questions will guide you in designing an effective IVA that you can scale for your brand.
Challenge: Lack of collaboration between IT and CX teams IT teams often work closely with a communications provider to design and implement the IVA. Though they support this process, IT teams typically don’t engage with the customers and might not have a clear picture of their engagement preferences. You can overcome this challenge by increasing collaboration between IT and customer experience (CX) teams.
For example, CX team members can provide insight into the company rules for customer support and how the business manages interaction paths and escalation levels. In banking, this might include the ability for a caller to create a payment plan with an IVA over the phone; however, if the IVA hears a specific balance figure or concern through a particular phrase, it knows to connect the caller to a human agent. If the IVA doesn’t have this level of business logic, the company can jeopardize the customer experience.
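That kind of business logic can be expressed as simple rules layered on top of whatever NLU engine the IVA uses. The sketch below is a hypothetical illustration of the banking example above (the phrases, dollar threshold and routing targets are invented) and shows how an utterance might be routed to automated handling or escalated to a live agent.

```python
# Hypothetical escalation rules for a banking IVA: route to a human agent
# when the caller mentions a large balance or uses a concern phrase.
# Thresholds, phrases and flow names are invented for illustration.
import re

ESCALATION_PHRASES = ("speak to a person", "this is urgent", "fraud", "complaint")
BALANCE_ESCALATION_THRESHOLD = 10_000  # dollars

def route_utterance(text: str) -> str:
    lowered = text.lower()

    # Rule 1: explicit concern phrases always go to a live agent.
    if any(phrase in lowered for phrase in ESCALATION_PHRASES):
        return "live_agent"

    # Rule 2: amounts above the threshold are too sensitive to automate.
    amounts = [int(a.replace(",", "")) for a in re.findall(r"\$([\d,]+)", text)]
    if any(a >= BALANCE_ESCALATION_THRESHOLD for a in amounts):
        return "live_agent"

    # Otherwise, let the IVA handle it (e.g., set up a payment plan).
    return "iva_payment_plan_flow"

print(route_utterance("I want to set up a payment plan for my $450 balance"))   # IVA flow
print(route_utterance("I owe $12,500 and I think there's fraud on my account"))  # live_agent
```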
CX team members are also knowledgeable about how to create personas for customers and how to understand their engagement preferences. They’re also aware of standard industry terms that customers might use when interacting with an IVA that the IT team might not consider. Once IT teams know these terms, they can then create training models for the IVA that include the common terms and phrases.
What the future holds for intelligent virtual assistants One current limitation of IVAs is that they sometimes lack visual engagement. It will be interesting to see IVAs evolve to video channels in the coming years. With video, customer support teams, through the use of IVAs, would use biometrics to understand people’s body language and experience, make inferences about their experience and sentiment and automate video support experiences or escalate to an agent.
For example, in healthcare settings, if someone with a severe illness called their doctor’s office and communicated via IVA-enabled video, the IVA could visually pick up on common symptoms the patient demonstrates. This might include lack of focus, inability to maintain eye contact, drowsiness, etc. The IVA can then note these visible symptoms in the patient’s chart to inform the team of nurses and doctors. The potential of this technology is exciting.
Answering essential questions and addressing challenges related to using IVAs early in the investment process will help you optimize your strategies to leverage automated and intelligent solutions that improve customer experiences. As you deepen your IVA strategies, you’ll better understand the technology’s potential, improve customer experiences and see positive impacts on your operations.
Tim Wurth is director of product management at Intrado.
"
|
3,450 | 2,022 |
"Three ‘soft skills’ for every developer's toolbox | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/three-soft-skills-for-every-developers-toolbox"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Three ‘soft skills’ for every developer’s toolbox Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Software engineers will always be motivated to move up in their careers for better pay, more flexibility and the ability to expand their skills. What they do is in demand, and paired with all of the pandemic dislocations we’ve all seen, it’s no wonder so many people are migrating. In software, this is all part of wider trends that have been given a snappy name: “The Great Resignation.”
A study from Udemy revealed that, even before the pandemic, nearly half of the employees surveyed quit their jobs because of their managers. There are certainly talent shortages now, but even at other times it has been common practice to promote developers quickly to fill empty management slots, even without management experience. It can be an awkward transition when someone used to relying primarily on their technical skills needs to rapidly develop the soft skills that could make them a great manager.
The same Udemy study found that almost 90% of employees value emotional intelligence in their leaders. Becoming a better manager means figuring out how to use your past experiences and strengths because that’s what you have. Now is the time to reflect on the managers you’ve crossed paths with throughout your career, and what they did or didn’t do to make you feel supported and successful. My reflection on my own transition from individual contributor to leading a technical team has surfaced some important soft skills that help me put my team first: Shift to a team-oriented mindset Engineers and developers are accustomed to heads-down delivery, with focused concentration on perfecting their craft. Although some managerial roles will maintain these elements, it’s unlikely a manager will primarily focus on developing them. Instead, the focus shifts to supporting the team holistically.
Developers have a common, very understandable fear about losing their technical edge and credibility, or being deemed “post technical.” They do not actually become post-technical. Instead, their technical expertise is put to work differently: evaluating whether a plan is feasible, for example, rather than executing it themselves.
Becoming a successful manager requires the realization that your performance is defined by your team’s success rather than by your individual capabilities. Serving the team, not your own personal technical interests, is the best investment of your time for results.
As a manager, you should be more focused on developing your teammates’ skills, removing blockers and implementing training to help them grow. And every time you might not have the answer to a question, there’s an opportunity to empower someone on your team to position them as the expert. Because, realistically, making the whole team 1% better is a better investment overall than improving yourself by 5%.
Embrace empathy Your team members may be technical people, but they’re first and foremost themselves.
As much as developers work with computers, they are not machines. We’re human beings with aspirations – not resources or opaque task execution units. Becoming a manager reminds us that we’re all human, and channeling each individual’s strengths as a collective is better for getting things done.
In a managerial role, it’s critical to create a continuous line of communication with your team members. We’re human and it’s not possible to “always say the right thing,” so focus on openness and being able to talk again. Build rapport so you understand what they want out of their career and help them develop a plan to get there.
It’s important for managers to have “strong opinions loosely held,” meaning sustaining a strong point of view to be decisive while serving as an active listener with a willingness to be persuaded differently. This lets a team get clear, unambiguous guidance that indicates where to go, without being cast in concrete.
There’s no simple pattern or method that fits each organization or team, and that’s because we’re all inherently different. Rather than being overly simplistic by “staying flexible,” managers can really aim to understand their team members and treat them as individuals. Working with actual individuals will always present options no shallow business or management book could advise you on, if you learn how to look for them.
Focus on big-picture thinking As the manager, you should know how your team generates value for the rest of the organization. By necessity, software developers need to be heads-down in a lot of details; the inverse soft skill is getting out of those details and seeing the bigger picture for success.
Developers might discuss refactoring a particular code module to make subsequent changes easier, or even just fixing a bug. The larger value often isn’t actually the technology or feature, but rather the ability to develop new things faster or reduce user friction. Managers act as “translation machines”: software engineers deal with concepts like technical debt, while the wider business deals with ideas like user satisfaction and development velocity.
Doing this translation requires understanding how the wider business outside of your team thinks. Being able to describe the CEO’s goals for the company all the way down to what one of your team members is doing this week demonstrates a greater understanding of not only your team, but the organization as a whole. Why does your team’s work matter? Practice. Rinse. Repeat.
So, how do managers develop these skills? The only way is to practice. Those with strength in any skill expose themselves to structured practice. There’s no real magic or “shortcut hack.” Being intentional about the skills you want to build and continuously putting them into practice is truly the only way to constantly improve.
These focuses of practice come from two places: internal and external. In the external world, you pick mentors – someone who is either in your role with a number of years more experience or someone who is in the role you aspire to have one day – and ask for advice.
A good mentor will help you identify patterns, weaknesses, and build relationships to help you grow. Internal practice comes from developing clarity about your personal goals and motivations. The ultimate soft skill is to work on attenuating, not eliminating, your weaknesses and honing your strengths.
David Allen is senior director of developer relations at Neo4j.
"
|
3,451 | 2,022 |
"How human mobility data can drive better business decisions | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/how-human-mobility-data-drives-better-business-decisions"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community How human mobility data can drive better business decisions Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
It’s often difficult to tell what people will do next, but understanding human movement patterns throughout the world is crucial for creating a baseline for the pandemic’s consequences and forecasting the future. In the COVID-19 era, location data has demonstrated how mobility insights are directly connected with adjacent events like supply chain interruptions and inflation. What are the connections between these and what conclusions can organizations draw from them? What’s more, why do they matter? How the pandemic changed movement patterns A migration wave to the U.S. occurred due to the pandemic, the shift to remote employment, layoffs and lockdowns. The Joint Center for Housing Studies at Harvard discovered that an abnormally large number of people made permanent moves early in the pandemic and again at the end of 2020, indicating a 12-14% increase over previous years. Temporary relocations increased by 18% in 2020 compared to 2019.
By looking at human mobility data, we’re able to pinpoint where population growth has happened, as well as what changes in population income groups mean for impacted communities. Consumer flow, reaction to catastrophes and neighborhood changes are all events that can be captured using movement data, which has massive consequences for the health of businesses.
A confluence of issues We can use location data to expose more than simply human movements. For example, location data may tell you not just whether individuals are visiting a specific store, but also when and for how long. Looking at this kind of data can help business owners figure out how supply chain disruptions are hurting their customers. Reduced dwell time, for example, could signal a scarcity of stock as customers are unable to locate what they want to buy.
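As a concrete illustration, dwell time can be estimated from raw location pings by grouping pings into visits per device and store and measuring the span between the first and last ping. The data and the sessionization rule below are invented and heavily simplified; production mobility pipelines are far more involved, but the metric itself is this simple at heart.

```python
# Naive sketch: estimate per-store visit counts and dwell time from location
# pings. Device IDs, stores and timestamps are invented for illustration.
from collections import defaultdict

# (device_id, store_id, minutes since store opening) for pings inside the store
pings = [
    ("dev1", "store_A", 10), ("dev1", "store_A", 14), ("dev1", "store_A", 31),
    ("dev2", "store_A", 50), ("dev2", "store_A", 53),
    ("dev3", "store_B", 12), ("dev3", "store_B", 40),
]

# Group ping timestamps by (device, store) to approximate one visit each.
visits = defaultdict(list)
for device, store, minute in pings:
    visits[(device, store)].append(minute)

# Dwell time approximated as last ping minus first ping for the visit.
dwell_by_store = defaultdict(list)
for (device, store), minutes in visits.items():
    dwell_by_store[store].append(max(minutes) - min(minutes))

for store, dwells in dwell_by_store.items():
    print(store, "visits:", len(dwells), "avg dwell (min):", sum(dwells) / len(dwells))
```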
Studying seasonal and hourly visitor patterns can also reveal whether customers are expecting to snag a previously unavailable item. Everyone remembers the days when lines of shoppers searching for toilet paper snaked around the block.
Inflation has a significant impact on foot traffic and buying decisions as well. We were able to measure visitation to gas stations using location data in these days of soaring gas costs. In 2021, COVID-19 appeared to have a greater impact on total visitation than pricing factors. This year, we’re seeing pockets of significantly reduced visitation for poorer Southern states like Alabama and also for high-poverty cities like Minneapolis and Chicago. Changes in consumer behavior around buying gas will undoubtedly have a downstream effect on purchase of higher-priced items like cars and electronics. Location data can be used to forecast these changes long before they affect business bottom lines.
My team’s research indicated that, despite lower foot traffic, consumers who do go to stores spend more (adjusted for inflation) than they did before the pandemic. The trend prior to COVID-19 was the inverse: more traffic with less spending, reflecting historically higher browsing activity. Customers at in-person retail outlets appear to be shopping with a specific purchase in mind these days. If there are no supply concerns, they also appear to be more likely to complete the purchase than in 2019.
This is supported by an examination of the latest Amazon shop closures.
Amazon is shuttering subsidiaries such as Amazon Pop-Up and Amazon 4-Star, which aimed to let customers “try before they buy” by allowing them to come in, browse and see what caught their eye. Sadly, for Amazon, the pandemic seems to have put a damper on this venture.
Using what you’ve learned So, what’s the upshot of this information and what should it be used for? One of the most significant advantages of mobility data is that, unlike static census data, it provides businesses with real-time, up-to-date insight about brand locations, neighborhoods and movement trends. It’s also possible to conduct research on a global scale. In today’s fast-paced environment, dynamic data is essential. Companies and organizations that rely on static and out-of-date data are bound to make inaccurate assumptions and lose out financially. The pandemic, supply shortages and growing inflation have all heightened the possibility of these negative outcomes.
When mobility data is used efficiently, it allows for more efficient resource allocation, resulting in increased growth and profit. Examining how foot traffic changes or does not change with respect to inflation and supply chain serves as a projection model for improved resource allocation. For example, if you notice more business at your gas station on Wednesdays, it may be a good idea to schedule additional personnel on that day. Alternatively, you might be able to use what you know about stock shipments to reroute deliveries to stores where increased traffic is expected and more stocked shelves are needed.
Furthermore, for businesses that rely on face-to-face sales — such as motels, vehicle rental services and self-storage — foot traffic is strongly linked to revenue.
Predicting income for significant brands is important not only for financial analysts and investors, but also for companies seeking inside information on the competition’s performance. Long-term strategy and focus may be informed by understanding why a competitor is functioning effectively. Long before quarterly statistics hit the headlines, businesses can predict changing patterns of consumer behavior.
Insights about how residents and non-locals use space in the neighborhood are also available, which helps to paint a picture of what they need. For example, if you know a location attracts many people from more than 30 miles away, it suggests an underserved area in the location they’re coming from. Inflation, in particular, has an influence. Companies can use location data to find out if consumers are seeking cheap products inaccessible in their immediate area; this helps with site selection for firms seeking expansion and receptive buyers.
Meeting changing consumer needs Once you start diving into and analyzing the data, foot traffic and location intelligence may give you important insights. Foot traffic analysis indicates the impact of inflation and supply chain interruptions. The bottom line is that the pandemic, as well as the accompanying supply chain disruptions and inflation, have permanently altered movement patterns and consumer behavior. Brands use location data to monitor their internal performance, benchmark against competitors and discover new opportunities. All of these are critical elements for surviving in today’s changing consumer environment.
Elena Solodow is the manager of content and insights at Unacast.
"
|
3,452 | 2,022 |
"How AI is improving the digital ad experience for consumers | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/how-ai-is-improving-the-digital-ad-experience-for-consumers"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community How AI is improving the digital ad experience for consumers Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
We’ve all experienced it: the ad that runs half a dozen times during our favorite TV show, or the online ad that follows us everywhere. We search for something once, and suddenly there are ads for it all over our social media feeds.
As digital audiences have grown, fueled largely by growth in channels like CTV/OTT and streaming audio , advertisers have been pouring buckets of money into delivering their brand messaging to these captive audiences.
While targeting technology has evolved dramatically to provide more relevancy and better personalization, it’s not without flaws. Oversaturation is still a problem. And automation can sometimes over-optimize for a specific, perhaps unintended, trend.
The need for a human touch in advertising Part of the reason ad delivery sometimes misses the mark is that technology doesn’t understand the nuances of human behavior. In fact, AI should be, by design, devoid of biases and influence. But when it comes to advertising, there’s a lot of intuitive information that must be considered, especially as it relates to human behavior.
That’s why, even as AI technologies make a big impact on the ad experience, it still takes a human touch to interpret and inform the model. Here’s how marketers can leverage AI to deliver a better consumer experience.
Identify and respond to trends at scale Certainly, analysts could look at ad performance data to figure out what’s resonating and use that insight to optimize campaigns. But doing it by hand at the speed and scale necessary is impossible. Performance measurement requires multiplatform, real-time analysis — how ads are performing across multiple channels, examined together — and real-time optimization to be effective. By using AI to analyze and optimize, marketers can eliminate repetitive, annoying or misplaced ads.
Leverage multi-touch attribution Digital marketing has traditionally relied on first- or last-touch attribution, meaning the “credit” for the buy, web visit or download is attributed to the first or last impression the consumer was exposed to. But in reality, it’s more likely that a waterfall effect drove the action — multiple touchpoints in a specific placement, strung together in a series — and that experience is infinitely different across every consumer’s journey. AI can analyze this dynamic journey, learning the specific touchpoints and cascade across multiple channels that drive efficacy and delivering that just-right experience to influence buyer behavior.
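To make the contrast concrete, here is a minimal sketch comparing last-touch credit with a simple position-based multi-touch split. The journey data and the 40/20/40 weighting are invented; production attribution models are typically learned from data rather than hand-set, but the structure of assigning fractional credit across touchpoints is the same.

```python
# Minimal sketch: last-touch vs. position-based multi-touch attribution.
# The journey and the 40/20/40 weighting are invented for illustration.
from collections import defaultdict

journey = ["CTV", "streaming_audio", "display", "paid_search"]  # touchpoints before conversion
conversion_value = 100.0

# Last-touch: the final channel gets all the credit.
last_touch = {journey[-1]: conversion_value}

# Position-based: 40% first touch, 40% last touch, 20% split across the middle.
def position_based(touchpoints, value):
    credit = defaultdict(float)
    if len(touchpoints) == 1:
        credit[touchpoints[0]] = value
        return dict(credit)
    middle = touchpoints[1:-1]
    end_share = 0.4 if middle else 0.5   # with no middle touches, split 50/50
    credit[touchpoints[0]] += end_share * value
    credit[touchpoints[-1]] += end_share * value
    for channel in middle:
        credit[channel] += 0.2 * value / len(middle)
    return dict(credit)

print("last-touch credit:", last_touch)
print("position-based credit:", position_based(journey, conversion_value))
```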
Manage volume across platforms AI-based ad platforms are optimized for performance. But to a machine, high performance means getting the most ads in front of the largest, most valuable audience. That can have a decidedly negative firehose effect, not to mention blow through the budget in no time at all. It’s akin to turning on the sink spigot full blast without adjusting flow or temperature. That is why it is important to adjust variables to manage the volume of ad delivery, including setting frequency caps that span multiple platforms, so consumers aren’t bombarded at first and then ghosted.
Deploy smarter contextual targeting Beyond just making ads relevant to the viewer based on known interests or intent, AI can also make them relevant based on the context in which they appear. For example, if an advertiser has set up a weather trigger to sell their latest winter coat, they may not want to have that ad run during a climate change discussion. But what if it’s a weather segment about a change in the climate this week — a drop in temperatures, for example? AI can tell the difference and deliver the ad appropriately.
Include attention metrics Marketers have traditionally used length of play to measure ad effectiveness — the longer a viewer lets it play, the more interested they must be. But this only tells part of the story. How many times have you gotten up and walked away from the TV or put down the device to grab a snack during an ad? With AI, we can optimize for attention metrics, which typically means getting our message out within the context of higher-quality, more compelling content — content audiences are less likely to turn away from. AI helps brands to do that in real time, but again, it requires human insight to know what’s gripping and will keep people’s attention.
AI also needs a human touch Of course, AI is certainly not without risk. In fact, without proper input and tuning, it can start to make poor decisions. For example, if we see that performance of a specific creative is starting to dip, AI may want to pull out of that buy and shift spending elsewhere, especially if CPM is going up as the audience shrinks. But it could be that the campaign is just reaching further down the funnel to the more engaged, high-value customers. The cost may be higher, but so is the return on ad spend because it’s a more valuable audience. Human guidance is key to preventing AI from optimizing incorrectly.
In a world where privacy is an ongoing concern, it’s important for adtech vendors to understand how to reach people in a way that’s meaningful and addressable without being annoying or interrupting their experience. Using AI, backed with human intuition, to optimize targeting and delivery provides a much more curated experience that adds value for the consumer.
TJ Sullivan is EVP of sales at Digital Remedy.
"
|
3,453 | 2,022 |
"Everyone has moved their data to the cloud — now what? | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/everyone-has-moved-their-data-to-the-cloud-now-what"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Everyone has moved their data to the cloud — now what? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Companies of all shapes and sizes increasingly understand that there is a need to continually improve competitive differentiation and avoid falling behind the digital-native FAANGs of the world — data-first companies like Google and Amazon have leveraged data to dominate their markets. Additionally, the global pandemic has galvanized digital agendas, data and agile decision-making for strategic priorities spread across remote workspaces. In fact, a Gartner Board of Directors study found 69% of respondents said COVID-19 has led their organization to accelerate data and digital business initiatives.
Migrating data to the cloud isn’t a new thing, but many will find that cloud migration alone won’t magically transform their business into the next Google or Amazon.
And most companies discover that once they migrate, the latest cloud data warehouse, lakehouse, fabric or mesh doesn’t help harness the power of their data. A recent TDWI Research study of 244 companies using a cloud data warehouse/lake revealed that an astounding 76% experienced most or all of the same on-premises challenges.
The cloud lake or warehouse only solves one problem — providing access to data — which, albeit necessary, doesn’t solve for data usability and definitely not at absolute scale (which is what gives FAANGs their ‘byte’)! Data usability is key to enabling truly digital businesses — ones that can draw on and use data to hyper-personalize every product and service and create unique user experiences for each customer.
The path to data usability Using data is hard. You have raw bits of information filled with errors, duplicate information, inconsistent formats and variability and siloed disparate systems.
Moving data to the cloud simply relocates these issues. TDWI reported that 76% of companies confirmed the same on-premises challenges. They may have moved their data to one place, but it’s still imbued with the same problems. Same wine, new bottle.
The ever-increasing bits of data ultimately need to be standardized, cleansed, linked and organized to be usable. And in order to ensure scalability and accuracy, it must be done in an automated manner.
Only then can companies begin to uncover the hidden gems, new business ideas and interesting relationships in the data. Doing so allows companies to gain a deeper, clearer and richer understanding of their customers, supply chains, processes and convert them into monetizable opportunities.
The objective is to establish a unit of central intelligence, at the heart of which are data assets—monetizable and readily usable layers of data from which the enterprise can extract value, on-demand.
That is easier said than done given current impediments: Highly manual, acronym soupy and complex data preparation implementations — namely because there isn’t enough talent, time, or (the right) tools to handle the scale necessary to make data ready for digital.
When a business doesn’t run in ‘batch mode’ and data scientists’ algorithms are predicated on constant access to data, how can current data preparation solutions that run on once-a-month routines cut it? Isn’t the very promise of digital to make every company anytime, anywhere, all in? Furthermore, few organizations have enough data scientists to do that.
Research by QuantHub shows there are three times as many data scientist job postings as job searches, leaving a current gap of 250,000 unfilled positions.
Faced with the dual challenges of data scale and talent scarcity, companies require a radical new approach to achieve data usability. To use an analogy from the auto industry, just as BEVs have revolutionized how we get from point A to B, advanced data usability systems will revolutionize the ability for every business to create usable data to become truly digital.
Solving the usability puzzle with automation Most see AI as a solution for the decisioning side of analytics; however, the FAANGs’ biggest discovery was using AI to automate data preparation, organization and monetization.
AI must be applied to the essential tasks to solve for data usability — to simplify, streamline and supercharge the many functions necessary to build, operate and maintain usable data.
The best approaches simplify this process into three steps: ingest, enrich and distribute. First, during ingest, algorithms corral data from all sources and systems at speed and scale. Second, these many floating bits are linked, assigned and fused to allow for instant use. Third, the resulting usable data is organized so it can flow and be distributed across customer, business and enterprise systems and processes.
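A toy version of that three-step flow makes the division of labor clear. The record formats, the matching rule and the destination below are invented; the point is only to show ingest, enrich and distribute as separate, automatable stages.

```python
# Toy ingest -> enrich -> distribute pipeline. Record formats, the matching
# rule and the destination are invented; real systems automate each stage at scale.
raw_sources = {
    "crm":     [{"email": "ANA@EXAMPLE.COM", "name": "Ana"}],
    "billing": [{"Email": "ana@example.com ", "plan": "pro"}],
}

def ingest(sources):
    """Corral records from every source into one stream, tagging their origin."""
    for source, records in sources.items():
        for record in records:
            yield source, record

def enrich(stream):
    """Standardize keys, cleanse values and link records that share an email."""
    linked = {}
    for source, record in stream:
        email = next(v for k, v in record.items() if k.lower() == "email")
        email = email.strip().lower()                 # cleanse
        linked.setdefault(email, {"email": email})    # link on the cleansed key
        linked[email].update({k.lower(): v for k, v in record.items() if k.lower() != "email"})
    return linked.values()

def distribute(assets):
    """Hand usable data assets to downstream systems (here, just print them)."""
    for asset in assets:
        print("publishing:", asset)

distribute(enrich(ingest(raw_sources)))
```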
Such an automated, scaled and all-in data usability system liberates data scientists, business experts and technology developers from tedious, manual and fragile data preparation while offering flexibility and speed as business needs change.
Most importantly, this system lets you understand, use and monetize every last bit of data at absolute scale, enabling a digital business that can rival (or even beat) the FAANGs.
Ultimately, this isn’t to say cloud data warehouses, lakes, fabrics, or whatever will be the next hot trend are bad. They solve for a much-needed purpose — easy access to data. But the journey to digital doesn’t end in the cloud. Data usability at scale will put an organization on the path to becoming a truly data-first digital business.
Abhishek Mehta is the chairman and CEO of Tresata.
"
|
3,454 | 2,022 |
"Dissecting the hype over low-code | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/dissecting-the-hype-over-low-code"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Dissecting the hype over low-code Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
We’ve reached the point where the industry attention being paid to low-code development has moved it from “a hot topic” to “normal.” Gartner , for instance, predicts that 65% of app activity will fall into this category by 2024. It’s a monumental industry shift. But – and I cannot stress this enough – it has already taken place.
If we’ve approached a steady state of normality, why keep hyping it? And are the promises attached to that hype, to put it charitably, overblown? For those unfamiliar with low-code, it’s the ability to use a graphical interface to create application software with little or no need to do any programming in the traditional sense. By abstracting the code, these low-code tools and platforms offer the ability to speed up the development process. That’s the promise. It’s not controversial.
Where the hyperbole comes in is what impact that will have on users, organizations and the industry as a whole.
First off, it’s important to recognize that low-code is not new.
It traces its roots back to the rapid application development (RAD) environments of the 90s. We started calling such tools “low-code” a little more than a decade ago, and one could (and I do) strongly argue that it’s now pervasive. It’s not so much a matter of if companies are using low-code, but where.
If IT isn’t doing it, users certainly are. The practice is so generic that it turns up in CRM, machine learning, BPM, ERP… it’s a long list. From that perspective, Gartner’s prediction is likely a lowball estimate, as low-code took over a significant portion of the market a long time ago.
So again, why the hype? What’s the big deal about low-code? I would (and do) argue that it’s really about two other trends.
The first is citizen development , a movement that is also not at all new but received a new and catchier name a little over a decade ago. The idea is that you can get more applications built by going past IT and having users do their own development, and since such users already know what they want, there’s no need to communicate requirements and risk confusion. Since few citizens have learned to code, let alone code well, citizen development leans heavily on the use of low-code platforms and tools.
The second trend is continuous improvement, also an old idea in new packaging – but a very worthy one. The notion is that no effort, no process, no product, no method should be set in stone; organizations should collect data, evaluate options, and look for ways to enhance and evolve things. In software, that means applications aren’t so much deliverables as they are ongoing relationships. Various methodologies operating under the banner of Agile reflect this mindset in the world of code, but an emerging supposition is that low-code is easier to adapt and evolve than code – an idea rife with caveats but an emerging idea nevertheless.
Low-code really can help both of these movements. It can also help traditional developers in traditional IT departments building traditional things. But not all low-code platforms and tools help these movements equally well.
Citizen development
Low-code is all but required for citizen development.
But there’s a real need for a maturity model here. Many proponents of citizen development put pressure on vendors to create tools that are so easy “anyone can use them.” But that idea is based on premises that don’t hold up very well. The first is that users want to develop applications for themselves. Not every salesperson, graphic artist, nurse, financial analyst, etc., wants to create applications. They have day jobs.
The next faulty assumption is that, but for the lack of easy tools, everyone could build what they need for themselves. That the only thing holding them back is that pesky need to type instructions in a text-based language of some kind. But there’s a reason that software development is a professional discipline. Taking away the need to code like a developer doesn’t take away the need to think like one.
Yet another faulty assumption is made by vendors, albeit based on demand. It’s a focus primarily on the construction part of application delivery. Professionals know full well that construction is perhaps 10% of the work required to deliver an application; one cannot omit design, testing, profiling, security, auditability, documentation, education, deployment, change management, and countless other needs for all but the simplest of solutions, and some (e.g., compliance auditing and security) might still be required even for “simple stuff.” Finally, simplicity doesn’t survive the long term. Before long, application builders grow to want more and more functionality. That can’t be done without sacrificing simplicity. And it often can’t be done without a vendor change. Before long, citizen development efforts start to resemble IT-driven projects.
Low-code tools and platforms that can truly help tend to focus on the entire delivery cycle, not just construction. They focus on productivity, not simplicity. They provide some tools for non-professionals and connected yet different tools to technical professionals – and they assist with the communication between them. Such approaches and tools aren’t necessarily the ones being the most hyped, but they’re doing the most good.
So it’s not so much that citizen development is growing (even though it is), but evolved citizen development is growing even more.
Continuous improvement
What happens more often than not when applying low-code to continuous improvement is a focus on the initial release: a minimum viable product (MVP) put out there to collect data that is then used to create the "real" product with "real" tools and platforms (e.g., code).
That’s not what continuous improvement really means, but that is where the hype is focused.
It's also a function of the fact that most low-code tools focus almost exclusively on that construction phase of application delivery, treating a low-code effort as disposable. Most low-code tools have little support for structured deployment or change management.
But, with the right low-code tools, applications can be built quickly, deployed quickly, and modified and redeployed quickly on an ongoing and regular basis. They can take inspiration from the world of Agile development even if they eschew that school’s concepts that are tightly coupled to code.
In fact, better software gets built if continuous improvement is central to an organization’s culture. People (professionals and amateurs alike) aren’t very good at imagining and describing what they want, and even when what they asked for is what they get, they invariably realize that they forgot things. That circumstances have changed. And (the right) low-code tools enable rapid responses to those changing requirements, conditions, and desires.
The hype is in the wrong place
It's not that low-code is a nothing burger. It's a very big deal – so big that it's already widespread. So varied that thousands of vendors do low-code in very different ways. So matter-of-fact that many large commercial platforms include some low-code capabilities to allow for scripting and automation scenarios. It's done because it contributes to productivity. It often contributes to clarity. It sometimes contributes to creative chaos. But it's important to think of low-code development as, well, development.
What’s really going on is that low-code is incredibly useful to movements that are trending and are in the process of becoming big. And this kind of makes sense. Low-code makes many kinds of software development more productive. It’s going to accompany any areas of innovation, no matter what they are.
And it’s not alone.
Artificial intelligence shows up all over as well and is rapidly becoming a component one might rely on while working on something else. Business intelligence grows every day and collecting/analyzing/reacting to data is again becoming something that is used as a component of something bigger.
I tend to want to focus on those higher-level trends, not the technologies that enable them, but I suppose there's plenty of hype to go around, and if low-code (or some providers of low-code) are doing such a good job that they're enabling other innovative movements to flourish, it's probably hype that's well spent.
Mike Fitzmaurice is VP of North America at WEBCON.
"
|
3,455 | 2,022 |
"3 ways businesses can build more resilient data architectures | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/3-ways-businesses-can-build-more-resilient-data-architectures"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community 3 ways businesses can build more resilient data architectures Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Data sharing is a central element of the digital transformation companies have experienced over the past several years. According to Chief Data Officers (CDOs), companies’ ability to build a resilient, privacy-centric and shareable data architecture directly impacts their growth potential.
Trends and predictions from a Gartner survey of CDOs estimate that by 2023, organizations that promote data sharing will outperform their peers on most business value metrics. Yet, at the same time, Gartner predicts that through 2022, less than 5% of data-sharing programs will correctly identify trusted data and locate trusted data sources.
A resilient data architecture is difficult to build. From a privacy perspective, signal loss is a top challenge in building a successful system. Breaking down data silos and merging an abundance of data into one universal snapshot of owned data has made processing and activating data equally challenging.
Data sharing among teams is central to successful digital acceleration and there are three main ways businesses can take steps toward building a more resilient data architecture:
1.) Focus on your team
The sturdiness of any building or bridge is reliant on the team of architects and planners involved in building it. The same principle holds true in the data architecture realm. As businesses build and refine their data architecture, look closely at how engineers, data scientists, legal and privacy experts and project managers are equipped for the job.
Ensure teams have the proper training and certifications to overcome knowledge gaps, especially in the privacy sphere. Ask for team members to get certified by a large professional organization such as the International Association of Privacy Professionals (IAPP). These organizations offer courses and certification for a diversity of roles, so each team member can learn about data privacy requirements in a way that is specific to their job. This can clear up misunderstandings while investing in privacy knowledge across the board.
After investing in training, encourage collaboration among all data team experts and ensure projects are given enough time to foster that collaboration. Siloed teams do not work together efficiently, and rushing a team to bring something to market in a non-compliant way can result in a full rebuild that costs even more time, effort and money to fix.
Teams need time to understand a regulatory space, structure the data architecture to fit rules and privacy implications and work together to build a more resilient system. Teams that work together and have strong relationships internally more often bridge knowledge gaps and create architecture that better serves the end user.
This collaboration may even lead to the creation of hybrid roles where team members share privacy as a secondary expertise. Many organizations have privacy experts working within data teams, but consider how data-informed roles like sales or marketing could benefit from better-shared privacy knowledge. Adapt the team structures to introduce more hybridized data and privacy roles to break down the silos that make data architecture ineffective.
Just as data should not be a siloed asset, privacy should not be a siloed responsibility. Organizations are evolving to respond to this shift.
2.) Make compliance your first mission
Compliance should be integrated into any new project from the start, so it is crucial to align with the compliance team at the very beginning of any project. With a collaborative, trained team, each member's first step should be to dedicate time toward understanding privacy considerations and possible risk areas in order to build a structure that is compliant.
It costs extra time and effort to do this step first, but it ensures the project will be built right the first time. Trial and error is not the best approach for privacy-centric challenges in data architecture. Fixing non-compliant structures retroactively ends up costing more money and time in the long term.
3.) Work from a single source of truth
As the governance, risk and compliance landscape has become exponentially more complex in the past decade, companies have slowly realized they can no longer rely on one system or architecture for orchestrating data. Global organizations are required to comply with many regulations (GDPR, CCPA, IPPA and more) and the nuances between these ever-changing requirements are too complex for one static system. These organizations receive data from many sources and then do the work of obtaining, storing and analyzing the data across parallel data warehouses. Multiple inputs and outputs muddle compliance and privacy goals.
To assure greater resilience, companies need to create and enforce one basic logic for storing and safeguarding data – a single source of privacy data certifying that user privacy is intact. Resilient architecture has the power to show that whatever happens with data, user privacy is respected within every system.
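As a hedged illustration of what that single source of privacy truth can look like, the sketch below centralizes consent and jurisdiction data behind one registry that every warehouse and pipeline queries. The field names and classes are assumptions for illustration, not a prescribed schema or any vendor's product.

```python
# A minimal sketch of "one basic logic" for privacy data: a single registry
# that every downstream system consults before processing. Field names and
# classes are illustrative assumptions, not a reference design.
from dataclasses import dataclass, field

@dataclass
class PrivacyRecord:
    user_id: str
    consents: dict = field(default_factory=dict)  # e.g. {"analytics": True}
    jurisdiction: str = "EU"                      # drives which regulation applies

class PrivacyRegistry:
    """Single source of truth: warehouses and apps ask here, never each other."""

    def __init__(self) -> None:
        self._records = {}

    def upsert(self, record: PrivacyRecord) -> None:
        self._records[record.user_id] = record

    def may_process(self, user_id: str, purpose: str) -> bool:
        record = self._records.get(user_id)
        return bool(record and record.consents.get(purpose, False))

registry = PrivacyRegistry()
registry.upsert(PrivacyRecord("u-42", consents={"analytics": True, "ads": False}))
print(registry.may_process("u-42", "ads"))  # False, and every pipeline sees the same answer
```

The design point is that no parallel warehouse keeps its own copy of the answer; they all ask the same registry, so user privacy is respected consistently within every system.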
Big data is losing its grip on architecture, making way for a privacy-centric approach. An organization may face challenges amid this paradigm shift toward tighter compliance, but tightening the structure internally within teams and externally with unified data channels will bolster resilience and address the challenges all companies are looking to solve in their data architecture.
Julian Llorente is the director of product and data privacy at Tealium.
"
|
3,456 | 2,022 |
"3 most common — and dangerous — holes in companies' cyber defenses | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/3-most-common-and-dangerous-holes-in-companies-cyber-defenses"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community 3 most common — and dangerous — holes in companies’ cyber defenses Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Cyberattack warnings have become so frequent that it’s easy to tune them out. Your company has loaded up on security tools and run its Red Team drills. You’re confident you’ve done all you can.
Executives at Microsoft and the chip-making giant Nvidia were likely feeling the same way until the companies suffered excruciating breaches through common, easy-to-exploit holes. It just goes to show that even the most tech-savvy companies are at risk. Cyberattacks in the U.S. more than quadrupled last year and hackers are still gaining entry in ways both sophisticated and obvious. Here are three common holes they're exploiting in corporate cyber defenses, plus some easy-to-implement solutions:
Cyber defense and privilege escalation
Say you've hired someone on the help desk, granting them privileges to install patches and software. Later, the employee is transferred elsewhere in the organization, but their privileges remain. That's because most companies have strict protocols for handing them out – but not many for withdrawing them. This lack of withdrawal is a major cybersecurity weak point.
As the help desk situation is repeated across your organization, companies become laden with unneeded privilege. Each account pushes you closer to a successful attack. Privilege escalation was the root cause for a breach at Block, where an ex-employee leveraged access that should have been removed.
Some organizations de-emphasize the problem. Most CISOs know hackers gain little by burrowing into frontline workers' accounts. Without admin privileges, there's no way to install malware or ransomware.
Yet as privilege escalates, more fruitful points of entry multiply.
Take the recent breach of Okta, which was as simple as it was effective. Hackers exploited the privileges of a subcontractor's engineer, installed code downloaded from the internet and soon had the keys to a $23 billion cloud software firm.
Then they gained access to about 366 Okta customer accounts. To add insult to injury, Lapsus$, the group responsible, posted screenshots of its bounty and publicly taunted Okta for its failings.
Though no cyber defense is perfect, companies can reduce risk by allowing privilege only as needed – and by withdrawing it with even greater vigor. Protect your company by stopping the problem before it starts.
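One way to put that withdrawal discipline on a schedule, assuming you can export role assignments and granted privileges from your directory, is a recurring audit that diffs each account's grants against a per-role baseline. The roles and records below are illustrative placeholders, not a real directory.

```python
# A hedged sketch of the "withdraw privileges" discipline: compare what each
# account currently holds against what its current role actually needs and
# flag the difference for removal. Role mappings and account data are hypothetical.
ROLE_BASELINE = {
    "help_desk": {"install_patches", "reset_passwords"},
    "marketing": {"crm_read"},
}

accounts = [
    {"user": "jdoe", "role": "marketing",
     "granted": {"install_patches", "reset_passwords", "crm_read"}},  # moved off the help desk
    {"user": "asmith", "role": "help_desk",
     "granted": {"install_patches", "reset_passwords"}},
]

def stale_privileges(account: dict) -> set:
    """Privileges the account holds but its current role does not require."""
    return account["granted"] - ROLE_BASELINE.get(account["role"], set())

for account in accounts:
    extra = stale_privileges(account)
    if extra:
        print(f"revoke from {account['user']}: {sorted(extra)}")
# -> revoke from jdoe: ['install_patches', 'reset_passwords']
```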
The risk of lateral movement
Hackers aren't much different from bank robbers. They both need reconnaissance to be successful. Hackers get it by moving laterally through your organization.
After capturing one system, criminals can move to the next and the next, sizing up defenses and probing for a path to your crown jewels. To be sure, breaching an administrator’s account for shipping and receiving might not bring treasure in the form of confidential information, privilege escalation or lateral movement. But if hackers can access someone in the financial group, devops or even the CEO’s executive assistant, they’ve found a route to sensitive material.
At some companies, an administrator credentialed for one part of a network is automatically granted access to another. It’s a recipe for disaster. If there’s no pressing need for them to be there, it only adds another gateway to attack.
One solution is air gapping, meaning there’s no direct connection between one part of your network and another. Preventive software then adds a second rampart, allowing for adjustments on the fly. When an attack is identified, it automatically air gaps critical data, isolating data you can least afford to lose.
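A simplified sketch of that alert-triggered isolation is below. The block_routes_to() call is a hypothetical placeholder for whatever firewall or SDN API your environment actually exposes; it is not a real library call.

```python
# A simplified sketch of alert-triggered isolation ("automatic air gapping").
# block_routes_to() stands in for a real firewall/SDN API; it is a hypothetical
# placeholder used only to show the control flow.
CRITICAL_SEGMENTS = {"finance_db", "customer_records"}

def block_routes_to(segment: str) -> None:
    # Placeholder: in production this would push a deny rule to the network layer.
    print(f"[isolation] all routes to {segment} revoked")

def handle_alert(alert: dict) -> None:
    """On a credible attack signal, cut connectivity to the data you can least afford to lose."""
    if alert.get("severity") == "critical" and alert.get("lateral_movement"):
        for segment in CRITICAL_SEGMENTS:
            block_routes_to(segment)

handle_alert({"severity": "critical", "lateral_movement": True, "source": "ids"})
```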
A stale response plan
You already have an incident response plan. How fresh is it? If you haven't been running tabletop exercises – staging varied levels of attack to check for vulnerabilities – you're likely at risk. As modes of assault change, you need to know how effectively your defenses can adjust. How quickly can you respond? Who's responsible for shutting down which systems? Who needs to be informed at various levels of a breach? We once got a call from a Fortune 500 medical technology firm with an attack in progress. Privilege escalation and lateral movement were happening at network speeds: As soon as a system was reinstated with its golden image, it was compromised again, literally in milliseconds. At the same time, alarms were ringing across the entire network, with tens of thousands of systems at stake. The incident response plan simply couldn't keep up.
Hackers continue to escalate their game by writing new ransomware and dusting off old tricks thought to be solved. CIOs and CISOs respond by throwing the latest software at the threats and implementing new responses. Yet the real danger lies in complacency. Sometimes it pays to get back to basics: Review privilege escalation, shut down lateral movement and never stop updating and testing response plans.
The time and money a company invests in its cybersecurity today is nothing compared to what comes after a breach. No one wants to explain to their customers why their efforts weren't enough.
Raj Dodhiawala is president of Remediant.
"
|
3,457 | 2,022 |
"How Equinix wants the internet to be ready for the metaverse | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/how-equinix-wants-the-internet-to-be-ready-for-the-metaverse"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How Equinix wants the internet to be ready for the metaverse Share on Facebook Share on X Share on LinkedIn An Equinix colocation datacenter.
Equinix is one of the companies that provide the backbone of the internet, with more than 10,000 companies using its 240-plus datacenters around the world. And so it has a say in how we will build the metaverse, the universe of virtual worlds that are all interconnected, like in novels such as Snow Crash and Ready Player One.
Intel has said that making the metaverse work in real time for billions of people will take a thousand times more computing power than we have today. That's pretty intimidating, but to Equinix, this has been coming for a long time, and the latest view of the metaverse is an evolution of the internet.
Equinix has doubled the number of datacenters that it had a decade ago.
I talked with Matt George, director of digital transformation and GTM strategy at Equinix, about the technical challenges of building the metaverse. One of the big challenges is not bandwidth, which enables you to download a lot of data, but latency, which determines how fast your interactions can be. The metaverse isn’t really going to be the metaverse if we have high levels of latency. It has to be snappy.
Whether we can do this depends on the datacenters of the world. I’ve been inside one of Equinix’s datacenters, and they’re vast.
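To make the bandwidth-versus-latency distinction concrete, here is a minimal sketch (standard library only, example host) that measures round-trip time rather than throughput; it is this number, not download speed, that interactive metaverse-style experiences are most sensitive to.

```python
# A rough latency probe: time the TCP connection handshake to a remote host.
# Host and port are examples only; this measures round trips, not bandwidth.
import socket
import time

def round_trip_ms(host: str, port: int = 443, attempts: int = 5) -> float:
    """Average TCP connect time in milliseconds, a rough proxy for network latency."""
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        samples.append((time.perf_counter() - start) * 1000)
    return sum(samples) / len(samples)

if __name__ == "__main__":
    # Tens of milliseconds is already noticeable in a shared, real-time 3D world.
    print(f"{round_trip_ms('example.com'):.1f} ms")
```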
Here's an edited transcript of our interview.
VentureBeat: Tell us about your background and Equinix’s view of the metaverse.
Matt George: I’ve been at Equinix nine years, so I’ve seen the company transform itself quite a bit. I’ve had several roles at Equinix. When I first joined, my background–I’m not a data center guy. I worked in the media industry in the U.K. and Europe for the likes of Sky and BT. I was involved in IPTV launches, and also Tivo on the DVR side. My role at Equinix covers subject matter specialty in content and digital media, but I also manage a team that sits as an overlay between our SaaS function and our technical solution teams. We’re involved in thought leadership. We’re customer-facing. We have a very interesting role within the company.
From an Equinix point of view, when we talk about the metaverse, for us it has always existed in some respects. It's not a new concept. It's obviously gained a lot of publicity in the last six months, last year, especially because of the presentation by Facebook and their change to Meta. It elevated the subject. But for us, the foundation of Equinix, we see our platform as enabling companies–effectively we have 10,000 companies that are using our platform and building out much more flexible digital infrastructure. That, for me, is the core that will enable companies to thrive in the metaverse.
As you probably know, you can see a number of articles that are very pro-metaverse, and then there are a number of articles that view it as a marketing wrap-up. But coming through that we see the need–our platform has global reach. We have 240-plus data centers. Those data centers are interconnected. That gives any company a platform to be able to cope with the impact of digitization. Whether that is deploying physical points of presence, whether it’s connecting those to edge locations to serve end users with the low latency that’s required, or whether it’s connecting their own ecosystems. Companies that we work with in the gaming and media industry, which are closest to the metaverse, we’re seeing numerous use cases for the platform, and also connecting to other partners.
For me it’s always existed. The basis of the metaverse is that interconnection, that interconnectivity that’s needed to support low latency, to support the high compute that’s needed for some of the virtual uses. That’s where we sit.
VentureBeat: I’ve had good conversations on this topic with companies like Comcast and Subspace.
Cable Labs as well. Subspace was interesting because their plan was to try to light up a lot of dark fiber and bypass bottlenecks in the internet that could improve real time communication for games and get rid of more latency. It’s an interesting proposition where they could turn around and promise good connections to the game companies. “If you have a player in Siberia and they want to play a multiplayer game with someone somewhere else, we can make that happen so they don’t have a bad experience.” It was almost fixing the internet one connection at a time. I do know they also had a bunch of layoffs now, though. Maybe that plan isn’t really–it almost seems like a Band-Aid for the metaverse. You’re fixing it for some players, but that’s only one problem.
The interesting thing I learned from Comcast and Cable Labs was they’re talking about prioritizing traffic in the home. If you had a game or streaming video, that might get more priority than something else. Being able to distinguish and identify the traffic in the home would allow you to get better real time performance with less latency. There are these things that people seem to be trying, and I’m curious what floats up to your level around how to make something that wasn’t really designed for real time to work with real time.
George: Companies access the net behind Comcast or one of the other network hubs that are all over. Where we sit in this whole equation is at that base layer. A lot of gaming companies have those challenges, the user experience challenges, concurrent usage challenges. A lot of them come to Equinix because they can solve those problems altogether.
We work with a company called i3D.net, a gaming platform. They're one of the companies that would sit in the metaverse. But their business challenges are exactly what you said. How they work with us is that they can get direct connections to the likes of Comcast and any other networks or ISPs that get them close to the eyeballs. They will use, effectively, our platform as a map of the world and build out locations close to the edge, so that the latency is as low as possible. It's not only for gameplay. It's for things like when they want to give software updates or new releases. That's very similar for a number of other companies we work with. The use cases we're seeing are very much around user experience, managing concurrent usage, managing peaks and troughs.
But you need this reach and this platform, because–we don’t create services to support the game industry. It’s our base layer of connectivity. There’s a little infographic from Newzoo on the metaverse. It shows you very nicely where companies sit in this sort of relationship. We didn’t speak to them at all, but they put us down at the bottom, which is great. We’re that base layer. There are a lot of companies that will look at services that are supporting this high compute, low latency. Our view is that if you can connect to those within our platform, then it’s building out that ecosystem that makes it better.
VentureBeat: How much capital investment has to happen in order to accommodate the metaverse? I know that some people have put things out there. Intel had said that they expect the metaverse, working in real time with hundreds of millions of people interacting, would require a 1,000 times increase in computing power in the coming years. They put that stake in the ground and said, “You guys have to get real about this.” The spatial internet won’t happen overnight.
George: There’s that balance. We’re talking about a virtual world, and the virtual world won’t happen without physical infrastructure, which is part of the irony here. Where we sit, we see that balance of what we call physical infrastructure and virtual infrastructure, and a couple of other things are driving this.
Typically, 10 years ago, when you talked about an Equinix location, you had to physically own the kit and put the kit in a location and manage it. Now you can spin up services virtually. You don’t even have to set foot inside one of our locations now to connect and start benefiting from this compute power and low latency that’s required. It’s a balance. Companies don’t want to sink huge amounts of cost into physical infrastructure. They want to consume it on demand. You’re going to see companies looking to grow the metaverse looking at these types of opex models, as opposed to sinking capex into further investment and commitment that they may not like.
Certain industries, I think–we’ve talked about gaming. We’ve talked about streaming. For me, again, with the metaverse, it covers a lot of that virtual reality. You look at some of the industries like the automotive industry, the fashion industry–some of those industries are going to look at how they can monetize that type of technology, whether that’s VR or AR or bringing that together. But like anything, they’re not 100 percent sure. They’re dipping their toes in the water, because they don’t want to be left behind.
Again, I go back to the position we find ourselves in. We wouldn’t describe ourselves as a metaverse company, but we’re going to see some business come our way because of the desire of companies to be there. But the virtual world is not going to be enabled without the physical world.
VentureBeat: How much capex do you generally spend in a year?
George: Our capex is built into investing in our data centers, our physical footprint. I'm sure we can send you our latest financial figures in terms of how much investment we've had building out those locations. It's billions, and not just in new locations. Each of our data center locations is a campus. If you were to pick a city, there's probably about 10 locations connected across those metros, and each of those locations will have a build phase. The demand for space is increasing.
Our business model is split across what we call our hyperscale model, where you have the companies that require huge amounts of space – the Microsofts, the Amazons, the cloud providers – and our retail model, where you have enterprises that are shifting to this off-premise deployment. Those figures are available and we can give you those. That will be a sign of the global demand for both physical footprint and also what we now call digital infrastructure services. That growth will continue.
VentureBeat: I'm interested in this global view of just how much investment is going into the internet, and also how much dark fiber is out there. Is it disappearing? Is it still there?
George: The other thing as well is the growing importance of the sea cables, that network. A lot of those land at Equinix locations as well. You're getting a very connected reach across the world using those deep sea cables.
VentureBeat: When I talked to Comcast they said that their operating plan was always to invest 30 percent ahead of where the network needed to be, investing for a couple of years down the road beyond what was necessary for that particular year. That’s how they were able to grow 30 percent during the pandemic years. It’s interesting to see what kind of growth can be absorbed. What did the pandemic actually cause to happen? George: From our point of view, we saw a significant acceleration in the deployment of digital infrastructure. Our business focuses on digital transformation. We were seeing transformation across all sectors pre-pandemic, but the pandemic, as you probably know, just accelerated industries that were maybe behind the curve, and just totally drove industries like streaming, home entertainment, and collaboration tools that were already on an upward curve. The demand for space, the demand for what we classify as interconnection, is enormous.
The other thing that might be of interest to you, if you’re looking at the size of the internet–we measure the private interconnections. We produce something called the Global Interconnection Index, and we’ve been doing that for five years. That tracks the amount of private interconnection that goes across our footprint and others. That’s projected at about a 45 percent growth rate over the next four years. That would be something good to dig into if you’re looking for a general view of how companies are going to connect and the traffic that’s going across those connections.
VentureBeat: For the metaverse, I guess the logical question is, does the internet need a redesign?
George: It depends on whether it's a redesign or a pivot. If you look at how things are shifting, some of the macro trends, we have more devices connecting to the internet. We're creating more data than we can properly use. The other thing that isn't technological is the demographic shift. We have everybody now in buyer mode over the age of 16 who's a digital native. They've grown up with technology. They've grown up with the internet. They've grown up with this desire to change, to adapt very quickly.
Your question is a good one. As to whether it needs it, I'm not sure if it needs it, but it may be driven. And of course what we haven't talked about with the metaverse is the link to the finance side, the cryptocurrencies and things like that. I look at it from the media side, because that's my area, but there's an area of our business that would have some insight into that financial services side as well. I'm not sure yet. I'm not sure whether it needs it, but I'm not sure whether we'll have the choice.
VentureBeat: I also wonder whether the metaverse’s demands on the network will be paid for by something else. The example I like to talk about with Nvidia–they’re doing the engineer’s metaverse, the OmniVerse.
George: Yes, Nvidia is a partner of ours. We work closely with them.
VentureBeat: With the OmniVerse, what’s interesting is there are the gamers and game designers who want to build the metaverse, but it requires a tremendous amount of asset creation. The good thing for them is that all these enterprises are building those assets for things like BMW’s digital-twin factory. Once BMW builds this, it becomes a bunch of assets available in the OmniVerse. They’re interoperable. And then Nvidia was talking about how they may have to build out all of North America in order to test their self-driving cars, because they can’t be tested in the real world yet without killing someone. They’ll test them in a virtual world, and that has to be a digital twin of the real one. They need to build it out for that purpose, but again, game developers could use those assets.
Nvidia's most ambitious thing is Earth-2, the digital twin of the Earth. They want that to be accurate on about a meter-level scale, because then they can feed all that data into supercomputers and come up with climate change models, predicting climate change for decades to come. That's their most ambitious product. I asked their CEO about that and said, "Well, if you're going to do that, don't you get the metaverse for free?" And he says, "Yes, we get the metaverse for free." Is the demand for the metaverse, then, going to be paid for by something else that's already happening?
George: Nvidia's a great example to talk about, because they have use cases on the immediate side. They're working with a lot of the car companies. They're working with BMW. So yes, as they develop different things, different models will be monetized to enable them to have a business impact. What they're looking for, really, is this end-to-end collaboration. It goes back to where–we have always talked about an ecosystem. An ecosystem is really a smaller version of a metaverse. Every industry has an ecosystem. If they collaborate, if more businesses now will work on this on-demand, needs-must basis, then things will scale.
You’re going to see certain industries, certain companies, adapting very well. You’ll see certain other industries maybe at the moment that are just trying to work out what is and isn’t relevant to them. The next thing you’ll probably get is the metaverse’s relationship to sustainability. Is that good or bad? Those are some of my thoughts on it. But where we sit as Equinix, we’ve seen some of this happen before. It might not have been called the metaverse. But it’s the foundation of our platform. It’s going to be fueled by collaboration and interconnection.
VentureBeat: It sounds like you would probably fit in the category of a close follower of the internet, and a metaverse optimist?
George: Me personally? I think so. Again, I wouldn't classify everything I'm talking about as related to the metaverse. For me it's something that is already happening. You don't necessarily need to put a label on it. But the Nvidia stuff we're close to, because Nvidia is a partner of ours. To enable some of the things that they want to do, they will align with Equinix. Jensen Huang talks about the Equinix relationship in some of the YouTube videos that are out there.
"
|
3,458 | 2,022 |
"Decarbonization: The critical role of data in realizing your net zero ambition | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/decarbonization-the-critical-role-of-data-in-realizing-your-net-zero-ambition"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Decarbonization: The critical role of data in realizing your net zero ambition Share on Facebook Share on X Share on LinkedIn Presented by Capgemini Decarbonization is now firmly at the top of the C-suite agenda. Legislation is evolving fast, and civil society is increasingly sensitive to the carbon catastrophe we face. Citizens, customers and the whole of society is demanding climate action, right now.
Consumers, investors and employees expect organizations to be accountable and transparent about climate action. Greenwashing is a major issue, for example, with fossil fuels being marketed as carbon neutral, and 59% of sustainability claims by fashion brands having been found to be greenwashing.
Promises and platitudes are no longer enough. The carbon cost of every activity is scrutinized intensely, since reports on climate risks and social impacts are now expected to be disclosed in routine corporate accounting.
Finance and asset managers are using ESG performance as a decision-making criterion. During the height of the global pandemic in 2020, large funds with ESG criteria outperformed the broader market.
In fact, the most carbon virtuous companies can expect positive impacts on corporate financials, enjoying more favorable financing terms and being seen as more resilient in times of crisis.
Increasingly, organizations face legal demands to act and, even more importantly, to prove their actions. In May 2021, a ruling by a Dutch court ordered Shell to reduce its carbon emissions by 45% by 2030 for failing to deliver formal proof that it was keeping its commitments. This legal thunderclap warns all companies that lip service is no longer enough. Action is everything.
We are entering a new era of carbon accounting, and carbon is our new currency. There is no doubt that enterprises need to be as fully equipped for carbon accounting as they are for financial accounting today.
The critical role of data in reporting on ESG commitments
While many businesses and governments have set net-zero targets, data-powered intelligence is key to bridging the gap between net-zero ambition and action. The shift towards action — and proof of action — demands a super-refined level of ESG data management.
Simple in theory, less so in real life. First, we make sure that all the data is complete, consistent and compliant with the taxonomy and frameworks, cataloging it to build a trusted foundation.
Next, we move from a one-off, batch collection logic to a recurring collection logic, or even continuous integration. This allows us to create a real data platform dedicated to the net zero program designed for analytics and optimizing its value using data science. Not forgetting, throughout the process, to implement the governance required to manage the project over the long term.
Net zero intelligence supercharges decarbonization
Evolving frameworks, regulations and standards require that organizations make their emissions data transparent and visible. Not so long ago, this data was reported annually. Today, environmental data is a new parameter that feeds into real-time decision-making processes, enabling us to make the optimum compromise between cost, time, quality and now carbon.
Data, AI and analytics are key levers to secure and execute the enterprise sustainability agenda.
Data is an essential lever to build resilience and reduce climate and business risks by addressing three main objectives:
Measure to steer progress
Improve to reduce impact
Anticipate, adjust the climate action plan
Data for Net Zero
Capgemini has developed a seminal net zero program, underpinned by an enviable track record in data analysis, governance and the deployment of data solutions and products. These experiences have convinced us of the power of data to fuel the decarbonization process through the creation of Data for Net Zero.
We translate the carbon assessment into tangible insights to monitor and report your ESG commitments at scale through industrialized measurement, powered by our trusted data and AI platforms.
Data for Net Zero enables simulations and advanced analytics that provide centralized real-time enhanced insights. These enable organizations to transform their ESG commitments into a pragmatic and viable action plan.
Create a data strategy for net zero
First, your data vision needs to be seamlessly integrated into your overall net zero trajectory. This means breaking down your net zero objectives into key data projects and indicators, then sharing them right across your business.
To anchor your data challenges, you’ll need to review the best calculation methodology for GHG emissions and define the optimum organizational model and parameters of governance. To achieve your data ambition, you’ll also need to select the right technologies and solutions. And finally, you’ll need to create and nurture the optimum data partner ecosystem, which means focusing on seamless data collaboration.
For example, a large American retailer asks its suppliers to formalize the improvements they have made, from one year to the next, on key environmental indicators, as a condition of the partnership. For large industrial companies, fulfilling this simple request would usually take months of work, collecting information and creating an appropriate response. A foundation of data collaboration focused on sustainability, activated across a data ecosystem, revolutionizes the process, quickly identifying those business needs and organizing data and systems accordingly.
Establish a Sustainability Data Hub
It's crucial that businesses set up a Data for Net Zero nerve center at the crossroads of their enterprise functions. Creating a Sustainability Data Hub will enable you to identify granular data to feed your data hub, from sources such as operational data, the operating system, and external sources, including the emission factor database and suppliers' carbon data.
We'll help you design and set up the optimum technological platform for sustainability-related data, based on your current data estate. You'll be able to measure data-founded insights and report the environmental impacts of your activities, including scope 3. And you'll soon be packaging data models to enhance advanced analytics and help business functions simulate reduction paths that shrink their footprint, powered by AI-driven use cases.
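To ground what that measurement looks like at the lowest level, the sketch below shows the core carbon-accounting arithmetic such a platform repeats at scale: activity data multiplied by an emission factor, rolled up by GHG Protocol scope. The factors and records are illustrative placeholders, not published coefficients.

```python
# A hedged sketch of the basic carbon-accounting calculation a Sustainability
# Data Hub industrializes: emissions = activity data x emission factor, then
# aggregated by scope. All numbers below are placeholders, not real factors.
EMISSION_FACTORS = {                      # kg CO2e per unit of activity
    ("electricity", "kWh"): 0.4,
    ("road_freight", "tonne_km"): 0.1,
}

activity_records = [
    {"activity": "electricity", "unit": "kWh", "amount": 120_000, "scope": 2},
    {"activity": "road_freight", "unit": "tonne_km", "amount": 50_000, "scope": 3},
]

def footprint_by_scope(records: list) -> dict:
    """Aggregate kg CO2e per GHG Protocol scope from raw activity data."""
    totals = {}
    for record in records:
        factor = EMISSION_FACTORS[(record["activity"], record["unit"])]
        totals[record["scope"]] = totals.get(record["scope"], 0.0) + record["amount"] * factor
    return totals

print(footprint_by_scope(activity_records))  # -> {2: 48000.0, 3: 5000.0}
```

The hard part in practice is not the multiplication but keeping the activity data and emission factors complete, current and traceable, which is exactly what the hub exists to do.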
In the automotive industry, one of our clients is applying this analytical platform to its inbound and outbound logistics activities. Until now, it reported its carbon footprint annually. Today, in its decision-making, sustainable development is on the same level as the safety of the vehicles it designs.
Activate ESG data performance
Your ESG data performance can be harnessed and put to work as a corporate asset. First, you'll need to set up a cross-organizational ESG performance steering infrastructure and choose the relevant ESG reporting framework to meet mandatory disclosure requirements from the SEC, EU, HKEX, TCFD and ISSB, to name a few. Then you'll need to measure the ESG insights of all your activities, projects and transactions, as well as those of third parties.
Once this is in place, you can industrialize and automate ESG reporting to comply with evolving regulations. And in the process, you’ll be able to extract a specific environmental dataset to meet and exceed the increasing expectations of investors, customers and other stakeholders.
Cross-functional projects require cross-functional skills
It's a fact that net zero is a cross-functional responsibility that needs a holistic approach. In addition to creating a solid data and AI foundation, it's critical that senior managers in information systems management, data and corporate social responsibility (CSR) are fully invested and committed to the cause. You'll also need the full buy-in of decision makers right across the business, in operations, purchasing, supply chain and sales, who need to track their respective sustainability performance.
Going forward, collaboration with an external partner like Capgemini, with cross-functional expertise in strategy and operations transformation, industrial process engineering, data management and AI, ESG and sustainability, will transform your approach and put decarbonization into action at scale. We'll facilitate cross-functional exchanges, helping you to technically embrace sustainable performance and measurement in your data roadmap.
In truth, ESG is no longer an “optional extra” or a “nice to have.” Instead, it’s now a given that businesses will deliver clear and transparent ESG reporting. A business that doesn’t deliver a comprehensive ESG program is likely to experience poor investor satisfaction, as well as a negative impact on its financial results.
Learn how Capgemini can help you make the best of data to secure your net-zero transformations, far beyond delivering basic annual carbon footprint statistics.
Find out more here.
Vincent de Montalivet is Data for Net zero Offer Leader | Data & AI Group Portfolio | Insights & Data at Capgemini.
"
|
3,459 | 2,022 |
"Tiller Launches New Personal Finance Service for Excel, Partners on Exclusive Offer for Microsoft 365 Subscribers | VentureBeat"
|
"https://venturebeat.com/business/tiller-launches-new-personal-finance-service-for-excel-partners-on-exclusive-offer-for-microsoft-365-subscribers"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release Tiller Launches New Personal Finance Service for Excel, Partners on Exclusive Offer for Microsoft 365 Subscribers Share on Facebook Share on X Share on LinkedIn Tiller’s personal finance service now offers a full suite of tools for customers to manage their financial lives with the power of Microsoft Excel SEATTLE–(BUSINESS WIRE)–May 31, 2022– Today Tiller announced full support for Microsoft Excel, delivering a complete personal finance service built on the industry-leading spreadsheet software program.
This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20220526005011/en/
Monthly budget sheet in the Tiller Foundation template for Microsoft Excel. (Photo: Business Wire)
Tiller's newly upgraded service connects 21,000 banks to Excel and imports daily financial data with a click. Customers can easily track their daily spending, account balances, budgets, and net worth in their Excel workbooks without data entry or logging into multiple accounts.
Working with Microsoft, Tiller is offering Microsoft 365 customers a special 60-day trial of the service.
Additional personal finance features exclusive to Tiller include prebuilt templates, daily account update email, user Community, and top-rated customer support. This summer Tiller will release AutoCat, the first fully customizable transaction auto-categorization engine for Excel.
“Tiller welcomes all Microsoft 365 subscribers,” said Peter Polson, CEO of Tiller. “With full support for Excel, our customers can more easily manage their money, their way, with all their accounts updated in one place, flexible reporting, customizable categories, and uncompromising privacy. We’re certain fans of Excel will fall in love with Tiller.” Signing up with Tiller Microsoft 365 subscribers can claim their 60-day extended free trial of Tiller by following a link in Microsoft’s email announcing the offer. Qualifying users must be based in the US and have an active Microsoft 365 subscription. This offer is exclusively available to Microsoft 365 subscribers.
ABOUT TILLER Tiller is the only automated personal finance service built on Microsoft Excel and Google Sheets, combining the ease of an app with the power of spreadsheets. Tiller provides customers with a clear view of all their finances in one place, flexible templates, a vibrant user community, US-based customer support, strict privacy, and no ads.
Tiller’s team is driven by a mission to help people gain greater confidence and control of their financial lives, guided by the conviction that money matters because life matters more.
View source version on businesswire.com: https://www.businesswire.com/news/home/20220526005011/en/ For more information, press only: Peter Polson, Tiller, (206) 669-0130, [email protected]
"
|
3,460 | 2,022 |
"Oncomatryx Announces FDA and AEMPS IND Clearance for OMTX705, a First-in-class Tumor Microenvironment-targeted ADC, to Treat Advanced Solid Tumors | VentureBeat"
|
"https://venturebeat.com/business/oncomatryx-announces-fda-and-aemps-ind-clearance-for-omtx705-a-first-in-class-tumor-microenvironment-targeted-adc-to-treat-advanced-solid-tumors"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release Oncomatryx Announces FDA and AEMPS IND Clearance for OMTX705, a First-in-class Tumor Microenvironment-targeted ADC, to Treat Advanced Solid Tumors Share on Facebook Share on X Share on LinkedIn BILBAO, Spain–(BUSINESS WIRE)–May 31, 2022– After 15 years of pioneering R&D efforts in the tumor microenvironment, Oncomatryx announces that the FDA and the Spanish Drug Agency (AEMPS) have cleared its investigational new drug (IND) application for OMTX705, a First-in-Class antibody-drug conjugate (ADC) targeting the tumor microenvironment.
OMTX705 is a pioneering ADC that targets, with a novel, dual mechanism of action, the Cancer-Associated Fibroblasts that enable tumor metastasis, drug resistance and immunosuppression.
OMTX705 has already shown unbeatable safety in non-human primates and tumor regression in murine models of pancreatic, breast, lung and gastric cancer.
Oncomatryx will run a multicenter, dose-escalation trial in patients suffering from metastatic solid tumors. OMTX705 will be administered as a single agent and in combination with immunotherapy.
The OMTX705 phase I clinical trial will be run in seven hospitals in the USA and Spain.
About Oncomatryx
Oncomatryx, a global biopharmaceutical company, is pioneering the development of precision ADCs and bispecific antibodies against novel targets in the tumor microenvironment.
The company, located in the Bizkaia Technology Park (Bilbao, Spain), has discovered novel pathways and proteins in the Cancer-Associated Fibroblasts that surround the tumor and promote its invasiveness, immunosuppression and drug resistance.
Oncomatryx has undertaken the development of pioneering drugs against these pathways and proteins of the tumor microenvironment, in collaboration with prestigious universities and hospitals in the USA and Europe, led by the Institute of Cell Biology and Immunology of the University of Stuttgart (Germany).
Oncomatryx is investing 50 million euros in 2022-2024, to develop OMTX705 and other novel tumor microenvironment-targeted ADCs and Bispecific antibodies.
www.oncomatryx.com View source version on businesswire.com: https://www.businesswire.com/news/home/20220524005142/en/ Pedro Esnaola [email protected] +34 946 087 037
"
|
3,461 | 2,022 |
"Menlo Microsystems wants to make electrical switches far more efficient | VentureBeat"
|
"https://venturebeat.com/business/menlo-microsystems-wants-to-make-electrical-switches-far-more-efficient"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Menlo Microsystems wants to make electrical switches far more efficient Share on Facebook Share on X Share on LinkedIn Menlo Microsystems' Ideal Switch.
Menlo Microsystems unveiled its Ideal Switch recently and it also raised $150 million to shake up the electronics industry.
It calls this the biggest innovation since the transistor, as it can power its functions with less than a milliwatt and switch at speeds in the billions of operations a second. The Irvine, California-based company raised the money to start domestic manufacturing, and it hopes its tech will save over $7 trillion by 2050.
The lead investors in the recent round included Vertical Venture Partners and Tony Fadell’s Future Shape, with participation from new investors Fidelity Management & Research Company, DBL Partners and Adage Capital Management. Existing investors also participated. All told, the company has raised $225 million.
Menlo Micro CEO Russ Garcia said that there are a billion ceiling fans in the world. If you replaced existing fans with Ideal Switches, he said, you could save enough energy to take 17 power plants offline.
To Garcia, the Ideal Switch is the electronic industry’s Holy Grail, as it is a device that delivers the benefits of a mechanical relay and a semiconductor switch, with no compromises. It is tiny, fast, reliable, withstands extreme temperatures, is ultra-low loss and can handle thousands of watts. And it can be built with conventional semiconductor equipment.
Fadell noted in a statement that electrical switches that distribute power are shipped on the order of 20 billion a year. Adroit Market Research said the global electrification market is experiencing tremendous growth, projected to reach $128 billion by 2028. The Ideal Switch is transforming the electrification of everything by increasing energy efficiency of the entire legacy electric infrastructure, upgrading 100-year-old relay technology with a microelectromechanical (MEMS) switch.
The company said it can eliminate 20% of global emissions and bring $37 billion in electricity savings by 2050, a major contribution to the fight against climate change. The company is evaluating chip manufacturing locations in California, New York, Texas, and Florida.
An analyst’s view
In an email to VentureBeat, Yole Développement analyst Ana Villamor said there is a big market for relays (also called circuit breakers), which are used for circuit protection in applications such as EVs (very important for battery management systems) or datacom (also linked to UPS systems and batteries).
“These applications will require many more relays in the next years due to the increase in system demand,” she said. “Moreover, with the increase of global energy demand, more battery energy storage systems will be installed around the world to support energy storage. This will also increase the demand for efficient relays. Notice that the reliability and safety of the applications linked to the circuit breakers is of extreme importance (and more specifically, all the applications linked to batteries).” She noted relays are mostly mechanical devices. With the push toward electrification to increase efficiency, there is an opportunity for “electrical relays.” “When going to electrical relays, one of the criteria is to have a fast switching speed, and this can be achieved by using different approaches like MEMS technology (like Menlo Micro) or, for instance, SiC components,” Villamor said. “Most manufacturers are today in R&D, but we see a high interest from different companies to understand this market.” She said that, theoretically, Menlo’s technology is indeed interesting as it decreases the size of the component (very important in EV applications, but also interesting for other applications), and it saves power, which is also in line with the efficiency increase of the systems.
“Today, both SiC and MEMS solutions as relays are still expensive compared to standard relays, but high volumes should considerably decrease the cost of those,” Villamor said. “Silicon relays would be slower in switching than these new options, and when a circuit is linked to a battery, it is key to have a fast response to be able to disconnect the battery from the circuit.”
Origins at GE
The tech was originally invented in a research lab at General Electric in Schenectady, New York, about 18 years ago. The idea was to reinvent the circuit breaker, which was patented 140 years ago by Thomas Edison and has not changed much since then.
“And as you look at the physics, making things smaller, putting more small elements together versus larger elements to carry large amounts of power, there are some huge efficiency gains in that,” Garcia said. “And then of course, you can make things faster. And so all of that was really about the efficiency of power distribution.” The researchers chose a technology that a number of companies were developing back then: MEMS, or microelectromechanical systems. It turned out MEMS could be scalable due to their small size. But the challenge was reliability.
“One of the questions I always get is, ‘Why didn’t anybody else do this before?’ Well, they didn’t do it before, but they tried to do it before and were not able to be successful,” Garcia said. “So one of the keys was drastically reducing the power requirements for it. By doing so, you can drastically increase the efficiency of conducting electricity.” The problem was that the MEMS devices used materials that were already in use in semiconductor factories. GE did research for 12 years and spent $40 million figuring out why the devices were failing with a variety of materials. GE took engineers who did the metallurgy for fins on aircraft engines, coupled them with the device physicists, and asked them to solve the problem.
They came up with an alloy that solved the reliability issues. The result was a reliable switch with substantially improved electrical properties. The founders of Menlo Microsystems asked GE to spin the technology out as a startup around 2016. The company raised its first round to prove it could manufacture the devices in a plant in Schenectady. It raised a second round to further qualify production, and in 2021 the company started generating revenue, hitting just under $10 million in the first year.
Ramping up production
With the new round, the company plans to scale up its volume manufacturing and set up a factory in the U.S. for the sake of supply chain efficiency. The first products are coming soon.
The three primary markets are wireless communications infrastructure, aerospace and defense, and power distribution, which is the largest market and where most of the revenue comes from today.
The Ideal Switch transmits power two to three times better than electromechanical devices or even today’s solid-state devices for wireless, and it uses substantially less power in power distribution, Garcia said.
As to why it got overlooked and took so long to develop, Garcia said the whole world has been focused on renewables and new energy sources. This turned out to be a good way to reinvent electronics.
“What broke the log jam here was the invention of this alloy. And that’s what gives us both the reliability as well as the electrical efficiency that we get from this,” Garcia said. “We can build it in a semiconductor process with the regular semiconductor toolset.”
The company has about 50 people and it could grow by another 50%. Most of the staff is in engineering. Garcia said that 60% of energy in electronic devices is lost through inefficiency.
“As you start employing this in distribution systems, it will make an impact instantly,” Garcia said. “We will probably start on the high end of the market as we scale up. So industrial automation, eventually working through microgrids, charging infrastructure, and then even building management, where you start being able to control every single circuit in a building.” He added, “This is taking that whole concept of digitally controlled energy management one step further, but doing it with a far more efficient device.” Garcia said the feedback since the announcement has been overwhelming. He noted there were more than 130 design commitments from various customers, from aerospace to power delivery. Menlo Microsystems had to focus on the basics of material science to get gains in power efficiency.
As for making the devices in the U.S., Garcia noted geopolitical reasons for controlling the supply chain. On top of that, semiconductor manufacturing isn’t labor-intensive anymore, so labor rates and economics don’t play into the decision in a big way. COVID-19 also made it clear that controlling the supply chain is essential, he said.
"
|
3,462 | 2,022 |
"Ironing out the material challenge for the metaverse and digital twins | VentureBeat"
|
"https://venturebeat.com/business/ironing-out-the-material-challenge-for-the-metaverse-and-digital-twins"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Ironing out the material challenge for the metaverse and digital twins Share on Facebook Share on X Share on LinkedIn Young woman using virtual reality (VR) headset Better file formats and standards for representing 3D structures like USD and glTF have played a crucial role in progressing the metaverse and digital twins.
However, there has been far less agreement on representing materials. Vendors, artists and enterprises mitigate these issues today by sticking within a given ecosystem of design tools and rendering engines or generating multiple versions.
Now, the 3D industry and industrial designers are exploring ways to promote material interoperability across tools. This could allow creators or businesses to create a virtual representation of new cloth, upholstery, style, shoes or paint and have it accurately rendered across various tools and 3D worlds everywhere.
There are actually two complementary material interoperability challenges.
First, each rendering engine has a different approach for capturing and representing the physical appearance of materials under various lighting conditions. Second, there are multiple ways of representing the physical properties of materials, such as how they fold, drape, feel, blow in the wind or resist fire.
It could take a while for the industry to converge on any one format. Various file formats have emerged to help exchange materials across tools and virtual worlds, including U3M, AXF, MDB, MTL, KMP and SBS, each with its strengths and weaknesses. It may be that industry-specific formats dominate within their respective domains, while others are used across domains.
A realistic look
Enterprises creating 3D assets for games and entertainment are exploring how better materials processing techniques like physically based rendering (PBR) can improve the look of virtual worlds. “People think of a material as a fabric, commonly, but the 3D industry talks about materials as a visual thing,” Elliott Round, co-founder and CTO of M-XR, a 3D production platform, told VentureBeat.
Most people are familiar with the way primary paints like red, yellow and blue are combined to create a variety of colors. Materials take this a step further with additional texture maps representing other properties like albedo, metalness, roughness, opacity and diffuse. This is where it gets complicated. “Different render engines have different amounts of material properties,” Round explained. “Some will have five parameters, while others can have ten, so they can all work slightly differently. That’s something we are hoping to solve with other companies to unify 3D a bit better.” The industry has traditionally faced computational and memory constraints for accurately rendering materials. But now, these constraints are starting to fall away with better computers and algorithms. “I’m hoping we get to a position where we no longer have to sort of cut corners and hack materials because there are fewer constraints,” Round said. “It could become unified like in the real world.” His company has been developing tools and techniques for quickly capturing the visual properties of real-world objects into virtual worlds. They started out using tools like photogrammetry and structured light scanning to capture 3D objects. “All of these approaches give you really good 3D geometry, but none of them will give you material information. And that is arguably what is key to photo realism,” Round explained. This includes aspects such as how light reflects off an object and whether it is scattered, absorbed or transmitted.
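As a loose illustration of the parameter mismatch Round describes, here is a minimal Python sketch that maps one generic PBR-style material onto two hypothetical engine schemas. The parameter names, values, defaults and both engine layouts are invented for illustration and do not reflect any real engine's API.

```python
# Minimal sketch: one PBR-style material description mapped onto two hypothetical
# render-engine schemas with different parameter counts. Names are illustrative only.

source_material = {
    "albedo": (0.72, 0.45, 0.20),   # base color, RGB in 0..1
    "metalness": 0.0,
    "roughness": 0.6,
    "opacity": 1.0,
    "emissive": (0.0, 0.0, 0.0),
}

def to_engine_a(mat: dict) -> dict:
    """Hypothetical engine with a small, five-parameter material model."""
    return {k: mat[k] for k in ("albedo", "metalness", "roughness", "opacity", "emissive")}

def to_engine_b(mat: dict) -> dict:
    """Hypothetical engine with extra parameters that must be filled with defaults."""
    converted = dict(mat)
    converted.setdefault("specular", 0.5)     # not present in the source material
    converted.setdefault("clearcoat", 0.0)
    converted.setdefault("subsurface", 0.0)
    return converted

print(to_engine_a(source_material))
print(to_engine_b(source_material))
```

The point of the sketch is only that the same source material ends up with different parameter sets in different targets, which is exactly the interoperability gap the industry is trying to close.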
His team also explored various kinds of swatch scanners often used in the textile industry. These types of scanners from companies like Vizoo and X-Rite can capture visual material properties by scanning fabric swatches or pieces of paper. Artists and enterprise apps can later apply these to 3D objects. Round said these scans are really good but don’t work particularly well for scanning a whole object, prompting research on better whole object capturing techniques. Epic recently invested in M-XR to help scale these tools for 3D creators.
A realistic feel
Companies making physical materials, such as textiles, upholstery and clothing, face additional material challenges. They also need to capture the physical feel of things using various tools and approaches. For example, Bru Textiles, a Belgian textile giant, spent four years developing a workflow for capturing visually and physically accurate textile digital twins for its new Twinbru service. Twinbru partnership development manager Jo De Ridder told VentureBeat, “[The digital twin] is a 100% replica of the physical fabric both physically and specification wise.” This helps design firms create realistic prototypes, such as a new hotel lobby, and quickly explore variations for clients. In the past, they would have to approximate the look through a swatch book and create a mockup that did not always look the same as the finished product. “Having digital twins shortens the supply chain, reduces complexity and increases accuracy,” De Ridder said.
However, it is a complex process. It took the Twinbru team years to develop and streamline the workflow to capture the visual and physical properties and render these into digital twins. They used a combination of X-Rite and Vizoo scanners to capture AXF and U3M files representing visual aspects of the fabrics. In addition, they worked with Labotex to capture the physical properties of the textile into an SAP database that is converted into the appropriate physics engine format. They have created digital twins of the fabric available for Nvidia Omniverse, Chaos Cosmos, ArchiUp and Swatchbook.
Creating a more material metaverse
Improved industry collaboration could help streamline similar workflows for other companies that make and work with textiles, paints, cloth and other materials. A 2020 Digital Fabric Physics Interoperability survey by the 3D Retail Coalition concluded that it is now possible to measure five fabric physics attributes once and accurately translate these into the equivalent physics values for multiple 3D apparel software solutions. These include bend, stretch/elongation, shear/diagonal stretch, weight and thickness.
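To make the measure-once, translate-many idea concrete, here is a rough Python sketch. The attribute values, the units and the per-tool conversion factors are entirely invented and do not reflect the survey's actual mappings or any real apparel tool.

```python
# Minimal sketch: one set of measured fabric-physics attributes translated into the
# parameter scales of two hypothetical apparel tools. All numbers are invented.

measured_fabric = {
    "bend": 0.045,          # arbitrary lab units
    "stretch": 0.18,
    "shear": 0.12,
    "weight_gsm": 240.0,    # grams per square metre
    "thickness_mm": 0.8,
}

# Hypothetical per-tool scale factors; real tools would publish their own conversion tables.
TOOL_SCALES = {
    "tool_a": {"bend": 100.0, "stretch": 10.0, "shear": 10.0, "weight_gsm": 1.0, "thickness_mm": 1.0},
    "tool_b": {"bend": 250.0, "stretch": 5.0, "shear": 8.0, "weight_gsm": 0.1, "thickness_mm": 10.0},
}

def translate(fabric: dict, tool: str) -> dict:
    """Map one measured record onto a target tool's parameter scale."""
    scales = TOOL_SCALES[tool]
    return {attr: value * scales[attr] for attr, value in fabric.items()}

print(translate(measured_fabric, "tool_a"))
print(translate(measured_fabric, "tool_b"))
```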
Industry leaders are also starting to collaborate on open standards. For example, Browzwear, which makes 3D fashion design software, has been collaborating with Vizoo to drive the adoption of the Unified 3D Material (U3M) standard in the fashion industry. One big plus compared to other formats is that it can capture both the fabric’s visual information and physical properties.
“I truly believe that evolving the metaverse to the point of mass adoption requires materials and textures to be accurately represented,” Avihay Feld, CEO of Browzwear, told VentureBeat. “Synthetic visions involving digital twins as frozen snapshots of the physical world are a good start. Digital twins as an evolving picture of reality that is synchronized with reality are even better.” He argues that it is not clear where the metaverse is going, but it is easy to imagine two possibilities. One is a metaverse that is a departure from reality, where virtual worlds defy the laws of physics. The other is a metaverse that imitates reality so users would have experiences that are analogous to those possible in the real world.
He believes that a true-to-life representation of both the visual and physical properties will be essential in this second case. Having realistic things inside the virtual world will make it more immersive and compelling, but it will also enable the metaverse to support a variety of use cases. A major one of these is commerce, not of strictly digital items, but of real-life objects. In this second case, having true digital twins, from the perspectives of visualizing textures and simulating the physics of an object, will be essential.
“It is possible that these two possibilities will coexist, but without the true-to-life experiences, it’s likely the metaverse will remain a fantasy world for the tech-savvy instead of being the transformative new universe it has the potential to be,” Feld said.
"
|
3,463 | 2,022 |
"Infineon and pmdtechnologies Develop 3D Depth-Sensing Technology for Magic Leap 2 | VentureBeat"
|
"https://venturebeat.com/business/infineon-and-pmdtechnologies-develop-3d-depth-sensing-technology-for-magic-leap-2"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release Infineon and pmdtechnologies Develop 3D Depth-Sensing Technology for Magic Leap 2 Share on Facebook Share on X Share on LinkedIn Enabling advanced cutting-edge industrial and medical applications MUNICH–(BUSINESS WIRE)–May 30, 2022– Augmented reality (AR) applications are about to fundamentally change the way we live and work. Later this year, AR pioneer Magic Leap is expected to introduce its newest AR device, the Magic Leap 2. Designed specifically for enterprise use, Magic Leap 2 will be among the most immersive enterprise AR headsets in the market. With industry-leading optics and powerful computing in an ergonomic design, Magic Leap 2 will enable operators to work more efficiently, help companies optimize complex processes, and allow staff to seamlessly collaborate. One of the key features of Magic Leap 2 is the 3D indirect-Time-of-Flight (iToF) depth sensing technology that was co-developed by Infineon Technologies AG (FSE: IFX / OTCQX: IFNNY) and pmdtechnologies ag (pmd).
Magic Leap 2 demonstrates the potential of the REAL3™ 3D Image Sensor. The new and improved IRS2877C Time-of-Flight imager captures the physical environment around the user and helps the device to understand and finally interact with it. Thanks to the 3D imager’s VGA resolution, many different objects can be detected in detail.
The Time-of-Flight technology developed by Infineon and pmd creates an accurate 3D mapping of the environment as well as 3D image of faces, hand details, or objects in real-time. This advancement helps to enable accurate environmental interaction with the Magic Leap 2. In addition, the sensor enables enhanced gesture controls on Magic Leap 2. Infineon and pmd optimized the 3D sensor to bring the power consumption down to a minimum, reducing heat and increasing battery life of Magic Leap 2.
“Sensing the environment precisely and in real-time is key for augmented reality applications,” says Magic Leap CTO, Julie Green. “Magic Leap 2 will provide an even more immersive user experience. These superior functionalities will help to connect the physical and digital world even more seamlessly.” “We have introduced our 3D imager technology in a professional environment, where precision and reliability are life-saving features,” says Andreas Urschitz, Division President Power & Sensor Systems and designated CMO at Infineon.
“The latest 3D time-of-flight technology is going to enable new augmented and mixed reality applications for healthcare and industry. It’s about to change the way we live and work fundamentally.” “Our technology helps Magic Leap 2 to detect precisely to the millimeter the location of objects in a physical environment. Virtual objects can be placed in the real world, and stay in place when the user walks around a room and would be obscured, as other real objects appear in front of them,” says Bernd Buxbaum, CEO of pmd. “It also works reliably in bright sunlight or complete darkness, where other depth-sensing solutions quickly reach their limits.” More and more AR will be applied in industrial and medical environments that make use of these technological advancements. For example Brainlab, a Munich based digital medical technology company, combines their patient-specific, AI-driven anatomical segmentation visualization software with spatial computing from Magic Leap, to provide surgeons an increased understanding of the patient’s anatomy.
Visitors of Hannover Messe 2022 can experience a live demo of Magic Leap 2 in Hall 9, Booth D36 (ifm electronics).
About pmd
pmd is the worldwide leading 3D Time-of-Flight CMOS-based digital imaging technology supplier, headquartered in Siegen, Germany, with subsidiaries in the USA, China, and Korea. Started up in 2002, the company owns over 450 worldwide patents concerning Time-of-Flight based applications, the PMD measurement principle, and its realization. Addressed markets for 3D sensor systems and software from pmd are industrial automation, automotive, and a wide field of consumer applications, including augmented reality, smartphones, drones, and cleaning robots.
Additional information is available at www.pmdtec.com
About Infineon
Infineon Technologies AG is a world leader in semiconductor solutions that make life easier, safer and greener. Microelectronics from Infineon are the key to a better future. With around 50,280 employees worldwide, Infineon generated revenue of about €11.1 billion in the 2021 fiscal year (ending 30 September).
Infineon is listed on the Frankfurt Stock Exchange (ticker symbol: IFX) and in the USA on the over-the-counter market OTCQX International Premier (ticker symbol: IFNNY). Further information is available at www.infineon.com This press release is available online at www.infineon.com/press.
View source version on businesswire.com: https://www.businesswire.com/news/home/20220530005010/en/ Media Contact Sabrina Buxbaum (Director) Marketing & Corporate Strategy [email protected]
"
|
3,464 | 2,022 |
"ICCPP ODM+ Unveils Ceramic Coil Disposable and Multi-category Solution Resulted from Its Digital R&D Strategy | VentureBeat"
|
"https://venturebeat.com/business/iccpp-odm-unveils-ceramic-coil-disposable-and-multi-category-solution-resulted-from-its-digital-rd-strategy"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release ICCPP ODM+ Unveils Ceramic Coil Disposable and Multi-category Solution Resulted from Its Digital R&D Strategy Share on Facebook Share on X Share on LinkedIn BIRMINGHAM, England–(BUSINESS WIRE)–May 31, 2022– ICCPP launched the full-range ceramic coil solution including disposable ceramic coil at Vaper Expo UK in Birmingham on May 27, 2022 — a new category empowered by its “Digital Transformation” strategy, marking a new era in the global vape industry.
For the first time, ICCPP exhibited as a group and showcased its branded ODM+ solution and full-range ceramic coil solution. Adopting the Gene Tree technology, ICCPP’s ceramic coil will be integrated with the ODM+ business to promote a greater evolution in the industry. The ceramic coil, ICCPP’s first product released since it jointly launched the digital strategy with SAP and PwC, is bound to become a benchmark in the global vape industry.
Breakthrough: ceramic coil solution for disposable and multiple categories
In the vape industry, ceramic coil technology is a key indicator of a company’s technological advancement. ICCPP began to engage in ceramic coil technology in 2019 by forming an R&D team led by “Peacock Program” talents, aiming to achieve an industry-leading level with the third or fourth generation of ceramic coil, and to fully surpass its peers with the fifth or sixth generation.
Committed to becoming the leader in atomization technology, ICCPP brings a natural and pleasant vaping experience to customers with its nanocrystalline (NC) materials and technical innovation. From thick film to thin film, and from wire heating and surface heating to solid heating, ICCPP has constantly optimized the data model of its NC ceramic coil and enhanced the team’s R&D strength to address the pain points in the vape coil industry. The newly launched Gene Tree thin film technology features five technical advantages, namely powder-free operation, improved safety, finer atomization, a longer lifespan, and improved flavor.
As a result of the continuous endeavor of the R&D team, ICCPP recently launched the new generation of Gene Tree film technology and product applications, which, compared to the third-generation ceramic coil, has improved the flavor, increased the coil lifespan, and improved product reliability, bringing users a natural and pleasant vaping experience.
ICCPP will soon launch a disposable product equipped with the Gene Tree ceramic technology. With such features as longer-lasting flavor, more puffs, better restoration of flavor, and higher reliability, it will bring a revolutionary user experience, trigger a revolution in the industry, and become an industry milestone.
“Digital + technological innovation” to provide ODM+ digital solution
ICCPP’s core advantages at the technical level are derived from its huge investment in scientific and technological innovation. On May 26, ICCPP announced the official launch of the “Digital Transformation Project” at its Shenzhen headquarters, becoming the world’s first major company in the vape industry to cooperate with SAP and PwC. This marks that ICCPP will lead the industry into the phase of digital transformation.
The digital strategy refers to the overall digital innovation of the company integrating R&D, manufacturing, export, overseas marketing, etc. In the future, ICCPP will create an unprecedented “digital closed loop” and new digital competitiveness by creating digital R&D, digital manufacturing, digital marketing, and digital consumer experience. The mode of “digital + technological innovation” will become ICCPP’s underlying competitive advantage, including digitization + chips, digitization + fragrance, and digitization + smart manufacturing. “The company will drive development through digitalization and create new value through digitalization.” Through digital transformation, the company aims to bring greater contributions to the industry and global consumers.
In order to develop the ceramic coil, ICCPP quickly simulated the atomization effect by setting up a data model and inputting parameters such as interface and size, so as to find the optimal configuration without sample production, thus accelerating product development and delivery. At present, ICCPP has a set of long-term plans for refillable and disposable ceramic coil products, and has reached cooperation intentions with at least five major overseas customers.
Mission of a leading globalizing company: to empower global customers with technical innovation
ICCPP knows that customers in this era don’t need the traditional ODM anymore, but the branded ODM+. Therefore, the company has launched a new strategic partnership integrating brand positioning, user insight, product design, product development and after-sales, to guide customers to embrace a new era of product customization.
Firstly, ICCPP promises to provide customers with the latest technology applications, and give them the same or even better technology as ICCPP’s own brands. Secondly, ICCPP shares all cutting-edge technologies with the entire industry chain in the strategic cooperation, including labs and institutes, automated factories, overseas user insights and global marketing services. This confidence of ICCPP comes from its rich and successful overseas experience and the ability to provide customers with one-stop services. ICCPP is committed to building two global platforms to empower customers: a truly open, Android-like vape technology platform, and a globalization platform based on years of overseas experience.
As the first strategic category after the digital upgrade, the ceramic coil solution can adapt to the full range of product lines. Embodying the industry’s most advanced technologies, it is set to create a new vape category, improve user experience and bring changes to the market structure. On the one hand, the ICCPP ODM+ business will undergo a digital and efficiency upgrade of its whole operation system integrating market survey, R&D, product design, order, delivery, and after-sales; on the other hand, it will establish in-depth cooperation with global customers on the ceramic coil solution. So far, ICCPP has reached cooperation intentions on this ceramic coil technology with many major overseas customers. ICCPP expects to bring more surprises to global customers and restructure the market in the future.
The original source-language text of this announcement is the official, authoritative version. Translations are provided as an accommodation only, and should be cross-referenced with the source-language text, which is the only version of the text intended to have legal effect.
View source version on businesswire.com: https://www.businesswire.com/news/home/20220528005017/en/ Media Relations: Tingkai Xu, Senior PR Manager Mobile: 15957944779 Email: [email protected]
"
|
3,465 | 2,022 |
"Despite metaverse buzz, 60% of consumers have zero interest in virtual shopping | VentureBeat"
|
"https://venturebeat.com/business/despite-metaverse-buzz-60-of-consumers-have-zero-interest-in-virtual-shopping"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Despite metaverse buzz, 60% of consumers have zero interest in virtual shopping Share on Facebook Share on X Share on LinkedIn While buzz has surrounded the supposed infinite potential of the metaverse and AR/VR technology as tools for future online marketplaces, there has also been a decline in revenue for several ecommerce companies in recent years, leading some organizations to go back to the drawing board when it comes to good digital CX. A new report from Productsup has surveyed consumers’ tastes and expectations when it comes to digital hybrid shopping experiences, with a particular focus on sustainability and the metaverse. For many companies looking to boost sales in the digital marketplace, the results illustrate an uphill battle: according to the report, 60% of shoppers have zero interest in buying virtual goods whatsoever.
With revenue from the metaverse expected to reach $800 billion in 2024, it’s no wonder that forward-thinking organizations might be eager to cater to customers who aren’t quite yet interested in online-only spending. Overall, the results from Productsup’s report indicate that customers are chiefly keen on digital CX that offers transparency, accessibility and availability.
In the past decade, sustainability and DEI initiatives have risen to the forefront of consumers’ minds; as they decide whether to purchase a company’s product, they’re more and more likely to ask why and how that product is made. Consumers tend to avoid products that’ll end up in a landfill, and instead prefer ones that are reusable (71%) or recyclable (70%). Despite this, consumers say information on a product’s reusability (34%) and recyclability (30%) can be difficult to find.
It’s no longer enough to include a “fair trade” or “biodegradable” label on your paper coffee cups, for example — not only do 43% of consumers want a detailed explanation as to how the product is biodegradable, but 40% also want information that proves that the product aligns with its “sustainable” label. “Consumers aren’t distracted by ‘greenwashing,'” said Lisette Huyskamp, chief marketing officer at Productsup. “[Their] expectations can’t be met unless product information is managed with a strong P2C [product-to-consumer] strategy.” While consumers across all generations want more product information, how best to present said information depends on each generation. Gen Z welcomes the advent of the metaverse and digital-only shopping much more readily than their older counterparts. Similarly, Gen Z is much more likely to prefer information that’s presented via online comparisons (40%) or QR codes (37%). On the other end of the generational spectrum, those 55 years or older tend to prefer information that’s easy to find and contained within the product description itself.
Finally, customers tend not to want an “either/or” shopping experience; i.e., they want access to product information and deals that are accessible in both the metaverse and the store. Roughly an equal share of consumers have indicated they’re more likely to buy a product if a deal is offered exclusively in a store vs. online (55% vs. 54% respectively), meaning that companies should offer coupons and sales in both physical and digital venues. Technology that blends physical and digital shopping is also welcomed: 47% of consumers would make a purchase if they could access product information via a store’s mobile app while they’re shopping in-person, for example. Augmented reality (AR) technology, such as smart mirrors and mobile filters, could also be used to motivate consumers at the store (41%) or on the company’s website (42%).
All in all, the results indicate that while many consumers are looking forward to the expected increases in speed, convenience and information offered by the metaverse and other digital marketplaces, they’re not quite yet willing to abandon the tried-and-true methods of decades past. “In today’s commerce world, brands and retailers need to deliver nuanced experiences tailored to consumers wherever they shop,” said Huyskamp.
Productsup’s report is based on a survey of nearly 5,700 consumers age 16 and up across the U.S. and Europe, asking about their preferences, expectations and behavior toward hybrid shopping experiences.
Read the full report by Productsup.
"
|
3,466 | 2,022 |
"CoinMarketCap takes the Capital conference to the metaverse | VentureBeat"
|
"https://venturebeat.com/business/coinmarketcap-takes-the-capital-conference-to-the-metaverse"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages CoinMarketCap takes the Capital conference to the metaverse Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Cryptocurrencies, the low-hanging fruit of the blockchain technology tree, have seen wider adoption when compared to other emerging blockchain fruits. Gartner predicts 20% of large enterprises will use digital currencies by 2024. However, new (or not-so-new now) on the horizon are NFTs, the metaverse and Web3 — which are becoming more mainstream. As blockchain technology continues to make more inroads into the enterprise and the buzz on NFTs, the metaverse and Web3 increases, Web3 innovators and stakeholders are increasingly gathering at events to discuss the spread, direction and impact of Web3.
One such event is “The Capital: Time to ship,” a virtual conference hosted in the metaverse by CoinMarketCap (CMC). The two-day event, held on May 26th and 27th and cosponsored by industry giants like Trust Wallet, Klaytn, and Apollo X, among others, focused on the metaverse, blockchain, NFTs and Web3, drawing blockchain and Web3 enthusiasts from across the world.
Keynote speakers at the conference included Changpeng “CZ” Zhao, founder and CEO at Binance, Michael Saylor of MicroStrategy, Jaynti Kanani, cofounder of Polygon, Sebastien Borget, cofounder and COO at Sandbox and others.
VentureBeat was on the ground to cover the event and drill deep into what Web3 and blockchain stakeholders consider to be use cases for the emergent virtual world. While one would think a conference like Time to ship wouldn’t feature cryptocurrencies so heavily on its agenda, since it was more about the metaverse, Web3 and NFTs, the opposite was the case. The conference began with Zhao delivering a keynote address on the state of crypto.
During his keynote address, Zhao touched on several sensitive issues including the Ronin Hack and how Binance helped Axie Infinity get back on its feet through a cash injection.
“Any single project failure has spillover effects on multiple fronts: investors lose confidence, the mainstream media paints a negative picture and, most importantly, people do get hurt. Most industry players do want to help each other out — especially when there’s a crisis,” said Zhao.
Speaking further on the role of Binance in the Twitter takeover, he said he wants Twitter to be the platform for free speech and would like to help the social network make the transition to Web3. Explaining market movement and the present bearish momentum that the cryptocurrency market is experiencing, he said there was an all-time high in 2013, an all-time high in 2017 and an all-time high in 2021, adding that there would always be lows in between, as historical numbers don’t predict the future.
Zhao’s views were mirrored by Saylor who, in his fireside chat on day 1, also broached the floundering state of the market from a psychological point of view. Speaking on his firm’s losses in the bear market, he said “trying to sell before a loss is a trader mentality.”
Decentralized finance gains new heights
Decentralized finance (DeFi) was at the center of discussion at the conference, with panel discussions around decentralized crypto exchanges (DEXs) and yield farming. Nikita Ovchinnik, chief business development officer at 1inch, said the loyalty of users helps to improve the onboarding process. He believes user loyalty can be a valuable asset during a crypto crisis (aka crypto winter) like the one being experienced now.
During a discussion on the Ethereum blockchain migration to proof of stake (PoS), Jan Liphardt, founder and chief technologist at Boba Network, said the chains offering only low cost will be in trouble, but chains that solve major gaps in the ecosystem like cross-chain transactions will thrive. Leo Chen, VP of engineering at Harmony Protocol, believes PoS is still in the early stages as a consensus mechanism and it hasn’t experienced a huge attack. PoS will introduce new challenges to the ecosystem, according to Chen.
Dan Roberts, editor-in-chief at Decrypt, said the crypto winter can be a good thing since it shifts the attention of people from material gains to building sustainable products.
NFTs and new use cases
Discussed at length and with new insights, NFTs were the darling of the CMC conference. With NFTs becoming a store of value during the lockdown and even after, it’s become imperative to examine new use cases and the criteria believed to be key if NFTs are to survive beyond the hype.
Speaking on the use cases of NFTs, Roneil Rumburg, cofounder and CEO at Audius, compared the technological advancement to hypertext in the early days of the internet. He said no one talks about hypertext now, but everyone talks about the things the technology enabled, like Google, Amazon and Facebook.
The use case of virtual and in real life (vIRL) goods is slowly becoming commonplace, especially with the emergence of NFTs that allow you to own physical items without taking delivery of them. Owning a vIRL means having the option of selling it, holding the NFT, or redeeming the physical item, which also spares the environment excess shipping.
Metaverse not yet mature
As the conference neared its end, Paul Caslin from Hello made a case for adoption, stating that while the metaverse can bring people together, most virtual worlds currently don’t bring mainstream audiences into the space or deliver entertainment that’s captivating. Perhaps this is proof that the metaverse isn’t near maturity yet? While experts say technologies like AR, VR, blockchain, 5G and AI will converge to improve the metaverse, we’re barely scratching the surface. The potential is undoubtedly huge, but it might take some celebrity firepower to spark conversations in new sectors of the endeavor. As Caslin noted, the floodgates can be opened via different strategies: “…once it’s good for one person, it’s good for everyone. There are many people on the sidelines at the moment waiting to see what happens,” he said.
"
|
3,467 | 2,022 |
"What is an embedding for AI? | VentureBeat"
|
"https://venturebeat.com/ai/what-is-an-embedding-for-ai"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What is an embedding for AI? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
When a question is presented to an artificial intelligence (AI) algorithm, it must be converted into a format that the algorithm can understand. This is often called “embedding a problem,” to use the verb form of the word. Scientists also use the word as a noun and talk about an “embedding.” In most cases, the embeddings are collections of numbers. They are often arranged in a vector to simplify their representation. Sometimes they’re presented as a square or rectangular matrix to enable some mathematical work.
Embeddings are constructed from raw data that may be numerical, audio, video or textual information. Pretty much any data from an experiment or a sensor can be converted into an embedding in some form.
In some cases, it’s an obvious process. Numbers like temperatures or times can be copied pretty much verbatim. They may also be rounded off, converted into a different set of units (say to Celsius from Fahrenheit), normalized or cleaned of simple errors.
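As a minimal sketch of that kind of cleanup (the readings, the plausible-range check and the scaling choice below are invented for illustration), the process might look like this in Python:

```python
# Minimal sketch: turning raw numeric sensor readings into embedding-ready values.
# The readings, the plausible range and the rounding are invented for illustration.

def fahrenheit_to_celsius(temp_f: float) -> float:
    """Convert a Fahrenheit reading to Celsius."""
    return (temp_f - 32.0) * 5.0 / 9.0

def min_max_scale(values: list[float]) -> list[float]:
    """Rescale values to the 0..1 range so features share a similar scale."""
    lo, hi = min(values), max(values)
    if hi == lo:  # avoid dividing by zero when all readings are identical
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

raw_fahrenheit = [68.0, 75.2, 80.6, 1000.0, 71.6]             # 1000.0 is an obvious sensor error
cleaned = [t for t in raw_fahrenheit if -40.0 <= t <= 130.0]  # drop implausible readings
celsius = [round(fahrenheit_to_celsius(t), 1) for t in cleaned]
scaled = min_max_scale(celsius)

print(celsius)  # converted to Celsius: [20.0, 24.0, 27.0, 22.0]
print(scaled)   # normalized to 0..1, ready to sit inside an embedding vector
```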
In other cases, it’s a mixture of art and knowledge. The algorithms take the raw information and look for salient features and patterns that might help answer the question at hand for the AI. For instance, an autonomous car may look for octagonal patterns to identify stop signs. Similarly, a text algorithm may look for words that generally have an angry connotation so it can gauge the sentiment of a statement.
What is the structure of an AI embedding?
The embedding algorithm transforms these raw files into simpler collections of numbers. This numerical format for the problem is usually a deliberate simplification of the different elements from the problem. It’s designed so that the details can be described with a much smaller set of numbers. Some scientists say that the embedding process goes from an information-sparse raw format into an information-dense format of the embedding.
This shorter vector shouldn’t be confused with the larger raw data files, which are all ultimately just collections of numbers. All data is numerical in some form because computers are filled with logic gates that can only make decisions based on the numeric.
The embeddings are often a few important numbers — a succinct encapsulation of the important components in the data. An analysis of a sports problem, for example, may reduce each entry for a player to height, weight, sprinting speed and vertical leap. A study of food may reduce each potential menu item to its composition of protein, fats and carbohydrates.
The decision of what to include and leave out in an embedding is both an art and a science. In many cases, this structure is a way for humans to add their knowledge of the problem area and leave out extraneous information while guiding the AI to the heart of the matter. For example, an embedding can be structured so that a study of athletes could exclude the color of their eyes or the number of tattoos.
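As a minimal sketch of that kind of reduction (the player record, its field names and the choice of which fields to keep are all hypothetical), the idea looks like this:

```python
# Minimal sketch: reducing a verbose raw record to a short, information-dense vector.
# The record, the field names and the values are hypothetical.

raw_player = {
    "name": "J. Doe",
    "height_cm": 198,
    "weight_kg": 95,
    "sprint_40m_s": 4.9,
    "vertical_leap_cm": 81,
    "eye_color": "brown",   # deliberately excluded from the embedding
    "tattoo_count": 3,      # deliberately excluded from the embedding
}

# The human-chosen fields encode domain knowledge about what matters for the question.
EMBEDDING_FIELDS = ["height_cm", "weight_kg", "sprint_40m_s", "vertical_leap_cm"]

def embed_player(record: dict) -> list[float]:
    """Keep only the fields judged relevant and return them as a numeric vector."""
    return [float(record[field]) for field in EMBEDDING_FIELDS]

print(embed_player(raw_player))  # [198.0, 95.0, 4.9, 81.0]
```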
In some cases, scientists deliberately begin with as much information as possible and then let the algorithm search out the most salient details. Sometimes the human guidance ends up excluding useful details without recognizing the implicit bias that doing so causes.
How are embeddings biased? Artificial intelligence algorithms are only as good as the embeddings in their training set, and those embeddings are only as good as the data they are built from. If there is bias in the raw data collected, the embeddings built from it will, at the very least, reflect that bias.
For example, if a dataset is collected from one town, it will only contain information about the people in that town and carry with it all the idiosyncrasies of the population. If the embeddings built from this data are used on this town alone, the biases will fit the people. But if the data is used to fit a model used for many other towns, the biases may be wildly different.
Sometimes biases can creep into the model through the process of creating an embedding. The algorithms reduce the amount of information and simplify it. If this eliminates some crucial element, the bias will grow.
There are some algorithms designed to reduce known biases. For example, a dataset may be gathered imperfectly and may overrepresent, say, women or men relative to the general population. Perhaps only some people responded to a request for information, or perhaps the data was only gathered in one location. The embedded version can randomly exclude some of the overrepresented set to restore some balance overall.
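A minimal sketch of that random-exclusion idea follows; the group labels and record counts are invented for illustration, and real rebalancing pipelines involve far more care than this.

import random

# Drop records at random from any overrepresented group until every group
# is the same size as the smallest one.
def rebalance(records, key):
    groups = {}
    for record in records:
        groups.setdefault(record[key], []).append(record)
    smallest = min(len(group) for group in groups.values())
    balanced = []
    for group in groups.values():
        balanced.extend(random.sample(group, smallest))
    return balanced

data = [{"group": "A", "score": 0.9}] * 30 + [{"group": "B", "score": 0.4}] * 70
print(len(rebalance(data, "group")))  # 60 records: 30 from each group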
Is there anything that can be done about bias? In addition to this, there are some algorithms designed to add balance to a dataset. These algorithms use statistical techniques and AI to identify dangerous or biased correlations in the dataset. The algorithms can then either delete or rescale the data to remove some of the bias.
A skilled scientist can also design the embeddings to target the best answer. The humans creating the embedding algorithms can pick and choose approaches that can minimize the potential for bias. They can either leave off some data elements or minimize their effects.
Still, there are limits to what they can do about imperfect datasets. In some cases, the bias is a dominant signal in the data stream.
What are the most common structures for embeddings? Embeddings are designed to be information-dense representations of the dataset being studied. The most common format is a vector of floating-point numbers. The values are scaled, sometimes logarithmically, so that each element of the vector has a similar range of values. Some choose values between zero and one.
One goal is to ensure that the distances between the vectors represent the differences between the underlying elements. This can require some artful decision-making. Some data elements may be pruned. Others may be scaled or combined.
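As a small illustration of that goal, the sketch below min-max scales each dimension to the zero-to-one range and then compares embeddings by Euclidean distance; the vectors themselves are made up.

import math

# Scale each column to [0, 1], then measure how far apart two embeddings are.
vectors = [[182.0, 80.0, 9.1], [165.0, 60.0, 7.5], [201.0, 110.0, 8.2]]

mins = [min(col) for col in zip(*vectors)]
maxs = [max(col) for col in zip(*vectors)]
scaled = [
    [(v - lo) / (hi - lo) for v, lo, hi in zip(vec, mins, maxs)]
    for vec in vectors
]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

print(distance(scaled[0], scaled[1]))  # similar items should sit closer together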
While there are some data elements, like temperatures or weights, that are naturally floating-point numbers on an absolute scale, many data elements don’t fit this directly. Some parameters are boolean values, such as whether a person owns a car. Others are drawn from a set of standard values, say, the make, model and model year of a car.
A real challenge is converting unstructured text into embedded vectors. One common approach is to search for the presence or absence of uncommon words, that is, words that aren’t basic verbs, pronouns or other glue words used in every sentence. Some of the more complex algorithms include Word2vec, Latent Semantic Analysis (LSA), Latent Dirichlet Allocation (LDA) and the Biterm Topic Model (BTM).
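As a toy illustration of the simplest of these approaches, the sketch below ignores a handful of glue words and counts the remaining terms against a small fixed vocabulary; the stop-word list and vocabulary are tiny, made-up stand-ins for what real NLP libraries provide.

# Bag-of-words sketch: drop common glue words, count what remains.
STOP_WORDS = {"the", "a", "an", "is", "are", "was", "to", "of", "and", "it"}

def embed_text(sentence, vocabulary):
    words = [w for w in sentence.lower().split() if w not in STOP_WORDS]
    return [words.count(term) for term in vocabulary]

vocab = ["refund", "broken", "great", "slow"]
print(embed_text("The product is broken and the support is slow", vocab))
# -> [0, 1, 0, 1]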
Are there standards for embeddings? As AI has grown more common and popular, scientists have created and shared some standard embedding algorithms. These versions, frequently released under open-source licenses, are often developed by university researchers who share them to advance the field.
Other algorithms come directly from companies. They’re effectively selling not just their AI learning algorithms, but also the embedding algorithms for pre-processing the data.
Some better known standards are: Object2vec – From Amazon’s SageMaker. This algorithm finds the most salient parts of any data object and keeps them. It’s designed to be highly customizable, so the scientist can focus on the important data fields.
Word2vec – Google created Word2vec, an algorithm that converts words into vector embeddings by analyzing the contexts in which they appear and capturing semantic and syntactic patterns. It is trained so that words with similar meanings end up with similar vector embeddings (a short training sketch appears after this list).
GloVe – Stanford researchers built this algorithm, which learns word vectors from global statistics about how words co-occur across large text collections. The name is short for Global Vectors.
Inception – This model uses a convolutional neural network to analyze images directly and then produce embeddings based upon the content. Its principal authors came from Google and several major universities.
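To make the Word2vec entry above concrete, here is a minimal training sketch that assumes the open-source gensim library is installed; the three-sentence corpus and the hyperparameters are purely illustrative.

from gensim.models import Word2Vec

# Train tiny word embeddings; a real corpus would contain millions of sentences.
sentences = [
    ["the", "car", "drove", "down", "the", "road"],
    ["the", "truck", "drove", "down", "the", "highway"],
    ["she", "parked", "the", "car", "near", "the", "road"],
]
model = Word2Vec(sentences, vector_size=32, window=3, min_count=1, epochs=50)

print(model.wv["car"][:5])           # the first few numbers of one embedding
print(model.wv.most_similar("car"))  # words that land nearby in the vector space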
How are the market leaders creating embeddings for their AI algorithms? All of the major computing companies have strong investments in artificial intelligence and also the tools needed to support the algorithms. Pre-processing any data and creating customized embeddings is a key step.
Amazon’s SageMaker, for instance, offers a powerful routine, Object2Vec, that converts data files into embeddings in a customizable way. The algorithm also learns as it progresses, adapting itself to the dataset in order to produce a consistent set of embedding vectors. The service also supports several algorithms focused on unstructured data, like BlazingText for extracting useful embedding vectors from large text files.
Google’s TensorFlow project supports a Universal Sentence Encoder to provide a standard mechanism for converting text into embeddings. Its image models are also pre-trained to handle some standard objects and features found in images. Some teams use these as a foundation for custom training on their own particular sets of objects.
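A minimal sketch of the Universal Sentence Encoder follows; it assumes the tensorflow and tensorflow_hub packages are installed and that the publicly published version 4 module is still hosted at the URL shown.

import tensorflow_hub as hub

# Load the pretrained encoder and embed two sentences in one call.
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")
vectors = embed([
    "The delivery arrived two days late.",
    "My package showed up behind schedule.",
])
print(vectors.shape)  # (2, 512): one 512-dimensional embedding per sentence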
Microsoft’s AI research team offers broad support for a number of universal embedding models for text. Its Multi-Task Deep Neural Network (MT-DNN) model, for example, aims to create strong models that remain consistent even when working with language used in different domains. Its DeBERTa model uses more than 1.5 billion parameters to capture many of the intricacies of natural language. Earlier versions are also integrated with the Automated ML tool for easier use.
IBM supports a variety of embedding algorithms, including many of the standards. Their Quantum Embedding algorithm was inspired by portions of the theory used to describe subatomic particles. It is designed to preserve logical concepts and structure during the process. Their MAX-Word approach uses the Swivel algorithm to preprocess text as part of the training for their Watson project.
How are startups targeting AI embeddings? The startups tend to focus on narrow areas of the process so they can make a difference. Some work on optimizing the embedding algorithm themselves and others focus on particular domains or applied areas.
One area of great interest is building good search engines and databases for storing embeddings so it’s easy to find the closest matches. Companies like Pinecone.io, Milvus, Zilliz and Elastic are creating search engines that specialize in vector search so they can be applied to the vectors produced by embedding algorithms. They also simplify the embedding process, often using common open-source libraries and embedding algorithms for natural language processing.
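At the smallest possible scale, what these engines do looks like the brute-force cosine-similarity search sketched below in NumPy; production systems swap the linear scan for approximate nearest-neighbor indexes, and the stored vectors here are invented.

import numpy as np

# Return the top_k stored vectors most similar to the query, by cosine score.
def most_similar(query, index, top_k=3):
    index = np.asarray(index, dtype=float)
    query = np.asarray(query, dtype=float)
    scores = index @ query / (np.linalg.norm(index, axis=1) * np.linalg.norm(query))
    best = np.argsort(-scores)[:top_k]
    return list(zip(best.tolist(), scores[best].tolist()))

stored = [[0.9, 0.1, 0.0], [0.1, 0.8, 0.3], [0.85, 0.2, 0.1]]
print(most_similar([1.0, 0.0, 0.0], stored, top_k=2))  # rows 0 and 2 score highest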
Intent AI wants to unlock the power of network connections discovered in first-party marketing data. Their embedding algorithms help marketers apply AI to optimize the process of matching buyers to sellers.
H2O.ai builds an automated tool for helping businesses apply AI to their products. The tool contains a model creation pipeline with prebuilt embedding algorithms as a start. Scientists can also buy and sell model features used in embedding creation through its feature store.
The Rosette platform from Basis Technology offers a pre-trained statistical model for identifying and tagging entities in natural language. It integrates this model with an indexer and translation software to provide a pan-language solution.
Is there anything that cannot be embedded? The process of converting data into the numerical inputs for an AI algorithm is generally reductive. That is, it reduces the amount of complexity and detail. When this destroys some of the necessary value in the data, the entire training process can fail or at least fail to capture all the rich variations.
In some cases, the embedding process may carry all the bias with it. The classic example of AI training failure is when the algorithm is asked to make a distinction between photos of two different types of objects. If one set of photos is taken on a sunny day and the other is taken on a cloudy day, the subtle differences in shading and coloration may be picked up by the AI training algorithm. If the embedding process passes along these differences, the entire experiment will produce an AI model that’s learned to focus on the lighting instead of the object.
There will also be some truly complex datasets that can’t be reduced to a simpler, more manageable form. In these cases, different algorithms that don’t use embeddings should be deployed.
"
|
3,468 | 2,022 |
"Turning AI failure into AI success stories | VentureBeat"
|
"https://venturebeat.com/ai/turning-ai-failure-into-ai-success-stories"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Turning AI failure into AI success stories Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
It shouldn’t be much longer before the novelty of AI wears off and it becomes just another technology helping the enterprise become more efficient and more productive. But while the success rate for AI projects is improving and organizations are seeing real-world benefits in production environments, it is still proving far too difficult to get projects across the finish line.
In most cases, this is not a problem with the technology itself, but with the way it is being implemented and the general mismatch between AI’s actual capabilities and the use cases it’s expected to address. Fortunately, both of these issues can be rectified, but it will take some time for users to gain the kind of experience needed to employ AI effectively.
According to PwC , successful AI implementations share a number of key characteristics. Rather than simply unleashing the technology first on one goal and then another in a linear fashion, which is the habit for most traditional technology initiatives, a more effective approach is to direct it at three critical capabilities: business transformation, enhanced decision-making and systems and process modernization.
Not only does this holistic approach unite AI specialists, analytics teams, software engineers and others around a set of common goals, it fosters the creation of an environment built around data management, intelligent operations and the cloud. With this framework in place, organizations can then easily transition toward more practical applications like process automation, improved UX, product development and others.
Don’t get lost in AI Even at this point, it’s easy to get lost without a clear plan for what you hope to accomplish with AI and how you want to do it.
Boston Consulting Group has laid out a five-point plan for designing a successful machine learning program: Start by ensuring that the application matches the desired outcome. Even something as simple as a sales forecast can be influenced by factors other than pricing.
Use all applicable data, including data from external sources. Too often, models go awry due to the tunnel-vision of their designers.
Avoid making the model overly complex, since this often leads to improper interpretation and inaccurate results. Instead, go for simpler, more interpretable submodels that address the logic of clearly defined challenges.
Keep models focused on concrete, practical business decisions that drive real value.
Don’t place accuracy above usefulness. A model may still perform well after a period of time, but update or replace it if it is no longer relevant to current conditions.
Success is a relative term, of course, so there will never be a clear-cut line between a successful AI project and a failed one. The Harvard Business Review recently surveyed more than 100 companies across multiple business sectors and found that only 15 percent are in a leadership position when it comes to AI. These are organizations that not only have a track record of success but also have clearly defined implementation and assessment processes for their AI initiatives, all of which are evaluated and updated continually. Leaders also devote more capital to AI development – in some cases up to 60 percent more – than less successful firms.
More often than not, say Tableau’s Richard Tibbetts and Sarah Wachter, AI’s failure to meet expectations is due to those expectations being too high. Given the hype that AI has been greeted with, it’s understandable that many users think anything can be fixed with enough data. But the world is not that predictable, and while patterns may emerge in data, even AI cannot create causal or even correlated connections between events if they don’t exist in the first place. This is why project selection is probably the biggest hurdle most organizations will face, and it can only be surmounted by asking the right questions first and maintaining a clear understanding of what challenges can and cannot be addressed through predictive modeling.
Giving AI projects a try Even in the digital world, the adage about success only coming to those who try and try again holds true. AI has the advantage that it can be retrained and refocused on key tasks far more easily than traditional software, which means even its failures can produce valuable knowledge that can be used to define the next project.
And while the pressure may be on to get AI into production fast, it is important to remember that true success doesn’t come to those who do AI first, but those who do it best.
"
|
3,469 | 2,022 |
"TensorFlow now defaults to Intel oneDNN AI optimizations | VentureBeat"
|
"https://venturebeat.com/ai/tensorflow-now-defaults-to-intel-onednn-ai-optimizations"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages TensorFlow now defaults to Intel oneDNN AI optimizations Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The open-source TensorFlow machine learning library is getting faster, thanks to a collaboration between Google and Intel.
The open-source oneAPI Deep Neural Network Library ( oneDNN ) developed by Intel is now on by default in TensorFlow, a project led by Google. OneDNN is an open-source cross-platform performance library of deep learning building blocks that are intended for developers of both deep learning applications and frameworks such as TensorFlow.
According to Intel, the promise of oneDNN for enterprises and data scientists is a significant acceleration of up to 3 times for AI operations with TensorFlow, one of the most widely used open-source machine learning technologies today.
“Intel has been collaborating with TensorFlow on oneDNN feature integration over the last several years,” AG Ramesh, principal engineer for AI Frameworks at Intel, told VentureBeat.
The oneDNN library first became available as a preview opt-in feature starting with TensorFlow 2.5, which was released in May 2021. After a year of testing and positive feedback from the community, Ramesh said that oneDNN was turned on by default in the recent TensorFlow 2.9 update.
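For readers who want to compare the two code paths themselves, the toggle is exposed through an environment variable that has to be set before TensorFlow is imported; the sketch below assumes the TF_ENABLE_ONEDNN_OPTS flag described in TensorFlow's release notes and a Linux x86 build of TensorFlow 2.5 or later.

import os

# Must be set before the first TensorFlow import. On TensorFlow 2.9 the
# oneDNN optimizations are on by default; "0" turns them off, which can be
# useful when checking for small numerical differences, and "1" opts in on
# the earlier 2.5-2.8 releases.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "0"

import tensorflow as tf
print(tf.__version__)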
The oneDNN library brings AI performance improvements for model execution Ramesh explained that with oneDNN, data scientists will see performance improvements in the model execution time.
The oneDNN improvements are applicable to all Linux x86 packages and for CPUs with neural-network-focused hardware features found on 2nd Gen Intel Xeon Scalable processors and newer CPUs. Intel calls this performance optimization “software AI acceleration” and says it can make a measurable impact in certain cases.
Ramesh added that business users and data scientists will be able to access lower precision data types – int8 for inference and Bfloat16 for inference and training – to get additional performance benefits from AI accelerators such as Intel Deep Learning Boost.
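As a hedged sketch of how that bfloat16 path is usually reached from user code, the example below uses the standard Keras mixed-precision API; whether it actually speeds anything up depends on the hardware and the TensorFlow build in use.

import tensorflow as tf

# Compute in bfloat16 while keeping variables in float32.
tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
print(model.layers[0].compute_dtype)  # bfloat16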
Accelerating deep learning with oneDNN According to Slintel, TensorFlow has a market share of 37%. Kaggle’s 2021 State of Data Science and Machine Learning survey pegged TensorFlow’s usage at 53%.
However, while TensorFlow is a popular technology, the oneDNN library and Intel’s approach to machine learning optimization isn’t just about TensorFlow. Ramesh said that Intel software optimizations through oneDNN and other oneAPI libraries deliver measurable performance gains to several popular open source deep learning frameworks such as TensorFlow, PyTorch and Apache MXNet , as well as machine learning frameworks such as Scikit-learn and XGBoost.
He adds that most of the optimizations have already been up-streamed into the respective default framework distributions.
Intel’s strategy for building out AI optimizations like oneDNN The oneDNN library is part of a broad strategy at Intel to help enable AI for developers, data scientists, researchers, and data engineers.
Wei Li, vice president and general manager of AI and Analytics, told VentureBeat that Intel’s goal is to make it as easy as possible for any type of user to accelerate their end-to-end AI journey from the edge to the cloud – no matter what software they want to use, running on Intel. Li said that having an open ecosystem across software offerings helps to enable innovation. He noted that Intel is doing work ranging from contributions at the language level with Python, partnering and optimizing the industry frameworks like PyTorch and TensorFlow, to releasing Intel developed tools that increase productivity like the OpenVINO toolkit. “Intel recently announced Project Apollo at Vision which brings even more, new open source AI reference offerings that will accelerate the adoption of AI everywhere across industries,” Li said.
"
|
3,470 | 2,022 |
"Nvidia's AI-powered supercomputers advance nuclear fusion research | VentureBeat"
|
"https://venturebeat.com/ai/nvidias-ai-powered-supercomputers-advance-nuclear-fusion-research"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nvidia’s AI-powered supercomputers advance nuclear fusion research Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The most powerful supercomputers on the planet are used to perform all manner of complex operations. Increasingly, they are used to enable artificial intelligence for research that could one day impact billions of people.
The world’s fastest and most powerful high-performance computing (HPC) supercomputers are front and center at the International Supercomputing Conference (ISC) which runs from May 29 to June 2 in Hamburg, Germany. As part of the ISC event, Nvidia will provide insight about its latest HPC systems and the use cases they enable.
“HPC plus AI is really the transformational tool of scientific computing,” Dion Harris, lead technical product marketing manager for accelerated computing, said in a media briefing ahead of ISC. “We talk about exascale AI because we do believe that this is going to be one of the key pivotal tools to drive scientific innovation and any data center that’s building a supercomputer needs to understand how their system will perform from an AI standpoint.” U.S.-based Grace Hopper superchip supercomputer coming to Los Alamos National Laboratory Nvidia first announced its Grace ARM-based CPUs in April 2021, with a goal of including them in HPC deployments. That goal is now coming to fruition.
At ISC 2022, Nvidia is announcing that Los Alamos National Laboratory and Hewlett Packard Enterprise (HPE) are building Venado, which is the first U.S.-based supercomputer to use the Grace chip architecture.
The Venado supercomputer uses a combination of Grace and Grace Hopper superchips in a system that is expected to deliver 10 exaflops of AI performance. The Venado system will be used for materials science, renewable energy and energy distribution research.
Nvidia-powered AI enables brain imaging research Brain imaging is among the HPC and AI use cases Nvidia is announcing at ISC 2022.
King’s College London is using the Nvidia-powered Cambridge-1 system, the most powerful supercomputer in the United Kingdom, along with the open-source MONAI framework that is optimized for medical imaging use cases.
The powerful hardware and AI software has been used to produce the world’s largest database of synthetic brain images.
That is important because of the amount of AI-driven research aimed at identifying conditions such as Alzheimer’s or dementia, Harris explained. “But in order to train those models, you need large databases,” he said.
There are many privacy concerns when using real patient data, which is why it’s important for researchers to have access to synthetic data , he added.
“This is a true example of HPC not just delivering speeds and feeds, but really making real contributions to the scientific and research community,” Harris said.
Modeling a nuclear fusion reactor As people around the world try to find solutions to the challenges of global warming , one of the primary strategies is to identify renewable energy sources.
One such source could be nuclear fusion reactors. Today’s nuclear reactors are fission-based and generate radioactive waste. The promise of fusion is that it can deliver large amounts of energy, without the same waste as fission.
At ISC 2022, Nvidia is announcing that the U.K. Atomic Energy Authority (AEA) is using the Nvidia Omniverse simulation platform to accelerate the design and development of a full-scale fusion reactor.
“With the Nvidia Omniverse, researchers could potentially build a fully functioning digital twin of a reactor, helping ensure the most efficient designs are selected for construction,” Harris said.
The goal for Omniverse and the digital twin is to have an AI-generated replica of the fusion reactor system. The U.K. AEA is also planning to simulate the physics of the Fusion plasma containment itself.
The simulation will be done with the Nvidia Modulus AI-physics framework to actually model how the fusion reaction and its containment can occur.
“The holy grail of fusion energy is being able to not just create a fusion reaction, but have it be sustainable,” Harris said. “We really think this will be a path towards sustainable energy.”
"
|
3,471 | 2,022 |
"Los Alamos supercomputer will use Nvidia's Grace Hopper processors | VentureBeat"
|
"https://venturebeat.com/ai/los-alamos-supercomputer-will-use-nvidias-grace-hopper-processors"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Los Alamos supercomputer will use Nvidia’s Grace Hopper processors Share on Facebook Share on X Share on LinkedIn This supercomputer uses Nvidia's Grace Hopper chips.
Nvidia said that a new supercomputer being made by the Los Alamos National Laboratory will use the chipmaker’s Grace and Grace Hopper superchips.
The central processing units (CPUs) will be part of an HPE Cray EX supercomputer dubbed Venado. The chips are also being used by other computer makers to build the next generation of servers turbocharging AI and HPC workloads for the exascale era, Nvidia said.
Atos, Dell Technologies, Gigabyte, HPE, Inspur, Lenovo and Supermicro are planning to deploy servers built with the Grace and Grace Hopper CPUs.
All these new systems benefit from the just-announced Grace and Grace Hopper designs in the Nvidia HGX platform, which provide manufacturers the blueprints needed to build systems that offer high performance and twice the memory bandwidth and energy efficiency of today’s leading data center CPU.
“As supercomputing enters the era of exascale AI, Nvidia is teaming up with our [hardware] partners to enable researchers to tackle massive challenges previously out of reach,” said Ian Buck, vice president of hyperscale and HPC at Nvidia, in a statement. “Across climate science, energy research, space exploration, digital biology, quantum computing and more, the Nvidia Grace CPU Superchip and Grace Hopper Superchip form the foundation of the world’s most advanced platform for HPC and AI.” Supercomputing centers in the U.S. and Europe will be among the first with systems featuring the superchips.
Los Alamos National Laboratory’s Venado will be the first system in the U.S. to be powered by Nvidia Grace CPU technology. The Venado is a heterogeneous system that will feature a mix of Grace CPU superchip nodes and Grace Hopper superchip nodes for a wide and emerging set of applications. When completed, the system is expected to exceed 10 exaflops of AI performance.
“By equipping LANL’s researchers with the performance of Nvidia Grace Hopper, Venado will continue this laboratory’s commitment to pushing the boundaries of scientific breakthroughs,” said Irene Qualters, associate director for Simulation and Computation at LANL, in a statement. “Nvidia’s accelerated computing platform and expansive ecosystem are removing performance barriers, allowing LANL to make new discoveries that will benefit the nation and society as a whole.” Alps, the Swiss National Computing Center’s new system, also to be built by HPE using the HPE Cray EX supercomputer, will use the Grace CPU Superchip to enable breakthrough research in a wide range of fields. It will serve as a general-purpose system open to the research community in Switzerland, as well as the rest of the world.
The Nvidia Grace CPU Superchip features two Arm-based CPUs, connected coherently through the high-bandwidth, low-latency, low-power Nvidia NVLink-C2C interconnect. This design features up to 144 high-performance Arm Neoverse cores with scalable vector extensions and a 1 terabyte-per-second memory subsystem.
The Grace CPU Superchip interfaces with the latest PCIe Gen5 protocol to enable maximum connectivity with the highest-performing GPUs, as well as with Nvidia ConnectX-7 smart network interface cards and Nvidia BlueField-3 DPUs for secure HPC and AI workloads.
The Grace Hopper Superchip pairs an Nvidia Hopper GPU with an Nvidia Grace CPU in an integrated module connected with NVLink-C2C to address HPC and giant-scale AI applications.
"
|
3,472 | 2,022 |
"How manufacturing companies can use digital twins to remain competitive | VentureBeat"
|
"https://venturebeat.com/ai/how-physical-product-focused-companies-can-use-digital-twins-to-remain-competitive"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How manufacturing companies can use digital twins to remain competitive Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Companies that make physical products sometimes struggle to stay relevant as digital natives find creative ways to capture the highest-margin fringes of age-old businesses. One specific challenge these companies face is that digital-native businesses have developed advanced data processing capabilities to create better customer experiences and identify new opportunities. This is much harder for established physical goods industries, which rely on legacy systems and manufacturing equipment.
Digital twins could help bridge this gap between legacy systems and modern customer experiences, Michael Carroll, VP at Georgia-Pacific, predicted at the Digital Twin Summit.
Carroll leads corporate transformation strategy development at the paper and forest products giant.
He argues that physical products industries don’t have suitable mechanisms for dealing with the exponential growth in data.
Most business leaders he talks to know that data is growing, but they take a linear rather than exponential perspective. This limits the ability to capture value from new data streams like the IoT, ecommerce services, manufacturing equipment and customer interactions.
The permission bottleneck Business leaders also face the challenge of implementing a permission-based approach to help integrate information technology (IT) and operational technology (OT) used for managing physical machines.
To do so, business and engineering teams must ask the IT department for access to digital representations of the assets they manage. Then, the IT department needs to ask for permission to get more data from the physical assets.
“We end up in a permission asking cycle in a world that is growing exponentially,” Carroll said.
He observed that in the mid-1970s, the bulk of the S&P 500 was made of companies whose tangible assets made up 85% of their value. But today, the balance between tangible assets like goods created in factories and intangible assets like brands and experiences is reversed.
The leading companies are systems-based rather than functions and process-based companies. They have created connected ecosystems that generate, aggregate and analyze customer, market and supplier information. As a result, they understand what their customer wants before competitors do.
The exponential model Established businesses need to take a similar approach that extends these traditional tools to support digital twins of real-world goods, manufacturing processes and marketplaces. To do this at scale, the IT organization needs to plan a more self-service and democratized approach to provision, update and leverage digital twins.
“This means that in order to create value at the rate that data grows, which is exponential, you might have to reconstruct yourself so that you don’t have to ask permission to go create value,” Carroll said.
This allows business executives and operations teams to stand up new devices, create new applications or change configurations on their own.
“Now they are responsible for the digital representation of the thing they are in charge of,” he said.
This new approach could also allow enterprises to create digital twins powered by artificial intelligence (AI) to understand and respond to customer values and decisions. “We do not know a lot of the answers, except to say that we’re pretty sure that tomorrow is about creating value in the exponential age and creating value at scale, with data growing exponentially,” Carroll said. “Digital twins will be a huge part of that, and it will be powered by AI.”
"
|
3,473 | 2,022 |
"How NLP is overcoming the document bottleneck in digital threads | VentureBeat"
|
"https://venturebeat.com/ai/how-nlp-is-overcoming-the-document-bottleneck-in-digital-threads"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How NLP is overcoming the document bottleneck in digital threads Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Enterprises are increasingly adopting new digital technologies to help streamline and automate the design and manufacturing of physical products. Digital twins help organize and share much of this technical data. Digital threads connect changes to this data across a product’s or process’s lifecycle.
However, some of the most crucial manufacturing data is managed as PDF documents and handwritten notes, also known as “unstructured data.” At the Digital Twin Summit, executives from XSB, an industrial artificial intelligence (AI) company, explained how natural language processing (NLP) techniques are bridging the gap between text documents, digital twins and digital threads.
“We are using an ensemble of artificial intelligence technologies, to basically read the document to extract the information and to infer additional information,” said Rupert Hopkins, CEO of XSB.
For example, a document may reference a material specification only used in aluminum casting workflows. The extracted data from these documents could also help plan out lead time, new work instructions, or supply chain requirements.
An old-fashioned process Andrew Bank, strategy and business development manager at XSB, said that manufacturers are increasingly digitizing product information, accounting data and customer data. But other critical engineering data ends up in PDF files, including product specs, drawings and work instructions. These documents include critical data like tables, graphs, equations and references to other documents.
“A static document is a terrible container for data,” said Bank.
There are often dozens of explicit and implicit references between documents — which creates a bottleneck every time an engineer needs to figure out how to respond to a new customer order or supply chain constraint. A single line specifying a new requirement might mean hopping through ten documents to figure out what will be required to tune machine settings, procure new raw materials and adjust work processes.
Engineers today typically copy and paste or re-key data from one document or system to another. A minor adjustment on the factory floor or the supply chain can propagate across all the interrelated requirements.
“If there’s no dynamic live link between those derivatives downstream and the authoritative source data change, management and impact assessment become almost impossible and very difficult at the least,” Bank said.
Creating a data graph Graph databases are an ingredient of digital twins, since they provide a way to connect context across different data sources and users. XSB has developed tools that use AI and semantic ontologies to transform a static collection of data into digital models represented using a graph structure.
Bank claimed that some manufacturing-intensive organizations can save up to 65% when they move from static documents to these digital models.
The data is stored in the Swiss Knowledge Graph , which attributes meaning and context to each piece of data. This allows teams to pull this data into the product lifecycle management (PLM), manufacturing execution systems (MES) and Microsoft Office.
PLM records often refer to industry standards stored in other systems or attached as file attachments. XSB has developed plugins for PLM tools like PTC Windchill and Siemens Teamcenter that bring metadata from the document into the PLM interface.
XSB claims one customer is using the tool to automate a process for approving configuration changes from within SharePoint. It says others are using it to ensure work instructions are automatically updated in response to new customer requests. Additionally, XSB says another component testing company uses the tool to pull test requirements out of technical data packages to ensure labs test the correct materials.
Connecting the thread Pulling requirements out of complex engineering documents can be tedious. Engineering requirements are often expressed using “shall” statements. A statement such as, “The part shall be made of brass, with a finish of desert sand and a size of AA,” actually specifies three different requirements that all need to be analyzed and accounted for separately.
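As a toy illustration only (this is not XSB's method), the snippet below breaks that compound statement into separate fragments so each obligation can be tracked on its own; real requirement extraction handles far messier language than this.

import re

# Split one compound "shall" statement into individual requirement fragments.
statement = "The part shall be made of brass, with a finish of desert sand and a size of AA."

subject, _, obligations = statement.partition(" shall ")
fragments = [f.strip() for f in re.split(r",\s*|\s+and\s+", obligations.rstrip(".")) if f.strip()]
for i, fragment in enumerate(fragments, 1):
    print(f"Requirement {i} ({subject}): {fragment}")
# Requirement 1 (The part): be made of brass
# Requirement 2 (The part): with a finish of desert sand
# Requirement 3 (The part): a size of AA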
Hopkins said most humans only demonstrate 75% accuracy at pulling all the requirements out of a complex document. He claims that XSB achieves 80-90% accuracy on loosely trained systems and 95-100% accuracy when combined with statistical process controls.
Digital models help inform a digital thread about the relationships between the regulations, materials, processes and specifications associated with a product. This can help to plan more efficiently around changes to work instructions or material regulations.
“If all of a sudden cadmium is being pulled from the supply chain, we need to alert the twin to that kind of change and those changes today are mostly document-driven,” Hopkins said. This mirrors how companies like John Snow Labs use NLP to digitize medical records in healthcare. More importantly, it illustrates how NLP technology could play a valuable role in creating digital twins and digital threads in other industries such as construction, power, telecommunications and logistics.
"
|
3,474 | 2,022 |
"Elon Musk weighed in on artificial general intelligence — should companies care? | VentureBeat"
|
"https://venturebeat.com/ai/elon-musk-weighed-in-on-artificial-general-intelligence-should-companies-care"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Elon Musk weighed in on artificial general intelligence — should companies care? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Yesterday, Elon Musk decided to enter the charged Twitter debate on whether artificial general intelligence (AGI), or the ability for AI to understand or learn any intellectual task that a human being can, is imminent.
He put it this way: Of course, Musk’s tweets should always be taken with truckloads of salt. The world’s richest man is also known for some of the world’s worst Twitter takes, from manipulating Tesla stock to giving Kanye West’s 2020 presidential run his “full support” — not to mention the recent “will he or won’t he,” regarding his deal to buy Twitter itself.
An open letter to Elon Musk and a $100,000 challenge Still, Musk’s comments, as usual, could not simply be ignored. In a new post on his Substack, Gary Marcus, author of Rebooting.AI and a big (some say controversial) driver of the AGI critique on Twitter, wrote an open letter to Elon Musk, offering to place a $100,000 bet on whether AGI would appear by 2029.
And Kevin Kelly, co-founder of LongBets, “a public arena for enjoyably competitive predictions, of interest to society, with philanthropic money at stake,” told Marcus on Twitter that “as co-founder, I’ll give [you] all [the] help you might need to place this bet on Longbets.” So far, Marcus said he isn’t surprised that he hasn’t heard back from Musk – but added that the Tesla CEO’s comments are unhelpful in the larger discussions about AI.
“Musk’s pronouncements on AGI just hasten the all-in-rush on current technology, when we actually probably need to take a step back to understand where we are and face the difficult problems realistically,” said Marcus — pointing out that the hardest problems are around getting machines to reason about the everyday world and to have common sense.
An ‘avalanche of misinformation’ about AI “There is so much hype about AI and so much money being invested, but invested in the wrong things,” said Marcus. “Things like DALL-E 2 and GPT-3 are fun to play with, but they are likely to create an avalanche of misinformation and don’t actually represent the real hard problems in existing AI technologies, like those of racial, social, and gender inequality that have been documented by people like Dr. Abeba Birhane, Mozilla senior fellow in Trustworthy AI.” There is also a natural tendency to look at AI, and AGI more specifically, as “something magical,” he added. This, he claimed, has deluded enterprise businesses and government policymakers.
“It leads people to imagine AI as a one-size-fits-all universal solvent, which it isn’t,” he said. “I like to tell businesses that AI is really good right now in the part of the curve where you have a lot of training data, but not so good in the long tail.” Overall, people have invested a great deal of money into AI, Marcus said, based on an unsound premise – and a focus on AGI being around the corner just adds fuel to that fire. “Things might change in fifty or even twenty years, but expecting full wholesale magic is unrealistic,” he explained. “Managing investment in AI means being realistic and not automatically believing press clippings.” What artificial general intelligence won’t be able to do by 2029 The new post on Marcus’ Substack, called “Dear Elon Musk, here are five things you might want to consider about AGI,” gets into the details of what Marcus really believes will happen by 2029.
“AGI is a problem of enormous scope, because intelligence itself is of a broad scope,” he said in the post, adding that five important things will not occur by the end of the decade: “In 2029, AI will not be able to watch a movie and tell you accurately what is going on.” “In 2029, AI will not be able to read a novel and reliably answer questions about the plot.” “In 2029, AI will not be able to work as a competent cook in an arbitrary kitchen.” “In 2029, AI will not be able to reliably construct bug-free code of more than 10,000 lines from natural language specification or by interactions with a non-expert user. [Gluing together code from existing libraries doesn’t count.]” “In 2029, AI will not be able to take an arbitrary proof in the mathematical literature written in natural language and convert it into a symbolic form suitable for symbolic verification.” Waiting for Elon Musk to respond Marcus said it would be “terrific fun” if Musk actually responded to his challenge of a bet on whether AGI will come to fruition by 2029. Meanwhile, several others on Twitter have offered to match Marcus’ $100,000 bet, with the total offer now standing at 500K.
“It would be great to have a public debate, with or without cash on the line,” he said. “The more the public understands about the realities and challenges of AI, the better.”
"
|
3,475 | 2,022 |
"Capgemini finds gap between digital twin potential and actuality | VentureBeat"
|
"https://venturebeat.com/ai/capgemini-finds-gap-between-digital-twin-potential-and-actuality"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Capgemini finds gap between digital twin potential and actuality Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Digital twin implementations are already demonstrating a 76% cost reduction and a 68% increase in customer engagement. And digital twin adoption is predicted to increase by 36% in the next five years. However, only 13% of organizations have developed full-scale digital twin strategies.
These are the findings of a recent Capgemini survey on digital twin adoption. The systems-integration consultancy launched the survey to tease apart why so many companies struggle with the technology, despite the tremendous gains of early adopters.
The top challenges for bridging this gap include developing a long-term roadmap, cultivating the right skills and building the appropriate partnerships. The payoff for doing these right is immense. Leading firms are seeing a 15% increase in sales, turnaround time and operational efficiency and a 25% improvement in system performance.
Why digital twins? Capgemini had a previous focus on AI, edge computing , IoT and analytics, which are all crucial for digital transformation. Brian Bronson, president of Americas and APAC at Capgemini Engineering, told VentureBeat they realized that digital twins are also central to intelligent industry trends, such as changing customer preferences, growing regulatory pressures and increased concerns around carbon emissions.
One big driver is increasing concerns about sustainability. Capgemini found that digital twin leaders realized an average of 16% improvement in sustainability due to digital twins. Bronson said digital twins enhance scalability and promote the integration of products and services.
Sustainability benefits range from process efficiencies and reduced emissions to the ability to test the viability of new, sustainable materials.
“We are seeing many applications across industries such as urban planning, infrastructure, energy and utilities, auto, aviation, consumer products and healthcare,” Bronson said.
Bridging the gap Despite the tremendous promise, many organizations struggle to get digital twin projects off the ground.
“We found that although 55% of organizations consider digital twins strategic in digital transformation, 42% lack vision on how to deploy them,” Bronson said.
A mismatch between long-term vision and operational governance creates various delays. For example, inefficient program management and lack of governance can derail the launch of a digital twin.
New skills required Digital twins are built across multiple interconnected disciplines, which requires a unique set of skills that is not yet common. Jiani Zhang, chief software officer at Capgemini Engineering, told VentureBeat that enterprises need to engage or cultivate experts in data analysis, IoT, design and industry.
“Industry specialists must be comfortable with reinventing how people will interact with digital versions of what they know well,” Zhang said.
Designers need to understand and then express the value in the data that is collected, in addition to knowing the user and tasks intimately. IoT architects should consider the future needs and growth of the systems they build and advocate these requirements to the customer in the form of business models. Data scientists need to experiment, strategize and collaborate with business experts, designers and engineers.
“While we can certainly attract folks with the potential to do this work, it is very challenging to grow the kind of talent that can be both hyper-focused on their areas and broad enough to work well under the requirements of digital twin applications,” Zhang said.
"
|
3,476 | 2,022 |
"Battery digital twin standard to drive EV sustainability | VentureBeat"
|
"https://venturebeat.com/ai/battery-digital-twin-standard-to-drive-ev-sustainability"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Battery digital twin standard to drive EV sustainability Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Battery performance, supply chain challenges and their carbon footprint are some of the biggest bottlenecks for the electrical vehicle (EV) industry. A new Battery State of Health (SOH) Standard will make it easier to create digital twins of different battery designs and the processes that produce them to address all these challenges.
The standard supports tools to ensure EV batteries are produced, distributed, maintained and recycled safely and sustainably. This promises to open the door to an array of second and third-life battery uses, such as creating decentralized energy storage systems. It will also allow EV owners to check whether a battery needs to be recharged or replaced and help used car buyers estimate future replacement costs.
The new standard is a foundational component of the Mobility Open Blockchain Initiative (MOBI) launched in 2018. MOBI’s founding members include Accenture, Aioi Nissay Dowa Insurance Services USA, BMW, Bosch, Ford, General Motors, Hyperledger, IBM, IOTA Foundation, Groupe Renault, ConsenSys, ZF Friedrichshafen AG and many more. It has since grown to over one hundred members.
Before MOBI’s launch, many mobility and tech communities were experimenting with blockchains and building proofs-of-concept to demonstrate the technology. “These companies found that putting a vehicle, data, or service on a chain was easy, but scaling was hard,” MOBI cofounder and director Tram Vo told VentureBeat. “Without common, agreed-upon standards of identifying things, sharing data and transacting within a business network, the technology itself had little use for global enterprise applications.” Measuring health Battery SOH is defined as the ratio of a battery’s total maximum capacity at any given time to its beginning-of-life (or rated) capacity. Vo said that when the ratio dips below 80%, the battery has reached its end of life and needs to be replaced; the original pack is then often repurposed for a second life. Because these batteries consist of cells, modules and a pack, the same definition of SOH carries over to individual components in the pack.
Battery SOH can also be calculated using impedance (or resistance) and represents the thermal limit of the battery. When the SOH is measured using capacity, it is also called “capacity fade” as maximum capacity decreases over time.
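As a minimal illustration of the capacity-based definition above, the sketch below computes SOH and applies the 80% end-of-life threshold described by Vo; the function names and sample figures are illustrative only and are not part of the MOBI standard.

```python
def state_of_health(current_max_capacity_kwh: float, rated_capacity_kwh: float) -> float:
    """Capacity-based SOH: today's maximum capacity divided by the rated
    (beginning-of-life) capacity."""
    return current_max_capacity_kwh / rated_capacity_kwh

def needs_replacement(soh: float, end_of_life_threshold: float = 0.80) -> bool:
    """Per the description above, a pack below roughly 80% SOH has reached
    the end of its first life."""
    return soh < end_of_life_threshold

# Hypothetical example: a 100 kWh pack that now holds at most 76 kWh
soh = state_of_health(76.0, 100.0)  # 0.76, i.e. 24% "capacity fade"
print(f"SOH = {soh:.0%}, replace: {needs_replacement(soh)}")
```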
Vo said the ability to measure and track battery SOH would enable EV owners to recharge/recycle batteries in a timely manner and help eliminate range anxiety. This promises to drive more potential buyers to purchase new and used EVs.
SOH tracking will also open an array of second and third-life uses for batteries and unlock the potential for more seamless decentralized energy storage systems by fostering greater visibility within the battery lifecycle. SOH tracking will also give stakeholders valuable insights into how batteries are affected by various factors and may help manufacturers design more durable batteries.
“SOH tracking will allow us to extend the battery life cycle, expand primary and secondary EV markets and harness unused energy to power the grid, enabling a more circular distribution of renewable energy around the globe and streamlining the battery recycling process,” Vo said.
Vetting battery data The new standard will utilize a decentralized data network for authenticating battery data across stakeholders. The battery identity number (BIN) standard specifies the format, content and physical requirements for a globally unique identity of battery packs. Vo said it is like a vehicle identification number (VIN) for vehicles.
It provides an indelible BIN on battery packs to facilitate 1) battery identification information retrieval; 2) accurate and efficient battery recall campaigns; 3) a global battery passport; 4) traceability across the battery component supply chain; 5) recycling certification; 6) battery swapping and 7) life cycle traceability.
The BIN is composed of a battery manufacturer identifier (BMI), a battery descriptor section (BDS) and a battery information section (BIS), each of which denotes a battery’s specific characteristics, imbuing it with a unique and traceable identity.
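To make the structure concrete, here is a toy data model of a BIN with the three sections named above (BMI, BDS and BIS); the field contents, lengths and separator are assumptions for illustration, not the actual format defined by the standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BatteryIdentityNumber:
    """Toy model of a BIN with the three sections named in the standard.
    Section contents and the dash separator are illustrative, not the spec."""
    bmi: str  # battery manufacturer identifier
    bds: str  # battery descriptor section (e.g., chemistry, pack type)
    bis: str  # battery information section (e.g., plant, serial number)

    def __str__(self) -> str:
        return f"{self.bmi}-{self.bds}-{self.bis}"

pack_id = BatteryIdentityNumber(bmi="ACME1", bds="NMC-PACK-75", bis="P2022-000123")
print(pack_id)  # ACME1-NMC-PACK-75-P2022-000123
```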
The group is also working on a battery Self-Sovereign Digital Twin™ (SSDT). It provides a virtual representation of a battery and its associated data anchored in a decentralized trust network using W3C’s Decentralized Identifiers (DIDs) Standard. MOBI’s community plans to use the Integrated Trust Network (ITN) for exchanging data.
Vo said this would enable authenticated access to SOH and other battery data via a computer or mobile device. The Battery SSDT also stores a combination of static and real-time data to automatically log a battery’s journey through the value chain, from raw material sourcing and manufacturing to use and recycling.
The decentralized trust network ensures this data is tamper evident. The controller of the asset can also choose who to share this data with and how much to share. This way, stakeholders throughout the value chain can communicate across organizational lines in a privacy-preserving, secure and trusted manner. Users benefit from access to a richer account of a product’s history while protecting their privacy.
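Tamper evidence of the kind described here is commonly achieved by chaining cryptographic hashes of successive records, so that altering any earlier entry invalidates everything that follows. The sketch below is a generic illustration of that idea, not MOBI's or the ITN's actual implementation.

```python
import hashlib
import json

def record_hash(record: dict, previous_hash: str) -> str:
    """Hash the record together with the previous hash, so any later edit to
    an earlier record changes every hash that follows it."""
    payload = json.dumps(record, sort_keys=True) + previous_hash
    return hashlib.sha256(payload.encode()).hexdigest()

# Hypothetical lifecycle records for one pack
history = [
    {"bin": "ACME1-NMC-PACK-75-P2022-000123", "event": "manufactured", "soh": 1.00},
    {"bin": "ACME1-NMC-PACK-75-P2022-000123", "event": "in-service check", "soh": 0.91},
    {"bin": "ACME1-NMC-PACK-75-P2022-000123", "event": "second-life transfer", "soh": 0.78},
]

prev = "0" * 64  # genesis value
for rec in history:
    prev = record_hash(rec, prev)
print("chain head:", prev)  # changes if any earlier record is altered
```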
The supply of any commodity is complex and crosses many geographical boundaries, companies, raw materials and parts. This distributed digital twin approach could help to improve transparency and collaboration between stakeholders.
“We believe a clever combination of legacy systems anchoring data to be shared with stakeholders on a federated system along with decentralized identities will allow the stakeholders to collaborate at a much higher level,” Vo said.
"
|
3,477 | 2,022 |
"AMD powers the world's most powerful supercomputer | VentureBeat"
|
"https://venturebeat.com/ai/amd-powers-worlds-most-powerful-supercomputer"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AMD powers the world’s most powerful supercomputer Share on Facebook Share on X Share on LinkedIn The HPE Cray Frontier uses AMD Epyc processors.
Advanced Micro Devices said that its Epyc processors power the most powerful supercomputer in the world.
The Santa Clara, California-based chipmaker also said that its Epyc processors power five of the top 10 most-powerful supercomputers in the world and eight of the ten most efficient supercomputers, according to the latest Top500 Supercomputer and Green500 lists. AMD has 94 systems on the top 500 list. The fastest system is made by HPE.
Oak Ridge National Laboratory’s (ORNL) Frontier system submitted its very first score of 1.1 exaflops to the Top500 list, making it the world’s fastest supercomputer and the first to break the exaflop barrier, AMD said. This score is double that of the No. 2 system and greater than the sum of the next eight systems on the latest Top500 list.
In addition, the Frontier test and development system (TDS) also secured the top spot on the Green500 list, delivering 62.68 gigaflops/watt power-efficiency from a single cabinet of 3rd Gen AMD Epyc processors and AMD Instinct MI250x accelerators. Frontier’s mixed-precision computing performance clocked in at 6.86 exaflops, as measured by the High-Performance Linpack-Accelerator Introspection, or HPL-AI, test.
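The Green500 ranks systems by sustained floating-point performance per watt. As a back-of-the-envelope illustration of that metric, the snippet below simply divides performance by power; the single-cabinet figures are assumptions chosen so the result lands near the 62.68 gigaflops/watt reported for the Frontier TDS.

```python
def gflops_per_watt(rmax_petaflops: float, power_kw: float) -> float:
    """Green500-style efficiency: sustained performance divided by power draw."""
    gflops = rmax_petaflops * 1e6  # 1 petaflop = 1,000,000 gigaflops
    watts = power_kw * 1e3
    return gflops / watts

# Hypothetical single-cabinet numbers for illustration only
print(f"{gflops_per_watt(rmax_petaflops=19.2, power_kw=306.3):.2f} GFlops/W")
```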
The next steps for Frontier include continued testing and validation of the system, which remains on track for final acceptance and early science access later in 2022 and will open for full science at the beginning of 2023.
In other AMD Epyc and AMD Instinct MI200 systems, CSC’s LUMI supercomputer is third on the Top500 list with 152 petaflops of performance and third on the Green500 list with 51.6 gigaflops/watt power-efficiency, and the Adastra system at GENCI-CINES is tenth on the Top500 list and fourth on the Green500 list. These systems continue to highlight the performance and efficiency capabilities of the AMD Instinct accelerators at a node, cabinet and system level.
Additionally, the Top500 and Green500 lists showcase the rapidly growing preference for AMD solutions across the HPC industry. On the Top500 list, AMD products power 94 total systems, an increase of 95% year-over-year, and AMD Instinct MI200 accelerators made their first entry to the Top500 list with seven systems.
“We are excited that AMD Epyc processors and AMD Instinct accelerators power the world’s fastest, most energy-efficient, and the first supercomputer to break the double-precision exaflop barrier,” said Forrest Norrod, senior vice president of the datacenter solutions group at AMD, in a statement. “Innovation and delivering more performance and efficiency for supercomputers is critical to addressing the world’s most complex challenges. AMD EPYC processors and AMD Instinct accelerators continue to push the envelope in high-performance computing, providing the performance needed to advance scientific discoveries and enable supercomputers to break the exascale barrier.” The performance delivered by this single generation of AMD Instinct-based systems almost equals the combined flops of the remaining 161 accelerated systems on the Top500 list. On the Green500 list, AMD Epyc processors and AMD Instinct accelerators now power the four most efficient supercomputers in the world. Beyond that, AMD products are in eight of the top ten and 17 of the top 20 most efficient systems.
“The Frontier supercomputer, powered by AMD and HPE, represents a massive step forward for both science and for the HPC industry,” said Bronson Messer, director of science at the Oak Ridge Leadership Computing Facility, in a statement. “Our collaboration with AMD has been critical for us to ensure that we deploy the world’s leading platform for computational science. The Frontier supercomputer taps into the combined performance of enhanced AMD CPUs and AMD Instinct accelerators, along with an enhanced AMD ROCm 5 open software platform, to deliver the performance researchers need to carry out scientific research for the good of all mankind.” AMD is making headway in high-performance computing across key research areas including manufacturing, life sciences, financial services, climate research and more. AMD Epyc processors now power Thailand’s National Science and Technology Development Agency’s latest supercomputer, providing the computing horsepower to advance research in medicine, energy sources, weather forecasting and more. Additionally, The Ohio Supercomputer Center (OSC) recently announced Ascend, a new HPC cluster comprised of Dell Technologies PowerEdge servers powered by AMD Epyc processors.
"
|
3,478 | 2,022 |
"Arm announces high-performing internet of things (IoT) hardware | VentureBeat"
|
"https://venturebeat.com/technology/arm-announces-new-iot-hardware"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Arm announces high-performing internet of things (IoT) hardware Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Earlier this week, Arm debuted several new products for its internet of things (IoT) portfolio, including its highest-performing Cortex-M microcontroller yet. The new updates span Arm’s Total Solutions for IoT roadmap and target applications such as cloud-native edge devices and voice recognition. Overall, Arm’s goal is to support the IoT ecosystem, which ranges from sensors to industrial applications.
Total Solutions for IoT Arm launched its Total Solutions for IoT half a year ago with plans to deliver a full-stack solution that accelerates IoT development. The platform combines IP, software, machine learning (ML) and other tools for product design. As is common in Arm’s business model, the company is focused on providing common hardware IP that allows developers to innovate in areas of differentiation. Here, the core of its IoT platform is Arm Corstone, a pre-integrated and pre-verified IP subsystem.
Arm is now launching two new Total Solutions. The first, Total Solution for Cloud Native Edge Devices, is based on Arm’s highest-end Corstone-1000 platform, which features Cortex-A processors such as the Cortex-A53 but also leverages Cortex-M for the most efficient performance. The company says the addition of the Cortex-A32 gives IoT developers the capability to run operating systems like Linux as well as application-class workloads. Potential devices that could benefit from the newly introduced solution include wearables, gateways and smart cameras. In addition, the platform contains a hardware secure enclave to protect sensitive data.
The company’s second release, the Total Solution for Voice Recognition, is based on the Corstone-310 subsystem, which contains the new Cortex-M85 and the Ethos-U55 NPU to deliver what Arm claims is its highest performance MCU-based design yet. This solution could be used to enhance technologies like smart speakers, thermostats, drones and even factory robots.
The new Cortex-M85 is Arm’s highest-performing Cortex-M processor to date, with a claimed uplift of 30% over the Cortex-M7 and 20% for ML workloads. It supports the Armv8.1-M instruction set, which features Arm Helium technology for endpoint ML and digital signal processing (DSP) workloads. With Helium, performance improves anywhere from five to 15 times thanks to new, low-precision scalar and vector instructions.
The Cortex-M85 also features Arm TrustZone support for enhanced security. Arm says it achieves the PSA Certified Level 2 security baseline for IoT deployments.
“The IoT runs on Arm and we have a responsibility to create greater opportunities for IoT innovation and scale by continually raising the bar on performance, simplified development and software reuse for our ecosystem,” said Mohamed Awad, vice president of IoT and embedded technology at Arm.
Virtual hardware This week, Arm also revealed several new virtual devices to further expand its virtual hardware. The additions include the Corstone platforms and seven new Cortex-M processors, up to the Cortex-M33. Arm also reports that it’s expanding its library with third-party hardware from partners such as NXP, ST Microelectronics and Raspberry Pi.
Arm Virtual Hardware, which launched together with its Total Solutions last fall, enables software development in advance of silicon availability through a cloud-based offering. This allows the Arm ecosystem to adopt cloud-based development.
Open IoT Arm also launched Project Centauri to foster standardized IoT development. To that end, Arm has announced that it’s delivering the first release of the Open IoT SDK Framework, which contains the new Open-CMSIS-CDI software standard. This standard defines a Common Device Interface (CDI) for the Cortex-M ecosystem. Arm says that eight industry players are already involved, with cloud service providers, ODMs and OEMs among them.
All of the company’s new tech solutions are immediately available for licensing and can be accessed in the cloud. As part of Arm’s roadmap, the company is also working on Total Solutions for vision, object recognition and smart sensor fusion. The first one will be addressed by Cortex-A53, while the latter two will leverage Cortex-M processors.
"
|
3,479 | 2,022 |
"Report: DDoS attacks have increased 4.5 times since last year | VentureBeat"
|
"https://venturebeat.com/security/report-ddos-attacks-have-increased-4-5-times-since-last-year"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: DDoS attacks have increased 4.5 times since last year Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
According to a new report by Kaspersky, DDoS attacks rose 4.5x compared to the same period a year earlier and 46% over the preceding quarter, which previously held the all-time high. The current spike in attacks began in late February.
Distributed denial of service (DDoS) attacks are designed to disrupt the network resources businesses and organizations rely on and prevent them from functioning properly. They become even more dangerous when the targeted systems are in the government or financial sectors, since the unavailability of these services has knock-on effects on the wider population.
A large number of the attacks appeared related to hacktivist activity.
Examples include a site mimicking the popular 2048 puzzle game to gamify DDoS attacks on Russian websites, and a call to build a volunteer IT army in order to facilitate cyberattacks.
The attacks also showed an unprecedented duration for DDoS sessions, particularly those aimed at state resources and banks. The average DDoS session lasted 80 times longer than those in Q1 2021. The longest attack was detected on March 29 with an atypically long duration of 177 hours.
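As a loose illustration of how defenders spot the kind of traffic floods described above, the sketch below flags source addresses whose request rate exceeds a threshold within a time window; the log format and threshold are assumptions, and real DDoS mitigation is far more involved.

```python
from collections import Counter

def flag_noisy_sources(requests: list[tuple[str, float]], window_s: float = 60.0,
                       max_per_window: int = 1000) -> set[str]:
    """Count requests per source IP inside the most recent time window and
    flag heavy hitters. `requests` is a list of (source_ip, unix_timestamp)."""
    if not requests:
        return set()
    window_end = max(ts for _, ts in requests)
    counts = Counter(ip for ip, ts in requests if ts >= window_end - window_s)
    return {ip for ip, n in counts.items() if n > max_per_window}
```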
“In Q1 2022 we witnessed an all-time high number of DDoS attacks,” said Alexander Gutnikov, security expert at Kaspersky. “The upward trend was largely affected by the geopolitical situation. What is quite unusual is the long duration of the DDoS attacks, which are usually executed for immediate profit. Some of the attacks we observed lasted for days and even weeks, suggesting that they might have been conducted by ideologically motivated cyberactivists. We’ve also seen that many organizations were not prepared to combat such threats. All these factors have caused us to be more aware of how extensive and dangerous DDoS attacks can be.” Read the full report by Kaspersky.
"
|
3,480 | 2,022 |
"Report: 95% of IT leaders say Log4shell was 'major wake-up call' for cloud security | VentureBeat"
|
"https://venturebeat.com/security/report-95-of-it-leaders-say-log4shell-was-major-wake-up-call-for-cloud-security"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: 95% of IT leaders say Log4shell was ‘major wake-up call’ for cloud security Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
According to a new report by Valtix, 95% of IT leaders believe that Log4Shell was a major wake-up call for cloud security, changing it permanently. The report highlights key trends in cloud workload security following Log4Shell, including insights into patching efforts and business impacts that continue into 2022.
In 2021, the Log4Shell vulnerability was exploited in the wild and shook the global cybersecurity landscape. The humble piece of open-source software – ubiquitous across enterprise apps and cloud services – quickly became the worry of IT teams, executives and boards as they scrambled to protect their most valuable data, systems and platforms.
The research found that 87% feel less confident about their cloud security now than they did prior to the incident. Even three months after the incident, 77% of IT leaders are still dealing with Log4j patching, with 83% stating that Log4Shell has impacted their ability to address business needs. Most companies still lack clear visibility into their cloud environment, leaving many IT leaders in the dark about what is actually happening within it. After the Log4Shell incident, cloud visibility has become more crucial than ever.
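For teams still working through Log4j patching, one widely used first-pass check is to scan the filesystem for Java archives that bundle the JndiLookup class at the heart of the Log4Shell (CVE-2021-44228) exploit path. The sketch below is illustrative only; the root path is hypothetical, and presence of the class does not by itself prove exploitability.

```python
import zipfile
from pathlib import Path

def find_suspect_jars(root: str) -> list[Path]:
    """Return jar/war/ear files under `root` that contain JndiLookup.class.
    A version check is still needed to confirm actual exposure."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.suffix.lower() not in {".jar", ".war", ".ear"}:
            continue
        try:
            with zipfile.ZipFile(path) as zf:
                if any(name.endswith("JndiLookup.class") for name in zf.namelist()):
                    hits.append(path)
        except (zipfile.BadZipFile, OSError):
            continue  # unreadable or corrupt archive; skip it
    return hits

if __name__ == "__main__":
    for jar in find_suspect_jars("/opt/apps"):  # hypothetical root directory
        print(jar)
```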
Log4j woke up the IT world, making leaders aware of just how brittle their defenses actually are in the cloud. While public clouds offer new opportunities to modernize and transform, many organizations have struggled to find a cloud security operating model built for a more dynamic, distributed, and exposed environment than their legacy datacenter. IT and security leaders are in a precarious state as they continue dealing with Log4j while at the same time looking to invest in cloud security technology that will help them be ready for the next incident.
In March 2022, Valtix worked with an independent research firm to survey 200 cloud security leaders across the U.S. to better understand how the incident changed the way IT teams look at and secure their cloud workloads.
Read the full report by Valtix.
"
|
3,481 | 2,021 |
"Report: 73% increase of threat incidents in Q4 2021 | VentureBeat"
|
"https://venturebeat.com/security/report-73-increase-of-threat-incidents-in-q4-2021"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: 73% increase of threat incidents in Q4 2021 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Trellix has released a new report examining cybercriminal behavior over the last six months, leveraging proprietary data from Trellix’s network of over 1 billion sensors along with open-source intelligence and Trellix Threat Labs investigations into prevalent threats like ransomware and nation-state activity.
Key findings include that individual consumers were the No. 1 target of cybercriminals, with a 73% increase in cyber incidents detected in Q4 2021. Threats to the healthcare vertical followed close behind, while the transportation, shipping, manufacturing and information technology industries also showed a sharp increase in threats.
“We’re at a critical juncture in cybersecurity and observing increasingly hostile behavior across an ever-expanding attack surface,” said Christiaan Beek, lead scientist and principal engineer of Trellix Threat Labs. “Our world has fundamentally changed. The fourth quarter signaled the shift out of a two-year pandemic which cybercriminals used for profit and saw the Log4Shell vulnerability impact hundreds of millions of devices, only to continue cyber momentum in the new year where we’ve seen an escalation of international cyber activity.” Q4 2021 saw increased activity targeting sectors essential to the function of society. Transportation and shipping were the target of 27% of all advanced persistent threat (APT) detections. Healthcare was the second most targeted sector, bearing 12% of total detections. From Q3 to Q4 2021 threats to manufacturing increased 100%, and threats to information technology increased 36%. Of Trellix customers, the transportation sector was targeted in 62% of all observed detections in Q4 2021.
The report lists threat actors targeting Ukraine, including Actinium APT, Gamaredon APT, Nobelium APT (also known as APT29), UAC-0056 and Shuckworm APT. Of all APT activity Trellix observed in Q4 2021, APT29 accounted for 30% of the detections. The report details recommendations for organizations seeking to proactively protect their environment from tactics these actors use.
Trellix observed the continued use of Living off the Land (LotL) methods, where criminals use existing software and controls native to a device to execute an attack. Windows Command Shell (CMD) (53%) and PowerShell (44%) were the most-frequently used NativeOS Binaries, and Remote Services (36%) was the most-used Administrative Tool in Q4 2021.
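As a toy illustration of how defenders hunt for the living-off-the-land activity Trellix describes, the snippet below flags process events whose command lines contain encoded or download-style PowerShell usage; the event format and indicator list are assumptions, and production detections correlate far richer telemetry.

```python
SUSPICIOUS_MARKERS = (
    "-encodedcommand", "-enc ", "downloadstring", "invoke-webrequest",
    "frombase64string", "bypass -nop",
)

def flag_lotl_events(process_events: list[dict]) -> list[dict]:
    """`process_events` items are assumed to look like
    {"host": "...", "image": "powershell.exe", "cmdline": "..."}."""
    flagged = []
    for event in process_events:
        image = event.get("image", "").lower()
        cmdline = event.get("cmdline", "").lower()
        if image in ("powershell.exe", "pwsh.exe", "cmd.exe") and any(
            marker in cmdline for marker in SUSPICIOUS_MARKERS
        ):
            flagged.append(event)
    return flagged
```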
Read the full report by Trellix.
"
|
3,482 | 2,022 |
"Intigriti’s 'head of hackers': crowdsourced security is key to mitigating modern threats | VentureBeat"
|
"https://venturebeat.com/security/intigritis-crowdsourced-security"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Intigriti’s ‘head of hackers’: crowdsourced security is key to mitigating modern threats Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Earlier this week, bug bounty and vulnerability disclosure platform Intigriti announced that it had raised over €21 million ($22 million) as part of a series B funding round.
The organization’s solution provides enterprises with access to over 50,000 ethical hackers, who can continuously test the security of their environments through bug bounty programs and crowdsourced techniques.
As part of this approach, an organization can pay an external researcher to search for vulnerabilities bad actors could exploit and report them to the organization through the Intigriti platform to remediate them.
For enterprises, crowdsourced security has the potential to detect vulnerabilities that commercial scanners miss, and upscales the capabilities of onsite security teams who may not have the time or expertise to spot potential entry points themselves.
Keeping up with hackers The announcement comes as organizations continuously struggle to allocate the resources necessary to effectively balance cybersecurity concerns alongside other strategic business objectives.
Research shows that 90% of IT decision makers claim their business would be willing to compromise on cybersecurity in favor of digital transformation, productivity or other goals.
However, Intigriti’s head of hackers, Inti De Ceukelaire, argues that bug bounty solutions have a critical role to play in providing overstretched security teams with access to external support from “adversarial-minded” security researchers.
“Our bug bounty platform enables companies to overcome several common cybersecurity challenges. For example, clients can overcome cybersecurity skills gaps by leveraging thousands of security experts’ skills, expertise and creativity. They can also more easily stay on top of cyberthreats by tapping into this network,” said De Ceukelaire.
“Like a malicious hacker, bug bounty hunters are wired to spot what your team might miss. With a bug bounty program, organizations are also investing in their internal talent by allowing them to learn from incoming submissions and interactions with researchers,” De Ceukelaire said.
By providing organizations with a centralized security-testing solution, internal teams can proactively test their security defenses and scale the capabilities of their human analysts in a way that’s cost effective.
The bug bounty market Since the organization’s initial funding round in 2020, Intigriti has grown by 650%, making it the fastest-growing crowdsource security platform globally. This has coincided with the growth of the global bug bounty market , which researchers valued at $223.1 million in 2020, and anticipate will reach a value $5.46 billion by 2027.
Other companies embracing the crowdsourced security approach include BugCrowd, which offers a platform for managing vulnerabilities with vulnerability rating taxonomy (VRT) and common vulnerability scoring system (CVSS) ratings, alongside remediation guidance and integrations for Jira, Slack, ServiceNow, Trello and GitHub.
BugCrowd raised $30 million in funding in 2020.
Another big competitor in the market is HackerOne , which provides organizations with access to continuous vulnerability testing from external researchers, who can prioritize vulnerabilities to enable internal security teams to follow up more effectively.
HackerOne most recently announced that it had raised $49 million as part of a series E funding round earlier this year, bringing its total funding to date to $160 million.
In terms of differentiation, Intigriti aims to stand out from other providers with triaging. “Unlike most other leading bug bounty platforms, our programs also offer triage services by default and without an additional fee. Triage plays a significant role in managing incoming reports and will make sure the program’s internal team only receives unique, actionable and valid reports, meaning they can keep their focus on business-as-usual activities,” De Ceukelaire said.
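Triage is partly a prioritization problem. As a rough sketch, the example below orders incoming reports by their CVSS v3 base score using the standard severity bands; the report structure is an assumption, and this is not how Intigriti (or any particular vendor) actually triages submissions.

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3 base score to its standard severity band."""
    if score == 0.0:
        return "None"
    if score < 4.0:
        return "Low"
    if score < 7.0:
        return "Medium"
    if score < 9.0:
        return "High"
    return "Critical"

def triage_order(reports: list[dict]) -> list[dict]:
    """Sort incoming reports so the highest-scoring submissions surface first.
    The report shape ({"title": ..., "cvss": ...}) is assumed for illustration."""
    return sorted(reports, key=lambda r: r["cvss"], reverse=True)

queue = triage_order([
    {"title": "Stored XSS in profile page", "cvss": 6.1},
    {"title": "SQL injection in search API", "cvss": 9.8},
])
for r in queue:
    print(f'{cvss_severity(r["cvss"]):>8}  {r["title"]}')
```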
"
|
3,483 | 2,022 |
"How remote browser isolation can shut down virtual meeting hijackers | VentureBeat"
|
"https://venturebeat.com/security/how-remote-browser-isolation-can-shut-down-virtual-meeting-hijackers"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How remote browser isolation can shut down virtual meeting hijackers Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Virtual meetings continue to attract cyberattackers, who use them to distribute ransomware and mount GIF-based account takeover attacks. Earlier this week, Zoom agreed to pay $85 million to users who have been victims of Zoom-bombing. Zoom also committed to increasing its efforts to stop cyberattackers from delivering malware and account takeover attempts via chat on its platform, and has promised to implement additional security and privacy policies as part of the legal settlement. The web remains a vulnerable space for cyberattackers, and virtual meetings, whose adoption the pandemic accelerated while their security was still maturing, have been an easy target.
Before the pandemic’s onset, many CISOs were wary of the first generations of virtual meeting platforms. The potential for cyberattackers to hide malware in HTML, JavaScript and browser code and then launch attacks aimed at unsecured endpoints was one of the reasons these platforms didn’t grow faster. Once an endpoint is compromised, cyberattackers move laterally across an enterprise’s network to launch additional malware attacks or impersonate senior management and defraud the company.
Cyberattacks growing more sophisticated Using GIF images to deliver worm-based attacks across Microsoft Teams into corporate accounts shows how sophisticated these attacks are. Users only had to view the GIF in Teams to have their authtoken cookie data shared with the compromised subdomain.
CyberArk’s recent blog post on how cyberattackers successfully used a GIF message to launch a worm-like malware variant through enterprises shows how vulnerable anyone using Teams and Microsoft-based applications can potentially be.
CyberArk’s post provides a timeline of how Microsoft responded quickly to thwart this type of attack and observed that the cyberattackers could traverse an organization and gain access to confidential, privileged data. Hacking into virtual meetings has become a new way for cyberattackers to gain the benefits of having privileged access credentials without having to steal them first.
A graphic in CyberArk’s post illustrates how the GIF-based attack worked.
Why remote browser isolation works In what began as a strategy to make virtual meeting platforms simultaneously more secure and more collaborative, Zoom and other platform providers began installing a remote web server on users’ devices. To its credit, Zoom quickly resolved the issue, while Apple pushed a silent update to block Zoom’s server. Zoom has improved its security since 2019 and will need to keep improving, given the high cost of this week’s legal settlement. Its timeline reflects the challenge all virtual meeting platforms face in balancing security with the speed and responsiveness of the user experience while enabling virtual collaboration. Many enterprises initially resisted migrating off their legacy teleconferencing systems, as slow and unintuitive as they were, given the security risks of Zoom and other platforms.
Since the start of the pandemic, virtual and hybrid teams have flourished across organizations, creating an entirely new set of security risks for virtual meeting sessions. Supporting the proliferating variety of personal, unmanaged devices makes CISOs’ and CIOs’ jobs challenging.
The growth of remote browser isolation (RBI) over the last two years is a response to organizations’ need to bring a more zero-trust approach to all web sessions, regardless of where users are located. Zero trust looks to eliminate dependence on trusted relationships across an enterprise’s tech stack, as any trust gap can be a major liability. As a result, it is an area attracting enterprise cybersecurity providers like Forcepoint, McAfee and Zscaler, which have recently added RBI to their offerings, joining RBI pioneers like Ericom and Authentic8. Of these and many other competing vendors in the RBI market, Ericom is the only one to have successfully developed and delivered a scalable solution that meets the demanding technological challenges of securing virtual meetings globally, and it has applied for a patent for its innovations in this area.
RBI is proving to be a more secure alternative to downloadable clients that lack security controls and can cause software conflicts that leave endpoints unprotected. RBI works by opening the virtual meeting URL in a remote, isolated container in the cloud. Virtual devices within the container, such as a microphone, webcam or desktop, synchronize media streams with endpoint devices.
Only safe rendering data representing isolated users’ media is streamed to participants’ endpoint browsers from the container. Isolated users likewise receive only safe renderings of media originating from other participants. The isolated container, including all content within it, is destroyed when an active virtual meeting session ends. In addition, policies restrict what users can share in virtual meetings via screen shares and chats. No images, video or audio of meetings are cached in participants’ browsers, so they can’t be retrieved, examined or shared after the meeting. The solution also prevents the malware-enabled illicit recording of sessions.
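A highly simplified sketch of the isolation flow described above appears below: the meeting URL opens in a disposable remote container, only inert rendering data reaches the endpoint, and the container is destroyed when the session ends. All names are invented for illustration; no real RBI product API is implied.

```python
import uuid

class IsolatedMeetingSession:
    """Conceptual model only: the meeting URL is opened in a disposable remote
    container, safe rendering data flows to the endpoint browser, and the
    container (with all its content) is destroyed when the session ends."""

    def __init__(self, meeting_url: str):
        self.container_id = f"rbi-{uuid.uuid4()}"
        self.meeting_url = meeting_url
        self.active = True
        print(f"[{self.container_id}] opening {meeting_url} in remote container")

    def stream_safe_rendering(self, media_frame: bytes) -> bytes:
        # Stand-in for transcoding raw page/media content into inert rendering
        # data; no active web content ever reaches the endpoint.
        assert self.active, "session already destroyed"
        return b"SAFE_RENDER:" + media_frame[:32]

    def end(self) -> None:
        # Destroying the container discards all cached media and content.
        self.active = False
        print(f"[{self.container_id}] destroyed; no meeting content persists")

session = IsolatedMeetingSession("https://meetings.example.com/weekly-sync")
print(session.stream_safe_rendering(b"raw-webcam-frame-bytes"))
session.end()
```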
Turning a cautionary tale into a proactive strategy Virtual meetings keep teams collaborating, creating and accomplishing complex tasks together. CIOs and CISOs who enable the underlying virtual meeting technologies must continue to be vigilant about the security risks of virtual meeting platforms’ downloadable clients. Until now, there has not been a reliable way to secure them. While a lesson from the past, Zoom’s decision to load web servers on users’ systems is a cautionary tale every CIO I know still speaks about when virtual meeting platforms come up in conversation.
RBI’s capability to isolate virtual meetings can alleviate the concerns of CIOs and CISOs who want a solution that scales across unmanaged devices. Endpoint security has progressed rapidly during the pandemic in parallel with RBI, as organizations adopt a more zero trust-based strategy for protecting every threat surface and reducing enterprise risk. As a result, securing virtual meetings is becoming core to a solid enterprise endpoint security strategy.
"
|
3,484 | 2,022 |
"Experts say BlackCat ransomware isn't more of a problem than any other ransomware strain | VentureBeat"
|
"https://venturebeat.com/security/experts-say-blackcat-ransomware-isnt-more-of-a-problem-than-any-other-ransomware-strains"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Experts say BlackCat ransomware isn’t more of a problem than any other ransomware strain Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Last week, the FBI released a flash report highlighting that the BlackCat ransomware-as-a-service , also known as ALPHV, has breached over 60 organizations since last November.
In these attacks, attackers are using compromised credentials harvested by an initial access broker to enter an organization’s internal systems and start spreading ransomware.
How dangerous is BlackCat ransomware? While many commentators are concerned that BlackCat is one of the most sophisticated and dangerous ransomware threats, some experts are skeptical that the strain poses any more risk than other existing variants.
“Black Cat is a problem, but it’s really no more of a problem than other variants we’ve seen,” said Gartner senior research director Jon Amato.
“The big difference between BlackCat (also known as ALPHV) and other ransomware toolkits is that it’s written in Rust, and seems to have better memory protection and reliability. And initial indications are that BlackCat is more likely to successfully deploy and execute on target computers than ransomware toolkits written in C++ or other languages, for example,” Amato said.
However, Amato also notes that the code used by the malware does have the advantage of being less likely to be detected by some antimalware tools, which might not have been trained to detect malicious binaries written in Rust.
What can enterprises do? The publicity over the BlackCat ransomware threat comes at a time when organizations’ anxiety over ransomware is at an all-time high, following a number of high-profile attacks, including the Colonial Pipeline breach and the long-term havoc wreaked by the Conti ransomware group.
In fact, research shows that 74% of IT decision makers report they are so concerned about new extortion tactics that they believe ransomware should be considered a matter of national security.
Although ransomware threats are extremely serious, there are some simple steps enterprises can take to mitigate them, chiefly acting fast to deny the attacker the ability to encrypt data in the first place. That means decreasing reliance on legacy security tools and embracing next-generation extended detection and response (XDR) tools.
“From an organizational standpoint, companies need to stop relying on legacy perimeter and signature-based security tools alone, such as firewalls and antivirus software, and start deploying EDR [endpoint detection and response] and XDR solutions that are readily available on the market. In terms of preventative controls, enabling MFA in the organization is a good first step,” said Ken Westin, director of security strategy at cybersecurity vendor Cybereason.
The reality is that legacy security tools are not equipped to identify and mitigate the latest malicious threats. For example, Westin highlights that BlackCat ransomware uses the Rust programming language to evade existing behavioral and static analysis tools which are trained to look at traditional languages like C++.
This means that enterprises not only need to protect their endpoints against compromise, but they also need to have sophisticated XDR solutions in place that are capable of identifying and responding effectively to obfuscated attacks.
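Behavioral detection of the kind EDR/XDR tools rely on can be illustrated, very loosely, by flagging processes that rewrite an unusually large number of files in a short window, a classic ransomware tell. The event format and thresholds below are assumptions, and this is not how any particular product works.

```python
from collections import defaultdict

def flag_possible_encryption(file_events: list[dict], window_s: float = 10.0,
                             max_writes: int = 200) -> set[str]:
    """`file_events` items are assumed to look like
    {"process": "pid:name", "path": "...", "ts": unix_time, "op": "write"}.
    A process rewriting hundreds of files within seconds is one of many
    signals real tools correlate before raising a ransomware alert."""
    writes = defaultdict(list)
    for ev in file_events:
        if ev.get("op") == "write":
            writes[ev["process"]].append(ev["ts"])
    suspects = set()
    for proc, stamps in writes.items():
        stamps.sort()
        for i, start in enumerate(stamps):
            # count writes inside the sliding window starting at `start`
            in_window = sum(1 for t in stamps[i:] if t - start <= window_s)
            if in_window > max_writes:
                suspects.add(proc)
                break
    return suspects
```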
The top ransomware protection solutions As organizations become more concerned over the threat of ransomware breaches, there has been significant growth in ransomware protection solutions, with the global ransomware protection market valued at $19.77 billion in 2020 and anticipated to reach $47.04 billion by 2027.
One of the leading providers addressing this challenge is Malwarebytes , which generated over $190 million in annual recurring revenue (ARR) in 2020, and offers endpoint detection and response solutions that can detect and block attempts to deploy malicious code to the endpoints.
Malwarebytes’ solution uses machine learning (ML) to detect anomalous activity on the endpoint and respond. It also offers just-in-time backups to ensure that data is recoverable if it’s encrypted.
Another competitor is CrowdStrike , with CrowdStrike Falcon Insight, an endpoint protection and response solution that uses ML and behavioral indicators of attack to identify and block ransomware. CrowdStrike recently announced their 2022 fiscal year results, with an ARR of $217 million and total revenue of $431 million.
The main differentiator between antiransomware solutions at the endpoint level is how effective their AI is at detecting and blocking threats in real time. For instance, CrowdStrike combines the latest threat intelligence with an AI that can spot signs of compromise and enable security analysts to respond.
"
|
3,485 | 2,022 |
"Webflow aims to bridge the developer-designer gap with its no-code/low-code platform | VentureBeat"
|
"https://venturebeat.com/programming-development/webflow-aims-to-bridge-the-developer-designer-gap-with-its-no-code-low-code-platform"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Webflow aims to bridge the developer-designer gap with its no-code/low-code platform Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
There has been a flurry of discussion around platform offerings aimed at creative professionals with little-to-no technical expertise. Since the onset of the COVID-19 crisis, no-code and low-code tools have become the technology of choice for many enterprises. According to recent research from Gartner , 70% of new apps developed are expected to leverage low-code or no-code technologies by 2025, up from less than 25% in 2020.
It’s against this backdrop and fresh from a $120 million series C and a $4 billion valuation that Accel-backed Webflow is on track to become one of the fastest growing no-code and low-code all-in-one website builders on the market. Currently serving 3.5 million users and attracting more than 10 billion visitors per month, the company claims that its goal is to empower enterprises to operate more quickly and efficiently.
“We started in 2013 with an idea that was straightforward in its application — anyone who didn’t know how to write a single line of code could create a professional, high-performing website using a drag-and-drop interface,” said Bryant Chou, Webflow’s cofounder and CTO.
Bridging the gap between design and development The design-development gap refers to the breakdown in communication between designers and developers when building a product.
This is an age-old problem in the web app and design process. Fortunately, the proliferation of low-code and no-code tools has made it possible to create a shared language between design and development and bring their visions to life.
Over the years, Webflow has evolved from a single page website builder to a dynamic in-browser design tool with CMS and ecommerce capabilities, which Chou says has helped narrow the divide that still exists between the two.
Where Webflow fits into a company’s data infrastructure By removing the technical complexity behind web development, Chou says that business users can focus on design workflows, while engineers can focus on higher-level tasks like building out their core product and differentiating it in the marketplace.
The company maintains that it integrates seamlessly with an organization’s existing tech stack, allowing users to consolidate their key business data in one place.
Shaping the future of app and software development Based on data from the U.S. Bureau of Labor Statistics , by 2030, the demand for software developers is predicted to grow by 22%. While there may be a shortage of professionals to fill pertinent roles and meet the needs of modern enterprises , staying ahead of the curve will entail making processes and products more agile.
This is where the benefits of no-code and low-code development will come into play.
"
|
3,486 | 2,022 |
"Web3 monetization: Decentralizing the blueprint for creator-fan relationships | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/web3-monetization-decentralizing-the-blueprint-for-creator-fan-relationships"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Web3 monetization: Decentralizing the blueprint for creator-fan relationships Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Let’s face it, creator-fan relationships are fragmented. Since the emergence of Web2, fans have grown accustomed to a one-way transactional street of following and consuming, seldom feeling fulfilled by their interactions with high-profile creators. Celebrities have limited and segmented tools for engaging with top fans and managing their personal brand. However, the interpersonal potential of Web3 presents an alternative, one that exposes the current dynamic as unidirectional and disengaging.
Social tokens, a type of blockchain token personalized and traded for a specific offering, allow creators to develop and nurture a local economy for their network of fans, transferring their service or skill into a social currency. In return for purchasing the social token, the fan unlocks a personalized experience such as an exclusive stem track from a music artist, a video shout-out from a notable influencer, or a gym session with a famous athlete. By enabling this process, social tokens deepen the convergence of physical and digital experiences and uncover new methods of interaction that are dynamic and multidirectional — through this framework, both creator and fan are rewarded.
New means for monetization, new relationships Social tokens, also known as creator tokens, grant a level of personal sovereignty not achievable on legacy systems. As such, encoding functionality is only restricted by human talent and creativity. This direction presents a new opportunity for the full spectrum, from the nano influencer to the higher-profile celebrities with mammoth community followings. Most recently, we have seen the discourse surrounding Joe Rogan and his three-and-a-half-year exclusivity deal with the audio streaming service Spotify reported to be worth almost $200 million USD. With almost 200 million monthly downloads of Rogan’s podcast, it’s evident that he could catapult past Spotify’s offering should he choose to directly monetize his community and cut out the middleman. According to Patreon cofounder Sam Yam on The a16z Podcast , the average initial pledge amount has increased over time, growing 20% in the last few years, now exceeding $100 USD per month.
Although many A-list celebrities are embracing the benefits of Web3 technology — for example, Paris Hilton launching an NFT collection , and Reese Witherspoon advocating for digital identity — social tokens will prove much more valuable for independent creators. Once the big name stars have acquired their money, monetization and branding are largely outsourced. However, for the less established artist, ranging anywhere from an audience of fewer than 1000 people upward, receiving merely a single dollar from each of their fans would underpin major progression and success by way of decentralized crowdfunding.
This creates a positive feedback loop in the creator-fan relationship. As the creator becomes more in tune with their community’s support, they can aptly generate engaging and fulfilling content. Thus the monetization potential of social tokens introduces beneficial autonomy into emerging creators’ careers. Instead of being constrained by structures of celebrity status, traditional and social media, the power now lies directly in the hands of the creators and their respective online communities. Celebrities can choose what and when they would like to share, and fans can choose what and when they would like to consume, fostering a symbiotic relationship between both parties. It is within this arena that you will find the creators most likely to put in the legwork, as relying on a social token requires a consistent and concerted effort. Accountability becomes measurable against the pressure from the community to produce. It then follows that the content produced is of higher quality, engagement is frequent and impactful, and overall fans receive more value from the celebrity relationship than ever before.
The higher the investment, the higher the reward Empowering both parties, fans now enact a more important contribution to the relationship by participating in the success of a creator and investing in their future and ongoing performance. By choosing to showcase support through staking , the Web3 alternative to ‘following’, consumers are granted access to a whole host of exclusive and additional Web3 enabled features offered by the celebrity.
By “staking” your creator token, you may be granted access to an exclusive fan club. For those super fans that have accumulated a certain amount of interest, perhaps they can trade it in for a non-fungible token (NFT) that represents a season ticket or lifelong VIP access. As fans utilize and acquire intangible assets through Web3 activities, they become both physical and digital entities themselves. Digital functionality aids physical experience and vice-versa.
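As a rough illustration of how token-gated access can work, the Python sketch below maps a fan's staked balance to a perk tier. The token name, tiers and thresholds are hypothetical; a real implementation would enforce this on-chain or via wallet signatures rather than in application code.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical perk tiers for a creator token; names and thresholds are illustrative.
ACCESS_TIERS = [
    ("vip_season_pass", 10_000),  # enough stake to redeem a season-ticket NFT
    ("fan_club", 500),            # exclusive fan-club membership
    ("supporter", 1),             # basic supporter perks
]

@dataclass
class Stake:
    fan_address: str
    creator_token: str
    amount: float  # tokens currently staked

def access_level(stake: Stake) -> Optional[str]:
    """Return the highest tier a fan's staked balance unlocks, if any."""
    for tier, threshold in ACCESS_TIERS:  # tiers are ordered highest first
        if stake.amount >= threshold:
            return tier
    return None

# Example: staking 750 hypothetical CREATOR tokens unlocks the fan-club tier.
print(access_level(Stake("0xfan", "CREATOR", 750)))  # -> "fan_club"
```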
There is a higher risk involved with supporting a celebrity in the Metaverse, as the value can decrease the moment an athlete becomes injured or an unfortunate controversy erupts in the media. However, as the saying goes, the higher the risk the higher the reward! Creators acknowledge this volatility by showing appreciation for their followers’ investments through transparency, greater access, and a more intimate relationship. Fan engagement becomes the driving factor of celebrity and interactivity is now essential.
The future of the creator While the introduction of social tokens into the realm of celebrity is exciting in its empowerment and medium for fostering meaningful exchanges, it also leaves many questions such as whether this system will encourage everything to become monetized, and if all access lies behind a paywall, where does this leave those of us who don’t pine after personalized experiences ? As a large portion of content that is freely available today may be blocked, users will be forced to be more selective in who receives their attention. This, however, should not be perceived as a negative, as it is a lesson in conscious consumption. Currently, following 100 people can be done without much consideration, but paying $100 is a much more impactful and careful decision. Refining our following choices transfers the power of curation back to where it belongs: in the hands of the people. In summary, these social tokens will revolutionize our relationship with creators and the media in a way that is advantageous to our daily consumption habits.
The current Web2 system is constricted, inefficient to both creator and fan, and has been strategically built to serve advertisers and the centralized platforms on which these relationships exist. Through a system redesign, creators now have the opportunity to earn significantly more revenue and offer a more authentic, engaged, and accessible experience to their communities. In the future, these relationships will be underpinned by closeness, loyalty and personalized content, and the freedom of choice will be unlocked. Fans will directly choose the products and services that celebrities will directly choose to offer. While new means of monetization allow both celebrities and followers to practice autonomy over their contribution to the relationship, these activities occur behind a paywall.
That is not to say that everything on the internet will migrate behind paywalls. There is an exclusivity barrier that is less apparent within current celebrity dynamics, but once it is surpassed, the possibilities for access are limitless! Rusty Matveev is CSO at Calaxy.
"
|
3,487 | 2,022 |
"The super malicious insider and the rise of insider threats | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/the-super-malicious-insider-and-the-rise-of-insider-threats"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community The super malicious insider and the rise of insider threats Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
In 2021, the work-from-anywhere (WFA) movement took up permanent residence in enterprises across business and industry, spurred by pandemic precautions and an accelerated digital transition to cloud-based systems. The year also gave life to a new breed of cyber threat actor: the Super Malicious Insider.
The hasty shift to remote work created an array of new challenges for security and risk professionals who suddenly had to protect hundreds of thousands of “remote offices” outside of traditional, perimeter-based corporate controls. Combined with a measurable increase in employee attrition toward the end of 2021 ( “The Great Resignation” ), the transition created a perfect storm for insider threats.
With this in mind, we set out to examine the effect of remote work on employee human behavior that is driving a dramatic increase in damaging insider attacks. In addition to noticing a significant increase in anomalous behavior driven by WFA practices, such as odd working hours and the use of new applications, our research revealed sharp increases in industrial espionage, the theft of intellectual property (IP) and data, and other criminal acts. And it classifies, for the first time, the Super Malicious Insider, someone with the knowledge and skills (often provided by their employer) to avoid detection by accepted defensive practices. The following trends should serve as a wake-up call to security teams that traditional tools such as Data Loss Prevention (DLP), User Behavior Analytics (UBA) and User Activity Monitoring (UAM) are being avoided or circumvented by insiders.
Industrial espionage on the rise Based on thousands of investigations conducted for hundreds of customers, our 2022 Insider Risk Intelligence and Research Report draws on real-world incident data, as opposed to collecting the results of a blind survey. Among its key findings: 2021 saw a 72% increase in actionable insider threat incidents from 2020; Super Malicious Insiders accounted for 32% of malicious insider incidents; 75% of insider threat criminal prosecutions were the result of remote workers; and 56% of organizations had an insider data theft incident resulting from employees either leaving or joining the company. It’s clear that industrial espionage has hit an all-time high. Forty-two percent of actionable incidents were related to IP and data theft, including the theft of trade secrets, source code and active collusion with a foreign nexus. While some of these resulted from accidental disclosures, a significant portion was attributed to sabotage.
The increase in insider threats growing out of WFA also showed in other, somewhat less impactful ways. For example, we uncovered a more than 200% increase over 2020 in data loss associated with users taking screenshots during confidential Zoom and Microsoft Team meetings, some of which were leaked to the media or unauthorized users. On top of that, there was a 300%+ increase in the number of employees using corporate assets for non-work activities, including social media, shopping and stocks.
Profiling the super malicious The risks from insiders can be classified in three ways. Basic insider risk, of course, covers 100% of users, any of whom could fall for a phishing attack, accidentally expose data or otherwise be compromised. Insider threats are the 1% of users with bad intent, who would actively steal data or cause harm. The Super Malicious threat comprises a subset of malicious insiders with superior technical skills and in-depth knowledge of common insider threat detection techniques.
Although they make up a very small portion of users, Super Malicious Insiders accounted for about a third of these incidents and showed skill at covering their tracks. The research found that 96% of malicious insiders avoided using attack techniques listed in the MITRE ATT&CK framework, which tracks common adversary tactics and techniques. Some of the most common techniques used by Super Malicious Insiders, who are better able than typical malicious actors to hide their activities, include data obfuscation and exfiltration of sensitive information without detection. They made every attempt to appear to be benign, normal users, staying within their day-to-day routines. The training many of them received in cybersecurity, data loss prevention and insider threats, along with their knowledge of the organization’s cybersecurity landscape and technology stack, helped them stay within the lines.
The Super Malicious also showed the ability to use subtle social engineering techniques to manipulate others to perform actions on their behalf. With a relationship already established with the other employees, this insider could use more nuanced—and harder to detect—techniques than those used by external actors through spear phishing, baiting or pretexting.
Steps to securing your organization Organizations should make insider risk a priority this year. It increasingly affects every sector, and recent guidance from government regulators indicates that mandates for insider threat and non-regulated data protection are likely on the way. In building a framework for an insider risk program, you can draw on resources from CISA, the National Insider Threat Task Force and other bodies, such as Carnegie Mellon University, Gartner and Forrester.
An effective step would be to keep the insider risk program outside of the security operations center (SOC), which is built to detect and investigate external threats. Insider risk is different, requiring an understanding of human behavior, psycho-social factors and trends, and a feel for the abnormal. It will require inter-organizational collaboration with HR, legal, finance, technology and, of course, cybersecurity teams, so it would be best operated separately.
Remember that exfiltration of data is the last step in an attack, so an insider threat program should be looking for early indicators of malicious intent. The Insider Threat Framework describes the indicators of behaviors such as reconnaissance, circumvention, aggregation and obfuscation.
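A toy example helps show what looking for early indicators can mean in practice. The Python sketch below scores the behavior categories named above and escalates before any data leaves the network; the weights and threshold are invented for illustration and are not drawn from any particular framework or product.

```python
# Illustrative scoring of early insider-threat indicators. The categories follow the
# behaviors named above (reconnaissance, circumvention, aggregation, obfuscation),
# but the weights and escalation threshold are invented for this example.
INDICATOR_WEIGHTS = {
    "reconnaissance": 2,  # e.g., unusual browsing of file shares or org charts
    "circumvention": 4,   # e.g., disabling agents or using unsanctioned tools
    "aggregation": 3,     # e.g., staging large volumes of files in one location
    "obfuscation": 5,     # e.g., renaming, compressing or encrypting staged data
}
ESCALATION_THRESHOLD = 8

def risk_score(observed_indicators: list[str]) -> int:
    """Sum the weights of the behaviors observed for a single user."""
    return sum(INDICATOR_WEIGHTS.get(indicator, 0) for indicator in observed_indicators)

def should_escalate(observed_indicators: list[str]) -> bool:
    """Escalate to the insider-risk team before any data ever leaves the network."""
    return risk_score(observed_indicators) >= ESCALATION_THRESHOLD

# A user who scoped out file shares, staged data and then compressed it trips the threshold.
print(should_escalate(["reconnaissance", "aggregation", "obfuscation"]))  # True
```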
Organizations also would do well to rely not just on technology, but on people. CISA, in fact, recommends using “people as sensors” against insider threats. An organization should be familiar with employee behaviors, decide which are acceptable and which are not, and positively reinforce policies that are tailored to the needs of each department. It’s also advisable for an organization to get an insider risk assessment, which is offered by a number of system integrators, consultants and vendors (some free, some for a fee).
Whether accidental, malicious or super malicious, the threat is only growing. Organizations need to act now to protect their enterprises from the inside out.
Rajan Koo is CCO and DTEX i3Lead with DTEX Systems.
"
|
3,488 | 2,022 |
"The case for data-centric security in 2022 | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/the-case-for-data-centric-security-in-2022"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community The case for data-centric security in 2022 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
As it pertains to cybersecurity, the start of the new year has marked anything but new beginnings. Now four months into 2022, the reality of our cyber crisis is unfortunately still more of the same – more attacks and more breaches , yet the same reluctance to pivot from legacy security controls and outdated approaches that are failing on a global scale.
Over the past few weeks, we’ve witnessed cyberattacks emerge as a new facet of war. We’ve seen major chip manufacturers , multi-billion dollar news conglomerates , automobile manufacturers , school systems , and oil companies all fall victim to a variety of attacks that resulted in the loss of service, loss of revenue, and loss of data.
This aforementioned list of organizations isn’t comprised of small mom-and-pop businesses with non-existent cybersecurity budgets. It’s full of name-brand global enterprises with significant investments in sophisticated security protections. So why, then, do the companies that essentially do what they’re supposed to do, at least by common industry convention, still end up in the headlines? It’s because these attacks, like a majority of the 2021 incidents that preceded them, were common byproducts of the lack of data-centric security across the cyber community.
We’ve seen this movie thousands of times, but keep misinterpreting the plot: the bad guys don’t steal the network itself — they steal (or destroy) the data.
All too often, enterprises are exposed for failing to adopt security approaches that align with an evolving threat landscape, where highly sophisticated threat actors and ransomware gangs are more capable and more well-funded than ever before. Today’s common cybercriminal can easily bypass the thin veil of passive security controls that exist in data storage systems, enabling them to silently and often effortlessly steal or destroy large volumes of unstructured data assets for malicious and monetary gain. Without a data-centric security model that enables Zero Trust and Principle of Least Privilege concepts to be applied at every access point, there isn’t an effective way to safeguard that data from potential threats.
A fundamental change in approach is long overdue. Instead of operating with a narrow focus on the ever-changing tactics, techniques and procedures (TTPs) of attackers, enterprises need to place a higher priority on actively safeguarding the assets they’re after. That is the foundational component to data-centric security — protecting data at the core, not from the perimeter.
The technology behind data-centric security Adopting a data-centric security model starts with re-orienting the focus away from traditional network-based security approaches in favor of ones where security begins where the data lives. The modern term for these cyber-infused storage technologies is Cyberstorage; the solutions leverage artificial intelligence and machine learning to blend active security controls with advanced compliance and monitoring, generating real-time internal visibility to better identify, detect, respond to, and recover from encrypted attacks on unstructured data assets.
These solutions, compatible with any on-premise, cloud or hybrid network environment, strengthen data maturity by simplifying the complexities of active data protection, scalable data storage, and continuous data compliance through a unified approach.
Data protection: Securing both primary and secondary data files from compromise, loss, theft, or corruption while providing the integrated capability to quickly restore the data to a functional “known good” state in the event of a breach. Data storage: Providing scalable utility architecture to efficiently store data while preserving the accuracy, completeness, and quality of data as it is made accessible to users over standards-based protocols. Data compliance: Minimizing threat vectors by certifying that all systems enforce the required data security policies on a continuous basis, and that all users comply with regulations to prevent misuse, theft, or loss of sensitive assets.
Cyberstorage solutions also enrich the organization’s cyber ecosystem with actionable cyber defense insights that aren’t attainable through external-based network systems. The real-time guidance generates the essential agility to not only prevent breaches, but also swiftly respond to them and mitigate their impact.
Cyberstorage is the missing piece in a complex security ecosystem. It’s not a replacement for network-based cyber solutions, but rather the key ingredient that has been missing in the recipe to defend against modern data-centric attacks like ransomware, data theft, sabotage, and…well….basically all the attacks that have happened over the last handful of years.
How to implement data-centric security Implementing data-centric security doesn’t need to be difficult. It boils down to three basic steps: (1) reorient your perspective; (2) layer and compartmentalize; and (3) establish a feedback loop. Before investing in the actual technology behind data-centric security, it’s critical for enterprises to develop the mindset for a data-centric approach. The first step is to stop thinking about security as a “doors and windows” problem — you know, just lock the doors and the windows to keep the bad guys out — and instead view it in the context of the asset you are most interested in protecting. Ask yourself, if it’s impossible to keep the threat out, then what defenses can ensure my data still remains secure? Most organizations lack visibility into what is actually happening with their data — how much is there, how it’s being used, who has access to it and what differentiates “normal use” from “abnormal (or malicious) use.” The effectiveness of data-centric security is rooted in the insights which come from the usage of data sources themselves. It takes smart software like a cyberstorage solution to do this effectively, but before you get to that point, you must have a general understanding of how to logically categorize users and applications by function, and then take a segmented approach to implementation. Once those boundaries are established, implement controls in layers to ensure protection.
Security is a living and breathing thing. The ever-evolving threat landscape requires organizations to continually improve their defenses. How you accomplish that is by taking information from multiple sources and continuously feeding it back into a system which can evolve right alongside the threats. Sources like audit and change logs, admin and user access patterns, and policy changes provide a basis for machines to learn and improve defenses autonomously.
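As a simplified illustration of that feedback loop, the Python sketch below learns a per-user baseline of daily file-access counts from audit logs and flags sharp deviations. The three-sigma rule and seven-day warm-up are arbitrary choices made for the example; production cyberstorage systems use far richer features and models.

```python
import statistics
from collections import defaultdict

# Toy feedback loop: learn a per-user baseline of daily file-access counts from audit
# logs, then flag days that deviate sharply from that baseline.
class AccessBaseline:
    def __init__(self) -> None:
        self.history = defaultdict(list)  # user -> list of daily access counts

    def record_day(self, user: str, access_count: int) -> None:
        self.history[user].append(access_count)

    def is_anomalous(self, user: str, access_count: int) -> bool:
        counts = self.history[user]
        if len(counts) < 7:  # not enough history to judge yet
            return False
        mean = statistics.mean(counts)
        stdev = statistics.pstdev(counts) or 1.0
        return abs(access_count - mean) > 3 * stdev

baseline = AccessBaseline()
for daily_count in [120, 95, 110, 130, 105, 118, 99]:
    baseline.record_day("alice", daily_count)
print(baseline.is_anomalous("alice", 4000))  # True: a burst of access worth investigating
```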
Data-centric security obviously extends beyond what humans alone can do. Humans create the business rules, but it’s the technology that implements them. With the volume of data stored expected to triple over the next several years, taking a data-centric approach must start with paying attention to where that data lives and how “security savvy” those storage systems are.
Making real, tangible progress toward strengthening organizational security posture can only be accomplished through cyber resilience from protecting data at the core. By placing a heightened emphasis on implementing data-centric security across the public and private sectors, we can take steps to ensure 2022 is a year of positive change — not just more of the same.
Eric Bednash is the CEO and cofounder of RackTop Systems.
"
|
3,489 | 2,022 |
"The basics of decentralized finance | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/the-basics-of-decentralized-finance"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community The basics of decentralized finance Share on Facebook Share on X Share on LinkedIn 3/8 AI FinTech story Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
News headlines mentioning cryptocurrencies, blockchain technology and peer-to-peer finance have become common in recent years. Despite this, not everyone understands how they work, and the decentralized finance (DeFi) sector can appear intimidating. This limited awareness of the building blocks of DeFi has, in turn, resulted in many people missing out on the significant returns available in DeFi, as they believe it is only about exchanging Bitcoin, Ether, stablecoins and other cryptocurrencies.
As CEO of AQRU, an incubator specializing in decentralized finance, I’ve crossed paths with many people who think this way. This is why I’ve dedicated significant efforts to raising awareness about DeFi and explaining that decentralized finance is similar to the traditional financial system in the sense that it offers a wide range of services such as lending, savings and insurance. But, unlike its traditional counterpart, these services use peer-to-peer and blockchain technology to eliminate intermediaries and to offer higher returns for investors.
So, let’s take a closer look at the building blocks of decentralized finance, how the system works and how it has managed to offer customers higher returns than traditional finance.
How does decentralized finance work? Decentralized finance is built on blockchain technology, an immutable system that organizes data into blocks that are chained together and stored in hundreds of thousands of nodes or computers belonging to other members of the network.
These nodes communicate with one another (peer-to-peer), exchanging information to ensure that they’re all up-to-date and validating transactions, usually through proof-of-work or proof-of-stake. The first term is used when a member of the network is required to solve an arbitrary mathematical puzzle to add a block to the blockchain, while proof-of-stake is when users set aside some cryptocurrency as collateral, giving them a chance to be selected at random as a validator.
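To illustrate the stake-weighting idea, here is a minimal Python sketch of how a validator might be picked in proportion to its stake. Real proof-of-stake protocols layer on verifiable randomness, committees and slashing penalties; the validator names and stake amounts below are made up.

```python
import random

# Minimal stake-weighted validator selection: a larger stake means a proportionally
# larger chance of being chosen to validate the next block.
stakes = {
    "validator_a": 32.0,
    "validator_b": 120.0,
    "validator_c": 8.0,
}

def select_validator(stake_table: dict[str, float], rng: random.Random) -> str:
    """Pick one validator, with probability proportional to its share of the total stake."""
    total = sum(stake_table.values())
    ticket = rng.uniform(0, total)
    running = 0.0
    for validator, stake in stake_table.items():
        running += stake
        if ticket <= running:
            return validator
    return validator  # floating-point edge case: fall back to the last validator

rng = random.Random(42)  # seeded only so the example is reproducible
print(select_validator(stakes, rng))  # validator_b is chosen most often, having staked the most
```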
To encourage people to help keep the system running, those who are selected to be validators are given cryptocurrency as a reward for verifying transactions. This process is popularly known as mining and has not only helped remove central entities like banks from the equation, but it has also allowed DeFi to open up opportunities that, in traditional finance, are only offered to large organizations, for members of the network to make a profit. And by using network validators, DeFi has also been able to cut down the costs that intermediaries charge so that management fees don’t eat away a significant part of investors’ returns.
Clever contracts In decentralized finance, all transactions are run by smart contracts: programs, or pieces of code, stored on the blockchain that are only enacted when certain preconditions are met.
For instance, a smart contract regarding the purchase of a nonfungible token (NFT) such as the popular ‘Bored Ape’ would be automatically triggered once the buyer has paid the seller. And, if the agreement is broken or the seller blocks the transfer of the NFT, the smart contract would determine that there has been a breach of contract and it wouldn’t complete the transaction.
As a result, decentralized finance has no need for a central or independent third-party to review that the contracts are fulfilled and, if there has been a breach, to determine where the issue originated and how the non-compliant party should compensate the victim.
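The logic a smart contract encodes is simple enough to sketch outside the blockchain. The Python example below mimics the precondition from the scenario above: the token only transfers once payment has been recorded, and the sale reverts otherwise. The addresses, price and token ID are invented, and a real contract would be written in a language such as Solidity and enforced by the network rather than by application code.

```python
from dataclasses import dataclass

# Plain-Python sketch of the escrow logic a smart contract encodes on-chain: the NFT
# only moves once payment is recorded, and nothing moves at all if either side fails.
@dataclass
class NftSale:
    token_id: int
    seller: str
    buyer: str
    price_eth: float
    paid: bool = False
    transferred: bool = False

    def record_payment(self, amount_eth: float) -> None:
        if amount_eth >= self.price_eth:
            self.paid = True

    def settle(self) -> str:
        # Precondition enforced automatically: no payment, no transfer.
        if not self.paid:
            return "reverted: payment precondition not met"
        self.transferred = True
        return f"token {self.token_id} transferred from {self.seller} to {self.buyer}"

sale = NftSale(token_id=1234, seller="0xseller", buyer="0xbuyer", price_eth=80.0)
print(sale.settle())          # reverted: payment precondition not met
sale.record_payment(80.0)
print(sale.settle())          # token 1234 transferred from 0xseller to 0xbuyer
```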
Challenging traditional finance with decentralized finance From this explanation, it may seem that, by removing intermediaries, DeFi can only offer users a few pence in savings on transaction costs. However, the reality is much more impressive — especially when compared to traditional finance — which is why it’s worth looking at how a bank and the stock market work, and why the way the DeFi system is built gives the upper hand to users.
When money is deposited in a traditional savings account, the bank invests the assets in its diverse portfolio of holdings to generate profits which can range between 10% and 20%. However, running a bank is expensive. By the time the bank has covered all its costs (and taken its own share), there is not much left for customers, who usually receive returns of around 0.06% per year on their standard savings accounts. And, with inflation having just hit a 30-year high in the U.K., saving money in the bank is a highly effective method of becoming poorer in real terms.
Stocks and shares are not much better. While professional traders make investments with average returns of around 10% per year, the reality is that the returns for normal people are much lower, with the U.S. Securities and Exchange Commission estimating that 70% of day traders lose money each quarter. Meanwhile, risk-averse investors who are focused on the so-called safe investment options may not lose money but, to pay for this ‘safety’, their returns will tend to be low so, if they’re lucky, they will only just outstrip inflation.
This is where the peer-to-peer nature of the blockchain comes into its own. In collateralized lending, for instance, smart contracts usually require borrowers to deposit 150% of the value of the loan and automatically enforce the terms of the agreement, removing the risks of non-payment. And, as the entire process is run by computer code, there are no additional or hidden charges which means that most, if not all, the returns go to the lender.
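The arithmetic behind that 150% rule is easy to show directly. In the Python sketch below the figures are illustrative; real lending protocols set their own collateral ratios and liquidation mechanics.

```python
# The 150% rule from the paragraph above, expressed as arithmetic.
MIN_COLLATERAL_RATIO = 1.5  # deposit worth 150% of the loan's value

def can_borrow(loan_value_usd: float, collateral_value_usd: float) -> bool:
    """A loan is only issued if the deposited collateral meets the minimum ratio."""
    return collateral_value_usd >= MIN_COLLATERAL_RATIO * loan_value_usd

def liquidation_needed(loan_value_usd: float, collateral_value_usd: float) -> bool:
    """If collateral value slips below the ratio, the contract can liquidate it
    automatically, which is what removes the risk of non-payment for the lender."""
    return collateral_value_usd < MIN_COLLATERAL_RATIO * loan_value_usd

print(can_borrow(10_000, 15_000))          # True: exactly 150% collateral
print(liquidation_needed(10_000, 13_500))  # True: collateral has fallen to 135%
```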
Another excellent example of how removing intermediaries has allowed decentralized finance to offer higher returns to investors is liquidity mining, in which consumers receive yields from placing their assets in a decentralized lending pool. Like in traditional finance, the returns will depend on the risk of the investment, with newer and riskier coins paying exceptionally high yields while trusted tokens, such as Bitcoin, stablecoins and Ether, offer healthy returns of more than 10% per year. Regardless of the investor profile, the share of the revenue that goes to the user is likely to be significantly higher than in traditional finance, as many DeFi platforms only require ‘gas’ to cover blockchain transaction fees.
There’s more than higher returns Beyond ensuring that users can access significant returns, using smart contracts and blockchain technology has also enabled decentralized finance to offer an additional level of security and transparency to investors that is not currently available in traditional finance.
Indeed, given that smart contracts have been battle-tested and improved for years, they’re now able to ensure that both parties deliver exactly what they’ve promised. And, if the terms of the contract need to be changed or loopholes need to be filled, the unidirectional nature of blockchain prevents changes from being made to the contracts without the support of both parties. This is significantly different from those letters we often receive from banks outlining their updated terms and conditions which we can either accept or deny, as long as we are willing to switch to another provider.
When it comes to security, there’s more to it than ensuring that contracts are followed and that no changes are made without our approval; it’s also about making sure that our assets are safe. As a result, we’re now seeing that many platforms that allow users to access the decentralized markets, such as our own AQRU app, are learning from traditional finance and implementing many of the security solutions that banks use. This has provided reassurance to users that through DeFi they can receive higher returns while also managing risk efficiently.
Conclusion Decentralized finance is an exciting financial ecosystem, which, by utilizing tight security controls, can allow everyday investors to simply generate high yields and generate income on existing holdings. The blockchain’s immutable ledger allows intermediaries to be stripped from financial transactions, greatly improving returns, as the only fees required are for the upkeep of the blockchain itself. Innovation on the blockchain has allowed smart contracts to be used to create impressive financial products, providing a real challenge to traditional financial institutions.
In inflationary times, DeFi is a means of maintaining and generating value without excessive risk or time requirements. We believe that investors should start seriously considering the decentralized markets as part of a diversified investment portfolio – the returns could be just too good to miss.
Philip Blows is the CEO of AQRU plc.
"
|
3,490 | 2,022 |
"NFTs have a royalty problem: Here's the answer | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/nfts-have-a-royalty-problem-heres-the-answer"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community NFTs have a royalty problem: Here’s the answer Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The nonfungible token (NFT) trend is still screaming hot like an overheating car engine, with renowned celebrities , influencers, artists, musicians and creators alike jumping on the bandwagon to capitalize on these new digital assets. From a diverse range of NFTs changing hands for tens of millions of dollars to the ever-expanding list of secondary marketplaces, the NFT ecosystem is understandably turning heads.
But amid all of this, there’s a lingering problem that often goes unnoticed.
NFT proponents claim that the underlying smart contracts are designed to give creators more control and power over their content. They claim that these smart contracts can help artists establish royalties for their NFTs, ensuring that every time the NFT changes hands, the original creator receives their fair share of the revenue.
The NFT royalty structure Even though the concept sounds great on paper, things work out differently in the real world. Pointing out the technical loophole with NFT royalties, Jeff Gluck , CEO of CXIP Labs, notes, “The problem with the current NFT royalty structure is that marketplaces are not designed to be cross-market compatible. The smart contracts are unable to communicate with each other when it comes to royalty triggering events across the ecosystem.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! To delve deeper into Jeff’s perspective, it is critical to acknowledge that most NFT tokens are ERC-721-based. Therefore, when an artist first sells their work to a buyer, they are entitled to receive the proceeds from the initial sale. If the buyer then goes to a secondary market like Rarible to flip the NFT for two, three, five, or ten times the original price, the artist never sees any remuneration in the form of royalties from subsequent sales.
There is no denying that NFTs have immense potential to disintermediate the prevailing publishing model and empower creators simultaneously, especially when considering royalties accrued from these creative works are probably the most powerful incentives driving motivation and commitment.
However, the prevailing problem with today’s NFTs is that the creator’s royalty is tied to the marketplace where the NFT was originally minted.
The creator can’t prevent a buyer from listing their NFT on other, competing marketplaces. As a result, there is a stark possibility that the original creator will miss out on any royalties generated from subsequent sales.
If this wasn’t a serious problem already, consider for a second that buyers often opt for private sales to avoid high transaction fees. In this case, NFTs are directly transferred from the seller’s wallet to the new buyer’s wallet, effectively cutting out the creator and any corresponding marketplace from the entire process.
Universal token standards The NFT ecosystem’s obvious answer for keeping creators incentivized is a universal token standard accepted across marketplaces. This will ensure that the royalties stemming from a body of work will always be shared with the creator, irrespective of the marketplace or blockchain where subsequent transactions unfold.
Still, given the stiff competition between NFT marketplaces and the ongoing battle for dominance playing out between first and third-generation blockchains, establishing a lasting, agreeable standard among these parties will likely prove elusive.
The other, more likely avenue for addressing this royalty mismatch lies in the capabilities of smart contracts. As underlying NFT technology evolves and new novel use cases are unlocked, smart contracts can facilitate newer transaction types, like subscriptions, while also ensuring that royalties from all future sales are passed along to the creator in perpetuity.
There is progress being made on this front, albeit quietly. The ERC-721 standard that laid the foundation of NFTs is undergoing continuous revisions to enable more dynamic standards for paying out royalties.
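To make the goal of those revisions concrete, the Python sketch below shows the marketplace-agnostic question any royalty standard has to answer: given a sale price, who gets paid and how much. The 7.5% rate, addresses and settlement shape are invented for the example and do not correspond to any particular proposal.

```python
from dataclasses import dataclass

# Marketplace-agnostic royalty logic a universal standard would let every venue query:
# given a sale price, return the creator's share so the resale can be settled fairly.
@dataclass
class RoyaltyPolicy:
    creator_address: str
    royalty_bps: int  # basis points: 750 = 7.5%

    def royalty_info(self, sale_price: float) -> tuple[str, float]:
        return self.creator_address, sale_price * self.royalty_bps / 10_000

def settle_secondary_sale(sale_price: float, seller: str, policy: RoyaltyPolicy) -> dict:
    creator, royalty = policy.royalty_info(sale_price)
    return {
        seller: sale_price - royalty,  # seller keeps the remainder
        creator: royalty,              # creator is paid on every resale, on any marketplace
    }

policy = RoyaltyPolicy(creator_address="0xartist", royalty_bps=750)
print(settle_secondary_sale(20_000.0, seller="0xflipper", policy=policy))
# -> {'0xflipper': 18500.0, '0xartist': 1500.0}
```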
The NFT ecosystem is undeniably still at an early stage, despite its near parabolic growth and staggering sums being spent. As such, there is a lot of room for improvement, especially when it comes to platforms and initiatives that focus on implementing solutions that will fairly compensate creators for their contributions in perpetuity.
Sadie Williamson is the founder of Williamson Fintech Consulting.
"
|
3,491 | 2,022 |
"How Google could own healthcare | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/how-google-could-own-healthcare"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community How Google could own healthcare Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
In 2020, Google reminded the world of its effectiveness as a healthcare company. Google acted quickly to help people and organizations manage the impact of the COVID-19 pandemic, even as Google itself needed to operate at a reduced capacity. For example, the company developed the COVID-19 National Response Portal, an open data platform to help communities respond to the pandemic, among many other initiatives.
Google’s response to the pandemic is a microcosm of how the company intends to lead the entire healthcare industry: helping people stay as healthy as possible through wellness care and managing the journey to receive care when needed.
To do that, Google is building a data platform connected to devices.
Managing wellness The first part of Google’s strategy consists of helping people stay healthy through wellness care. This approach is both purposeful and sensible. As our population ages, receiving healthcare is increasingly expensive – healthcare bills are the leading cause of personal bankruptcy. Helping people manage personal fitness is an important way to stay away from the hospital. And wellness is a $4 trillion industry.
Google wants a piece of that.
Google, like Apple, has an advantage in personal healthcare: a data platform tied to devices. For instance, Google Fit is a health-tracking platform for Android, Wear OS and Apple Inc.’s iOS. It is a single set of APIs that blends data from multiple apps and devices. With Google Fit, you can connect any compatible device, such as a Garmin watch, to collect health information in one place.
Hardware, though, is the key to Google’s foray into health. Devices, ranging from Chromebooks to Pixel phones to home devices, provide the means for Google users to manage their data and for Google to monetize it.
Meta, by contrast, lacks a hardware device and finds itself beholden to Google and Apple because of it. Devices will power Google’s future in healthcare, and I believe Google will press its advantage and acquire another device manufacturer as it did many months ago with Fitbit.
Because when you own the devices and the hardware, you can gather even more information and monetize it.
Getting healthcare The second part of Google’s healthcare strategy is to own the patient journey to getting care. And here, Google is the undisputed Big Tech leader. The company has positioned itself as the default resource for people to research symptoms and access care. Google influences every phase of the patient journey, from awareness to consideration. According to research conducted by YouGov and Reputation , Google is the most popular source for searching for a physician or hospital – more popular than provider/physician websites, healthcare-specific sites such as WebMD and Healthgrades, or social media. Google is also the #1 review site used by healthcare consumers.
This is no accident. Google continues to make improvements that make Google Search more valuable for care seekers, evident in the following recent announcements: That it can now show searchers which health insurance networks the provider might accept.
Searchers can also filter providers nearby who accept Medicare. These developments are huge. Our own research shows that insurance acceptance is the most important attribute when people evaluate a physician or provider.
An option for healthcare professionals to let prospective patients know what languages are spoken at their office.
Google currently has more than a dozen languages represented, including Spanish and American Sign Language.
A feature that shows the appointment availability for healthcare providers so that care seekers can more easily book an appointment.
So, if someone is searching for, say, “spine specialists near me,” they’ll find not only names of relevant physicians but also available appointment dates and times for doctors in their area. A “book” button will direct them to a third-party site to make an appointment. This appears to be an extension of the Reserve with Google program that has been available for the past five years. The initial rollout appears to be limited to locations like urgent care clinics and is not a provider scheduler, but that could change in the future. The company is also working with MinuteClinic at CVS and other unnamed appointment schedulers.
Although Google owns the search for care, it has online care competition. Amazon and virtual care provider Teladoc recently announced that they are making it possible for patients to ask for a doctor through Amazon's Alexa voice assistant. Through a voice-activated virtual care program, patients can get non-emergency medical help by telling Alexa (via an Amazon Echo device) that they want to see a doctor. A Teladoc physician will call them back.
Amazon is likely getting more involved in direct medical care and prescription drug services through the Alexa-enabled Echo speaker, which commands the biggest share of the smart speaker market. Such a move would capitalize on Amazon’s strengths of providing efficient service cost-effectively, supported by Amazon Web Services as the cloud backbone.
What’s next To strengthen its own role in healthcare, I expect Google to: Respond to Amazon by leaning into Google Assistant (Google’s own voice assistant) to help people manage healthcare via telehealth.
When the COVID-19 pandemic hit, telehealth services skyrocketed. In April 2020, the number of virtual visits was 78 times higher than two months earlier, accounting for nearly one-third of outpatient visits. After that, telehealth usage tapered off. Even so, throughout 2021 telehealth usage remained 38 times higher than the pre-pandemic baseline, according to McKinsey. Google has the pieces in place through Reserve with Google – a voice-first experience is a natural evolution.
Connect patients to physicians more effectively through Google Business Profiles.
Google Business Profiles are becoming increasingly important for providers and physicians. They are the most important way for any business to be found in local search. And roughly 65% of searches now end within the Google experience itself, without the user ever visiting another website. Google will provide more tools for providers and physicians to make their Profiles function like second websites. Those tools include chat, scheduling and ratings/reviews. Google Business Profiles will become the center of the patient experience – if Google has its way.
From wellness care to finding care, Google intends to find more ways to keep people living in Google’s universe. Google will benefit by attracting more advertisers (because engagement and volume are like gold to online advertisers). And digital health will benefit, too.
Adam Dorfman is Head of Product Growth at Reputation.
"
|
3,492 | 2,022 |
"How digital twins are transforming warehouse performance | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/how-digital-twins-are-transforming-warehouse-performance"
|
How digital twins are transforming warehouse performance
The global Industry 4.0 market was worth $116 billion in 2021 and is predicted to rise to $337 billion by 2028. Many technologies are contributing to the incredible growth of Industry 4.0, but a standout among them is digital twin solutions.
Specifically, digital twins are now being deployed to greatly improve warehouse automation operations with the end goal of increasing efficiency and reducing downtime.
Digital twins can deliver virtual representations of a physical environment — proving extremely helpful to the warehouse industry. With a digital twin, new improvements and efficiencies can be tested virtually, without downtime or rearrangement of physical assets.
Warehouse operations are rapidly growing in complexity. Inventory is more diverse, as the massive expansion of ecommerce has brought an increase in the proliferation of SKUs. Logistics solutions are strained, as customers now expect lightning-fast fulfillment. Technology is more complex, as innovative new automation systems come to market, and managers must analyze the new systems to introduce those that bring the greatest benefit to their warehouse operations.
To win against competitors, smart companies are now building digital twins of their warehouse operations and using them to handle operational complexities and performance improvements.
Visualization and design made easier With digital twins, companies can try out new floor plans and test new workflows virtually. They can introduce new variables and configuration parameters to the virtual model of their operations and assess the impact. Every aspect of operations can be monitored and tweaked, including SKU mix, ordering and shipping and demand spikes. Automation efforts can be tested and their impact reviewed. In short, warehouse performance can be improved far more quickly and cost-effectively than in the past.
Digital twins are also valuable in the design of new automation systems. For a traditional automation system, thousands of hours are invested in configuring and adjusting the software that runs the system. One size does not fit all: for each system, each customer's different materials and processes must be accommodated. But what if it were possible to skip all the coding of these customer specifics? It is possible, with a digital twin of the physical system in which a machine learning algorithm can run experiments. It usually takes up to a year to create and install a system; in that time, machine learning can run its experiments so that the physical system is already optimized on the day it goes live.
Digital twins can also help businesses adapt to changing requirements or other disruptions. How far can you push your operation during peak holiday shopping periods? What happens if customer buying patterns change? How will your system respond to equipment downtime? A warehouse or distribution center empowered with digital twin technology can run hundreds of possible scenarios in minimal time. This provides critical insight into average performance over time and identifies areas where you can bolster your system's ability to handle short- and long-term challenges. It also enables much more timely decisions about when to service, upgrade or replace the system.
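To make scenario testing concrete, the toy Python sketch below runs hundreds of simulated demand scenarios against two hypothetical warehouse configurations and compares their on-time shipping rates. It is a deliberately simplified stand-in for a real digital twin; every parameter (picker counts, demand levels, spike probability) is invented for illustration.

```python
import random
import statistics

def run_scenario(days=30, pickers=40, picks_per_picker=350, base_demand=12000,
                 spike_prob=0.1, spike_multiplier=1.8, seed=None):
    """Simulate one demand scenario and return the fraction of orders shipped on time."""
    rng = random.Random(seed)
    capacity = pickers * picks_per_picker          # daily pick capacity
    backlog, shipped, demanded = 0, 0, 0
    for _ in range(days):
        demand = rng.gauss(base_demand, base_demand * 0.15)
        if rng.random() < spike_prob:              # occasional promotion or holiday spike
            demand *= spike_multiplier
        demand = max(0, int(demand))
        demanded += demand
        workload = backlog + demand
        done = min(workload, capacity)
        shipped += done
        backlog = workload - done
    return shipped / max(demanded, 1)

# Run hundreds of scenarios against two candidate configurations and compare.
results = {
    "current (40 pickers)": [run_scenario(pickers=40, seed=i) for i in range(500)],
    "proposed (48 pickers)": [run_scenario(pickers=48, seed=i) for i in range(500)],
}
for name, scores in results.items():
    print(f"{name}: mean on-time rate {statistics.mean(scores):.1%}, "
          f"worst case {min(scores):.1%}")
```

A real twin would replace this toy demand and capacity model with calibrated models of conveyors, robots and labor, but the workflow of sweeping many scenarios and comparing the resulting distributions is the same.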
Robotic systems go digital The digitization of robotic systems continues to enable next-generation automation within the warehouse space. Synthetic models feed intelligence to the learning algorithms, and the resulting internal datasets are replicated into a digital twin that shows the end user best-case performance scenarios. Augmenting these processes gives robotic systems an optimal path to project completion. Previously, this kind of work meant months of implementation in the warehouse; with digital twin data, it is now reduced to weeks. Increased investment and R&D in machine learning will continue to streamline operations for better efficiency and reduced downtime.
Forward-thinking companies are now using digital twins as a key tool in their digital transformation. Their operations and maintenance teams are gaining valuable new insights from digital twins to make more timely and relevant decisions. They’re synchronizing digital twins with their real-world environments to analyze the performance of their current processes and improve them. With integrated process and asset data, combined with predictive analytics, they’re reaching new levels of productivity and better identifying any looming issues. They’re boosting performance, efficiency, yield and uptime.
There is now an intense focus on digital transformation initiatives and industrial IoT — and digital twins are playing a vital part. They’re greatly improving performance and ROI by enabling far more efficient, rapid and cost-effective analysis and decision-making.
Thomas H. Evans, Ph.D. is CTO of robotics at Honeywell.
"
|
3,493 | 2,022 |
"Getting hyperautomation right | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/getting-hyperautomation-right"
|
Getting hyperautomation right
A lot has changed in the last two years. As the pandemic threw operations across enterprises out of gear, a slew of trends, including distributed (remote) working, have thrust themselves into the limelight. However, in the broader scheme of digital transformation, hyperautomation, which first appeared in the pre-pandemic era as Gartner Research's top strategic technology trend for 2020, continues to be a hot topic in 2022.
And it is so for a good reason.
Hyperautomation is not just about technologies but about combining them to achieve the strategic objectives defined by the organization. Gartner has even redefined hyperautomation as "a business-driven, disciplined approach that organizations use to rapidly identify, vet and automate as many business and IT processes as possible." Moreover, as per Gartner, hyperautomation involves the orchestrated use of multiple technologies, tools or platforms to achieve those goals.
That’s where it differs from other technological trends. Unlike specific technologies, such as robotic process automation (RPA) , for instance, the goals for hyperautomation can vary significantly from enterprise to enterprise. The manner in which an enterprise goes about implementing hyperautomation can also diverge widely from another.
Making it work Since hyperautomation is a broader approach, it comes with its own challenges. And most of these challenges involve establishing clarity on multiple fronts: explicit identification and delineation of strategic goals; identification of use cases and their priorities; assessment of the roles of various technologies; and establishment of a roadmap and an implementation methodology. These challenges are intertwined. A clear vision of the end goal helps.
Let’s take the example of a financial institution that intends to transform its account opening across products and services.
Depending on the key driving factors or the chosen objectives, the vision for the transformed process varies. These goals may be any of the following or a combination thereof: increase the number of account opening applications by x%; reduce abandonments throughout the process by y%; measurably improve the prospect and employee experience; reduce the cycle time by m%; reduce the cost per closure by n%; or launch a 100% touch-free/human-less account opening experience in p months. Having identified these goals, it is critical to establish a roadmap, which includes identifying and acquiring various technologies with good justification and defining a long-term architectural stack. After all, account opening in this case is only the starting point, and the real value of hyperautomation lies in leveraging the stack for multiple processes and applications across the enterprise with speed.
This brings us to the various technological capabilities that combine to make hyperautomation powerful. It is critical to define how they come together to deliver digital account opening in this case. Here is one effective way to piece them together (a simplified code sketch of this flow appears below):
Prospects apply for any account, for any product or service, from a device of their preference, with help from an AI-supported chatbot.
A natural language processing (NLP) engine processes all incoming requests to analyze and classify them based on prospect status (new/existing/premium), product/service, category, geography, et al., and triggers the relevant process.
Intelligent image and document processing captures all the information from uploaded documents and kicks off a fully automated digital customer identification program (CIP) to establish identity authentication/verification, security credentials, financial status and creditworthiness.
Intelligent process automation enables the end-to-end process in real time with straight-through processing (and the flexibility to intervene or route exceptions, if any). It also triggers RPA bots for automated, real-time execution of routine, traditionally manual steps across the process.
At various points in the process, an AI/ML-driven rules engine and RPA automate approvals and other key decisions, including routing, that are traditionally taken by knowledge workers. This frees up their time for value-add tasks that require human judgment, such as complex credit analysis for high-value deals.
All the relevant documents (or media) are auto-processed with content analytics and embedded in the context of the process, with authenticated access across the cycle enabling contextual engagement with customers.
Throughout the process, prospects are kept engaged across the channels of their preference through omnichannel customer communication.
Upon final approval, the welcome kit is generated automatically and delivered to the prospect digitally, while backend integration takes care of account set-up and funding, whenever applicable.
At appropriate times (at the application stage for existing customers, or at closure for new prospects), an AI/ML algorithm presents cross-sell options relevant to the prospect's preferences and profile and triggers the respective automated process if the prospect takes up the offer.
Getting hyperautomation right at the enterprise scale Through the example above, it is easy to see how hyperautomation can make a real impact by leveraging a combination of technologies. However, this is only one example. Enterprises are replete with thousands of applications and processes, ranging from small supporting applications to large, mission-critical processes.
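To return to the account-opening flow above: the minimal Python sketch below traces the same decision path in highly simplified form. Every function name, field and rule here is hypothetical; in a real deployment each stage would be a separate product or service (an NLP classifier, a document-processing engine, a rules engine, RPA bots) orchestrated on a platform.

```python
from dataclasses import dataclass

@dataclass
class Application:
    prospect_status: str      # "new" | "existing" | "premium"
    product: str
    documents: dict           # e.g., {"passport": "...", "payslip": "..."}

# Each stage below is a hypothetical stand-in for a real service.

def classify(app: Application) -> str:
    """NLP-style triage: route by prospect status and product."""
    return f"{app.prospect_status}:{app.product}"

def extract_customer_data(app: Application) -> dict:
    """Document processing: pull the fields needed for the CIP check."""
    return {"id_verified": "passport" in app.documents,
            "income_proof": "payslip" in app.documents}

def decide(profile: dict) -> str:
    """Rules engine: approve, refer to a human, or reject."""
    if profile["id_verified"] and profile["income_proof"]:
        return "approve"
    if profile["id_verified"]:
        return "refer_to_analyst"
    return "reject"

def open_account(app: Application) -> str:
    route = classify(app)
    profile = extract_customer_data(app)
    decision = decide(profile)
    if decision == "approve":
        # An RPA bot would create the account and send the welcome kit here.
        return f"[{route}] account opened, welcome kit sent"
    return f"[{route}] handed off: {decision}"

print(open_account(Application("new", "checking",
                               {"passport": "scan.pdf", "payslip": "doc.pdf"})))
```

The point of the sketch is the shape of the flow: classify, extract, decide, then hand routine execution to automation and exceptions to people.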
That’s why Gartner emphasizes on the “approach” bit. It’s not only about doing it once but achieving this over and over again, for various processes and applications, with speed.
That’s where a digital transformation platform comes in. Let’s consider the following: A set of key technologies form the fulcrum of hyperautomation strategy. This includes low code process automation (combining what is traditionally referred to as business process management – or BPM – with rapid development through low code capability), RPA, business rules management, case management and decision management Another key ingredient in hyperautomation is contextual content services that enable the end-to-end lifecycle management of all forms of content (documents and media across formats) to supply context to transactions and processes All applications and processes involve collaboration and communication in some form, requiring omnichannel customer engagement capability These technologies are further augmented by AI, machine learning (ML) and content analytics to boost speed and intelligence Hyperautomation is only impactful at the enterprise scale with end-to-end automation that is holistic in nature and can be achieved with speed and repeatability. For example, after the account opening is digitalized, are you able to extend it to lending line of business and let your existing customers experience a similar digital interface for their loan needs? While it is possible to do all this by building an architectural stack or appending technologies such as RPA to existing processes, it is time- and risk-intensive, not to mention all the opportunity costs associated with any delays. A lot of times, it may not even yield the desired results to only implement AI or RPA with incremental improvement over existing processes because the broader silos still persist.
A platform approach not only provides a kickstart but also mitigates the long-term risks of technical debt. Additionally, a digital transformation platform with low code capability helps realize the true potential of hyperautomation with speed and across lines of business enterprise-wide, as promised.
Anurag Shah is head of products and solutions for Americas at Newgen Software.
"
|
3,494 | 2,022 |
"Eliminate the data packrat mentality | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/eliminate-the-data-packrat-mentality"
|
Eliminate the data packrat mentality
Millions of companies are hoarding old and unnecessary data. And the data they’re hoarding could be putting their organizations at risk, increasing storage costs and souring their analytics.
If you’ve ever walked into a hoarder’s home, you were likely met by endless piles of seemingly worthless things like newspapers, books, photographs and clothing. To the owner, though, these items are invaluable.
Now apply that same lens to the data on your computer. Could your organization be a data pack rat? Psychology Today says people hoard for two reasons: they feel that they do not have permission to get rid of something or can’t imagine how to live without it. Those reasons can easily be attributed to hoarding multiple versions of the same letter, past reports, or old spreadsheets on your computer.
Most of us hoard data because we don't know what to do with it. Often, we don't even know what's included in data from three, five, or even 10 years ago.
Collecting and storing data over time puts organizations at increased risk during ransomware attacks and creates what IT experts are calling “dark data.” Gartner defines dark data as “the information assets organizations collect, process and store during business activities, but generally fail to use for other purposes.” This data can impede the integrity of data analytics and erode your data security.
Dark files are usually temporary, created by programs, or work-in-progress files abandoned when a job is completed. Some of these files can contain sensitive information.
Potentially missed insights Dark data can sneak into your analytics in the form of files or hidden folders. It can also get misplaced within computer systems and intermingle with the rest of the data. If you unknowingly use the wrong data, you will get misleading results, sabotaging your analysis and leading to missed insights when using business intelligence (BI) tools.
Information hiding in the dark can prevent you from recognizing your organization’s entire data landscape. It can hinder your company from having a real grasp of how much data the organization actually possesses and what that data contains.
These dark files may contain irreplaceable information about your internal processes, which you could use to improve your productivity. Others might include information about customers that you could use to better your customer service. Or it could expose you to liabilities if that personal data is compromised.
If there’s a dark file that contains more up-to-date information or more accurate information than what you’re feeding into your business intelligence tools, you could be making decisions based on false pretenses.
Shine the light on dark data and dark files A data audit or data assessment can help shine a light on dark data and reveal potential threats to your organization. This process scans your entire file system and explores the deepest crevices of your company's data. The assessment can also read the data in unstructured files and reveal their content.
Once uncovered, you can classify that data and decide if you want to delete your dark files or move them into a more suitable location. For example, less important data might be transferred to a more affordable storage solution, or sensitive data can be recorded and placed in a more secure location that is more easily searched and accessed.
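As a rough illustration of what such a scan does, the short Python sketch below walks a directory tree and flags files that have not been touched in years as candidate dark data. The three-year threshold and the folder it scans are arbitrary choices for the example, not recommendations, and a real assessment tool would also inspect file contents and classify sensitivity.

```python
import os
import time
from pathlib import Path

STALE_YEARS = 3                      # illustrative threshold for "dark" data
CUTOFF = time.time() - STALE_YEARS * 365 * 24 * 3600

def audit(root: str):
    """Walk a directory tree and bucket files by how recently they were touched."""
    stale, active, errors = [], [], 0
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            st = path.stat()
        except OSError:
            errors += 1
            continue
        last_touched = max(st.st_mtime, st.st_atime)
        (stale if last_touched < CUTOFF else active).append((path, st.st_size))
    return stale, active, errors

if __name__ == "__main__":
    stale, active, errors = audit(os.path.expanduser("~/Documents"))
    stale_gb = sum(size for _, size in stale) / 1e9
    print(f"{len(stale)} files untouched for {STALE_YEARS}+ years "
          f"({stale_gb:.2f} GB of candidate dark data); "
          f"{len(active)} active files; {errors} unreadable.")
```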
You can identify redundant files, obsolete files and trash data to take control of your information and deliver more value to your organization. Much like using a metal detector at the beach, you’ll find plenty of junk, yet you may find some buried treasure that you could use to improve your organization.
By eliminating the data pack rat mentality, you can avoid a backlog in data management and limit the unstructured data that could be subjected to a ransomware attack, providing a more efficient and secure data environment.
Adrian Knapp is the CEO and founder of Aparavi.
"
|
3,495 | 2,022 |
"Deep learning is bridging the gap between the digital and the real world | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/deep-learning-is-bridging-the-gap-between-the-digital-and-the-real-world"
|
Deep learning is bridging the gap between the digital and the real world
Algorithms have always been at home in the digital world , where they are trained and developed in perfectly simulated environments. The current wave of deep learning facilitates AI’s leap from the digital to the physical world. The applications are endless, from manufacturing to agriculture, but there are still hurdles to overcome.
To traditional AI specialists, deep learning (DL) is old hat. It had its breakthrough in 2012, when Alex Krizhevsky's AlexNet algorithm showed at scale what convolutional neural networks, the hallmark of deep learning technology, could do. It's neural networks that have allowed computers to see, hear and speak. DL is the reason we can talk to our phones and dictate emails to our computers. Yet DL algorithms have always played their part in the safe simulated environment of the digital world. Pioneer AI researchers are working hard to introduce deep learning to our physical, three-dimensional world. Yep, the real world.
Deep learning could do much to improve your business, whether you are a car manufacturer, a chipmaker or a farmer. Although the technology has matured, the leap from the digital to the physical world has proven to be more challenging than many expected. This is why we’ve been talking about smart refrigerators doing our shopping for years, but no one actually has one yet. When algorithms leave their cozy digital nests and have to fend for themselves in three very real and raw dimensions there is more than one challenge to be overcome.
Automating annotation The first problem is accuracy. In the digital world, algorithms can get away with accuracies of around 80%. That doesn't quite cut it in the real world. "If a tomato harvesting robot sees only 80% of all tomatoes, the grower will miss 20% of his turnover," says Albert van Breemen, a Dutch AI researcher who has developed DL algorithms for agriculture and horticulture in The Netherlands. His AI solutions include a robot that cuts leaves of cucumber plants, an asparagus harvesting robot and a model that predicts strawberry harvests. His company is also active in the medical manufacturing world, where his team created a model that optimizes the production of medical isotopes. "My customers are used to 99.9% accuracy and they expect AI to do the same," Van Breemen says. "Every percent of accuracy loss is going to cost them money."

To achieve the desired levels, AI models have to be retrained all the time, which requires a flow of constantly updated data. Data collection is both expensive and time-consuming, as all that data has to be annotated by humans. To solve that challenge, Van Breemen has outfitted each of his robots with functionality that lets it know when it is performing either well or badly. When making mistakes, the robots upload only the specific data where they need to improve. That data is collected automatically across the entire robot fleet. So instead of receiving thousands of images, Van Breemen's team only gets a hundred or so, which are then labeled, tagged and sent back to the robots for retraining. "A few years ago everybody said that data is gold," he says. "Now we see that data is actually a huge haystack hiding a nugget of gold. So the challenge is not just collecting lots of data, but the right kind of data."

His team has developed software that automates retraining on new experiences. Their AI models can now train for new environments on their own, effectively cutting the human out of the loop. They've also found a way to automate the annotation process by training an AI model to do much of the annotation work for them. Van Breemen: "It's somewhat paradoxical because you could argue that a model that can annotate photos is the same model I need for my application. But we train our annotation model with a much smaller data size than our goal model. The annotation model is less accurate and can still make mistakes, but it's good enough to create new data points we can use to automate the annotation process."

The Dutch AI specialist sees huge potential for deep learning in the manufacturing industry, where AI could be used for applications like defect detection and machine optimization. The global smart manufacturing industry is currently valued at $198 billion and has a predicted growth rate of 11% through 2025. The Brainport region around the city of Eindhoven, where Van Breemen's company is headquartered, is teeming with world-class manufacturing corporates such as Philips and ASML. (Van Breemen has worked for both companies in the past.)

The sim-to-real gap A second challenge of applying AI in the real world is the fact that physical environments are much more varied and complex than digital ones. A self-driving car that is trained in the US will not automatically work in Europe with its different traffic rules and signs.
Van Breemen faced this challenge when he had to apply his DL model that cuts cucumber plant leaves to a different grower's greenhouse. "If this took place in the digital world I would just take the same model and train it with the data from the new grower," he says. "But this particular grower operated his greenhouse with LED lighting, which gave all the cucumber images a bluish-purple glow our model didn't recognize. So we had to adapt the model to correct for this real-world deviation. There are all these unexpected things that happen when you take your models out of the digital world and apply them to the real world."

Van Breemen calls this the "sim-to-real gap," the disparity between a predictable and unchanging simulated environment and the unpredictable, ever-changing physical reality. Andrew Ng, the renowned AI researcher from Stanford and cofounder of Google Brain who also seeks to apply deep learning to manufacturing, speaks of "the proof of concept to production gap." It's one of the reasons why 75% of all AI projects in manufacturing fail to launch. According to Ng, paying more attention to cleaning up your data set is one way to solve the problem. The traditional view in AI was to focus on building a good model and let the model deal with noise in the data. However, in manufacturing a data-centric view may be more useful, since the data set size is often small. Improving data will then immediately have an effect on improving the overall accuracy of the model.
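The fleet-level trick described earlier, collecting only the samples where the model is struggling, can be illustrated with a small confidence-based selection routine. This is not Van Breemen's actual software; the stand-in model, thresholds and names below are invented purely to show the shape of the idea.

```python
import random

# Toy stand-in for a deployed model: returns (predicted_label, confidence).
def predict(image_id: str):
    random.seed(image_id)
    return random.choice(["leaf", "stem"]), random.uniform(0.5, 1.0)

def select_for_annotation(image_ids, confidence_threshold=0.7, budget=100):
    """Keep only the samples the model is least sure about, up to a labeling budget."""
    scored = [(image_id, predict(image_id)[1]) for image_id in image_ids]
    uncertain = [image_id for image_id, conf in scored if conf < confidence_threshold]
    uncertain.sort(key=lambda image_id: predict(image_id)[1])   # least confident first
    return uncertain[:budget]

fleet_uploads = [f"robot{r}_frame{f}" for r in range(20) for f in range(500)]
to_label = select_for_annotation(fleet_uploads)
print(f"{len(fleet_uploads)} frames captured, {len(to_label)} sent for human labeling")
```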
Apart from cleaner data, another way to bridge the sim-to-real gap is by using cycleGAN, an image translation technique that connects two different domains, made popular by aging apps like FaceApp. Van Breemen's team researched cycleGAN for its application in manufacturing environments. The team trained a model that optimized the movements of a robotic arm in a simulated environment, where three simulated cameras observed a simulated robotic arm picking up a simulated object. They then developed a DL algorithm based on cycleGAN that translated images from the real world (three real cameras observing a real robotic arm picking up a real object) into simulated images, which could then be used to retrain the simulated model. Van Breemen: "A robotic arm has a lot of moving parts. Normally you would have to program all those movements beforehand. But if you give it a clearly described goal, such as picking up an object, it will now optimize the movements in the simulated world first. Through cycleGAN you can then use that optimization in the real world, which saves a lot of man-hours." Each separate factory using the same AI model to operate a robotic arm would have to train its own cycleGAN to tweak the generic model to suit its own specific real-world parameters.
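CycleGAN itself is too involved to reproduce here, but the underlying idea, nudging images from a new real-world domain back toward the domain the model was trained on, can be shown with a much cruder trick: matching per-channel color statistics. The sketch below uses synthetic arrays in place of real greenhouse images and is only a rough stand-in for learned image translation, not the team's method.

```python
import numpy as np

def channel_stats(images):
    """Per-channel mean and std over a batch of HxWx3 images."""
    stacked = np.stack(images).astype(np.float64)
    return stacked.mean(axis=(0, 1, 2)), stacked.std(axis=(0, 1, 2)) + 1e-8

def match_color_distribution(image, source_stats, target_stats):
    """Shift/scale each color channel so the new domain resembles the training domain."""
    src_mean, src_std = source_stats
    tgt_mean, tgt_std = target_stats
    out = (image.astype(np.float64) - src_mean) / src_std * tgt_std + tgt_mean
    return np.clip(out, 0, 255).astype(np.uint8)

# Synthetic example: training-domain images vs. new images with a blue-purple cast.
rng = np.random.default_rng(0)
training_images = [rng.integers(60, 200, (64, 64, 3), dtype=np.uint8) for _ in range(8)]
led_cast = np.array([0.8, 0.7, 1.3])   # exaggerate the blue channel, as under LED lighting
new_images = [np.clip(img * led_cast, 0, 255).astype(np.uint8) for img in training_images]

corrected = match_color_distribution(new_images[0],
                                      channel_stats(new_images),
                                      channel_stats(training_images))
print("new-domain channel means:", np.stack(new_images).mean(axis=(0, 1, 2)).round(1))
print("corrected channel means: ", corrected.mean(axis=(0, 1)).round(1))
```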
Reinforcement learning The field of deep learning continues to grow and develop. Its new frontier is called reinforcement learning. This is where algorithms change from mere observers to decision-makers, giving robots instructions on how to work more efficiently. Standard DL algorithms are programmed by software engineers to perform a specific task, like moving a robotic arm to fold a box. A reinforcement algorithm could find out there are more efficient ways to fold boxes outside of their preprogrammed range.
It was reinforcement learning (RL) that made an AI system beat the world’s best Go player back in 2016. Now RL is also slowly making its way into manufacturing. The technology isn’t mature enough to be deployed just yet, but according to the experts, this will only be a matter of time.
With the help of RL, Albert Van Breemen envisions optimizing an entire greenhouse. This is done by letting the AI system decide how the plants can grow in the most efficient way for the grower to maximize profit. The optimization process takes place in a simulated environment, where thousands of possible growth scenarios are tried out. The simulation plays around with different growth variables like temperature, humidity, lighting and fertilizer, and then chooses the scenario where the plants grow best. The winning scenario is then translated back to the three-dimensional world of a real greenhouse. "The bottleneck is the sim-to-real gap," Van Breemen explains. "But I really expect those problems to be solved in the next five to ten years."

As a trained psychologist I am fascinated by the transition AI is making from the digital to the physical world. It goes to show how complex our three-dimensional world really is and how much neurological and mechanical skill is needed for simple actions like cutting leaves or folding boxes. This transition is making us more aware of our own internal, brain-operated 'algorithms' that help us navigate the world and which have taken millennia to develop. It'll be interesting to see how AI is going to compete with that. And if AI eventually catches up, I'm sure my smart refrigerator will order champagne to celebrate.
Bert-Jan Woertman is the director of Mikrocentrum.
"
|
3,496 | 2,022 |
"Computational storage and the new direction of computing | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/computational-storage-and-the-new-direction-of-computing"
|
Computational storage and the new direction of computing
The aggravation, the unexpected delays, the lost time, the high costs: people worldwide regularly rank commuting as the worst part of their day, and it is one of the big drivers for work-from-home policies.
Computers feel the same way. Computational storage is part of an emerging trend to make datacenters, edge servers, IoT devices , cars and other digitally-enhanced things more productive and more efficient by moving data less. In computational storage, a full-fledged computing system — complete with DRAM, I/O, application processors, dedicated storage and system software — gets squeezed into the confines of an SSD to manage repetitive, preliminary, and/or data-intensive tasks locally.
Why? Because moving data can soak up inordinate amounts of money, time, energy, and compute resources. “For some applications like compression in the drive, hardware engines consuming less than a watt can achieve the same throughput as over 140 traditional server cores,” said JB Baker, VP of marketing and product management at ScaleFlux.
“That’s 1,500 watts and we can do the same work with a watt.” Unnecessary data circulation is also not good for the environment. A Google-sponsored study from 2018 found that 62.7% of computing energy is consumed by shuttling data between memory, storage and the CPU across a wide range of applications. Computational storage, thus, could cut emissions while improving performance.
And then there's the looming capacity problem.
Cloud workloads and internet traffic grew by 10x and 16x in the past decade and will likely grow at that rate or faster in the coming years as AI-enhanced medical imaging, autonomous robots and other data-heavy applications move from concept to commercial deployment.
Unfortunately, servers, rack space and operating budgets struggle to grow at that same exponential rate. For example, Amsterdam and other cities have applied strict limits on data center size forcing cloud providers and their customers to figure out how to do more within the same footprint.
Consider a traditional two-socket server set-up with 16 drives. An ordinary server might contain 64 computing cores (two processors with 32 cores each). With computational storage, the same server could potentially have 136: 64 server cores and 72 application accelerators tucked into its drives for preliminary tasks. Multiplied over the number of servers per a rack, racks per datacenter, and datacenters per cloud empire, computational drives have the power to boost the potential ROI of millions of square feet of real estate.
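The core idea, doing the cheap and selective work next to the data and moving only the results, can be illustrated with a toy calculation. The sketch below has nothing to do with any particular drive's firmware or API; it simply contrasts how many bytes would cross the interconnect with and without near-data filtering.

```python
import json
import random

# Toy dataset standing in for records that live "on the drive".
random.seed(1)
records = [{"id": i,
            "status": "error" if random.random() < 0.01 else "ok",
            "payload": "x" * 200}
           for i in range(50_000)]

def host_side_filter(rows):
    """Conventional path: ship every record to the host, then filter there."""
    bytes_moved = sum(len(json.dumps(r)) for r in rows)
    matches = [r for r in rows if r["status"] == "error"]
    return matches, bytes_moved

def near_data_filter(rows):
    """Computational-storage path: filter next to the data, ship only the matches."""
    matches = [r for r in rows if r["status"] == "error"]
    bytes_moved = sum(len(json.dumps(r)) for r in matches)
    return matches, bytes_moved

_, host_bytes = host_side_filter(records)
_, near_bytes = near_data_filter(records)
print(f"host-side filtering moves {host_bytes / 1e6:.1f} MB across the interconnect; "
      f"near-data filtering moves {near_bytes / 1e6:.2f} MB "
      f"(about {host_bytes / near_bytes:.0f}x less)")
```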
The fine print So if computational storage is so advantageous, how come it's not pervasive already? The reason is simple — a confluence of advancements, from hardware to software to standards, must come together to make a paradigm shift in processing commercially viable. These factors are all aligning now.
For example, computational storage drives have to fit within the same power and space constraints of regular SSDs and servers. That means the computational element can only consume two to three watts of the 8 watts allotted to a drive in a server.
While some early computational SSDs relied on FPGAs, companies such as NGD Systems and ScaleFlux are adopting system-on-chips (SoCs) built around Arm processors originally developed for smartphones. (An eight-core computational drive SoC might dedicate four cores to managing the drive and the remainder to applications.) SSDs typically already have quite a bit of DRAM — 1GB for every terabyte in a drive. In some cases, the computational unit can use this as a resource. Manufacturers can also add more DRAM.
Additionally, a computational storage drive can support standard cloud-native software stack: Linux OSes, containers built with Kubernetes , or Docker. Databases and machine learning algorithms for image recognition and other applications may also be loaded into the drive.
Standards will also need to be finalized.
The Storage Networking Industry Association (SNIA) last year released its 0.8 specification covering a broad range of issues such as security and configuration, with a full specification anticipated later this year.
Other innovations you should expect to see: more ML acceleration and specialized SoCs, faster interconnects, enhanced on-chip security, better software for analyzing data in real-time, and tools for merging data from distributed networks of drives.
Over time, we could also see the emergence of computational capabilities added to traditional rotating hard drives, still the workhorse of storage in the cloud.
A double-edged edge Some early use cases will occur at the edge, with the computational drive acting in an edge-for-the-edge manner. Microsoft Research and NGD Systems, for instance, found that computational storage drives could dramatically increase the number of image queries that can be performed by directly processing the data on the CSDs — one of the most discussed use cases — and that throughput grows linearly with more drives.
Bandwidth-constrained devices often with low latency requirements such as airplanes or autonomous vehicles are another prime target. Over 8,000 aircraft carrying over 1.2 million people are in the air at any given time. Machine learning for predictive maintenance can be performed efficiently during the flight with computational storage to increase safety and reduce turnaround time.
Cloud providers are also experimenting with computational cloud drives and will soon start to shift to commercial deployment. Besides helping offload tasks from more powerful application processors, computational drives could enhance security by running scans for malware and other threats locally.
The alternative? Some might argue that the solution is obvious: reduce computing workloads! Companies collect far more data than they use anyway.
That approach, however, ignores one of the unfortunate truths about the digital world. We don’t know what data we need until we already have it. The only realistic choice is devising ways to process the massive data onslaught coming our way in an efficient manner. Computational drives will be a critical linchpin in letting us filter through the data without getting bogged down by the details. Insights generated from this data can unlock capabilities and use-cases that can transform entire industries.
Mohamed Awad is vice president of IoT and embedded at Arm.
"
|
3,497 | 2,022 |
"3 lessons learned from building a software startup | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/3-lessons-learned-from-building-a-software-startup"
|
3 lessons learned from building a software startup
About 90% of startups fail, and of that stunning figure, 10% fail within their first year. Which means that for every unicorn, there are a whole lot of gray mules littering the path to startup greatness. Building a company from the ground up, especially while operating in stealth, is a high-wire act that takes nerve and an incredible amount of hard work.
As cofounder and CEO of a startup myself, I’ve experienced firsthand the sometimes grueling, but always gratifying, process of bringing a software startup to market. The lessons we have already learned during that process have proven to be invaluable.
1. Get to product-market fit as though your life depends on it, because it does.
If a startup’s solution is truly innovative and disruptive, the odds that any other company is already doing the same thing are unlikely. Yet it’s estimated that 35% of startups go belly-up due to poor market demand — demonstrating market fit and demand are crucial during the funding process and beyond, especially in the highly competitive software market. Much has already been written about the value and definition of product market fit, but an additive lesson I have learned is that a crucial component of market fit is developing a robust business case to defend the purchase.
This means demonstrating not only how the product will deliver on the promise or needs of the customer, but how they will justify their purchase and fit into their work plan. In a world of skilled worker shortages, the funding or desire for the product may not be enough to create an optimal selling environment. The individuals who need to implement the product will likely require budget justification and the time required to onboard and roll out the solution. So as you consider scaling and timing, understanding and framing for your prospects how your product will fit into their budget commitments and work plan is essential.
Startup founders must ask themselves: Who in the company will be tasked with implementation and day-to-day use? How much of a lift is it — in terms of finances, personnel and time — to implement this solution? Will it disrupt prospects' budgetary cycles? Is the ROI impressive enough that any obstacles to adoption will be worth it? When the product-market fit is there, the answer to the final question will be a resounding yes.
2. Expect to make mistakes, but be prepared to move past them quickly.
A massive challenge for founders is being right too often. A software startup founder might make 100 right decisions in a row, but that pattern may help hide a poor decision on the journey. Being blinded by early success has led to many big issues in numerous leadership teams. Better to recognize a mistake and course-correct quickly than dig in your heels for the sake of being right.
As such, the software startup creation process can be boiled down to a two-step cycle that repeats continuously: validate, then build. This is true for any aspect of a startup; building can refer to your team, your product, your pricing, your marketing strategy, etc. And the ensuing validation can come from peer advisors, design partners, investors or sales prospects.
This validate-then-build strategy is most perfectly reflected in the sprint process that has taken software companies by storm. By committing to new product releases every two weeks rather than quarterly rollouts, organizations can successfully evaluate these releases quickly to fast-track any required updates.
By fluctuating between building and validating, you are constantly improving, innovating and refining — and yes, making mistakes. Startups must be flexible enough to evolve and pivot when needed. This flexibility is crucial, as is the need to move past missteps quickly. The past is the past, and those decisions should not weigh heavily as startups debate new information and receive progressive feedback.
3. You get one chance to come out. Be ready for it.
Research shows poor timing was the final nail in the coffin for 10% of failed startups. Timing really is everything, and sometimes the best decision you can make as a founding team is to stay in stealth mode even amidst market pressure. This requires founders to put pride aside, even if it means forfeiting potentially being first to market. Right-sizing your stealth period allows founders to be incredibly judicious with how they behave, enabling them to bring forth a refined product to the market.
Another value of not automatically coming out of stealth on a predictable, early timeline is that it gives you time to understand your market, message and approach. All startups inevitably have to adjust their messaging during their infancy, but it’s better to do so outside of the public spotlight. A rapidly changing message right out of stealth sends a red-flag signal to prospects and investors that there is lack of clarity in and commitment to a powerful vision.
And in the end, people are interested in mystery. Staying in stealth mode for an extended period builds intrigue that can be incredibly valuable from a public relations and branding perspective.
Software startups can change the world.
As a startup founder, you will inevitably get a lot of advice – some of it great, and some of it less so. But if you have a clear strategy of how you intend to construct your early days, not just the product, but the whole approach to becoming a company, you will be able to easily figure out which advice to heed and which to bypass. When you are guided by a sense that you are doing something special, and when you are hyper-intentional about building the right foundation, you can position your startup for an exciting launch. More importantly, you can increase the likelihood that your young venture will be built to last.
Mike Fey is the CEO and cofounder of Island.
"
|
3498 | 2022 |
"Aerospike embraces new, JSON-ready document model for its database | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/aerospike-embraces-a-new-json-ready-document-model-for-its-database"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Aerospike embraces new, JSON-ready document model for its database Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
This week, Aerospike, a company that provides a real-time data platform built on hybrid-memory storage, announced the sixth edition of its database, which is designed to support large data flows at extremely high speeds. The move brings modern protocols and formats to a core engine built for contemporaneous data processing and storage.
Aerospike’s database gained traction with companies confronting large, continuous data streams from relentless and indefatigable sources like online games, connected devices or online ads. Sony, for example, uses the database to support personalizing experiences for PlayStation consoles. Dream11 manages more than 100 million fantasy sports players in India who track the performance of their teams.
In the past, some engineers have raved about the database, and one called it “an incredible piece of engineering of a database, all optimized towards a single use case: guaranteed single-millisecond reads and writes on a key-value store that’s too expensive to fit in RAM.” The new version maintains that same focus, but expands the storage model to handle more complexity while also improving performance elsewhere with better indexing and more capable backups.
“Our focus is to take that goodness that we have with the database and expand it to a much broader set of use cases,” explained Subbu Iyer, the CEO of Aerospike. “That’s why we focused on really going after building a native document model.” The document model differentiator One of the most prominent features of the new sixth edition is support for the JSON format and what database administrators often call the “document model.” Developers enjoy using the format because JSON is the native data structure for JavaScript, the common language for browser-based applications. They can often send information directly between the database and the users’ browsers with little or no intervention.
Using the same JSON-centric model throughout the stack can remove the need for elaborate rewriting or reformatting, a step that can slow performance and require more developer time. The data flows easily between the user’s browser and the database, which can decrease latency and increase throughput, both important for the product.
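To make that end-to-end document flow concrete, here is a minimal sketch using the official Aerospike Python client (pip install aerospike); the host, namespace, set and bin names below are illustrative placeholders, not details from Aerospike’s announcement:
import aerospike

# Connect to a local Aerospike node (placeholder host and port).
client = aerospike.client({"hosts": [("127.0.0.1", 3000)]}).connect()

# A JSON-style nested document stored as a record in namespace "test", set "profiles".
key = ("test", "profiles", "user:42")
profile = {
    "name": "Ada",
    "preferences": {"theme": "dark", "lang": "en"},
    "recent_games": ["chess", "go"],
}
client.put(key, profile)

# Read it back; nested maps and lists come back intact, ready to serialize as JSON.
_, _, record = client.get(key)
print(record["preferences"]["theme"])  # -> "dark"

client.close()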
The JSON focus is not just for databases relying on document models for storage. Oracle, PostgreSQL and MySQL are just some of the traditional databases that have added the option over the years. Some newer databases like Couchbase were designed around the standard.
Not all of these options may scale as easily or support the same low latency. In its marketing material, the company calls itself the “first and only real-time data platform to support JSON document data models that can deliver sub-millisecond performance at the gigabyte-to-petabyte scale.” The company regularly compares its product favorably to other lightweight data storage solutions like Redis or Cassandra. The most common competition may be clusters of MySQL NDB installations.
“We don’t see ourselves playing in the simple session-store application, which have to stitch and store some JSON documents that come and go. That’s not the place we play,” said Iyer. “But as you scale your document-based workloads and you start kind of getting some level of scale half a terabyte to a terabyte plus, that’s where we see a really good sweet spot from there all the way up to petabytes.” Complicated data made easy The new model also allows developers to create more complicated data structures on the fly, adding new fields wherever they may be useful. This flexibility allows developers to deploy different structures for each object in the database. While this flexibility prompts debate, some companies like MongoDB continue to do very well with it.
“A lot of our customers today are using us for only a couple of different use cases and they obviously have a lot more applications and developers who are building applications with documents as their primary storage model,” explained Iyer. “We’re already talking to customers who are saying, ‘Oh, now I can actually use you in a much broader set of applications’.” The new version also includes several other features to expand support for different roles. The latest version also improves regulatory compliance for federal users: it now supports FIPS 140-2, which opens up the opportunity for the company to support some of the biggest data users.
“We built in the ability to back up and restore to S3,” added Iyer. “A lot of people want to use S3 as an archival storage, so they want to move data away from the active database into S3. We allow for that both ways, which is backup and restore.” The company has also improved the speed of the secondary indices so they can perform as quickly as the primary indices. In the past, fast responses were only available to queries that relied upon the primary index.
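As a hedged illustration of what querying on a secondary index looks like with the same Python client, the snippet below builds an index on an integer bin and filters on it; the namespace, set, bin and index names are again placeholders:
import aerospike
from aerospike import predicates as p

client = aerospike.client({"hosts": [("127.0.0.1", 3000)]}).connect()

# One-time setup: a secondary index on the integer bin "score".
client.index_integer_create("test", "profiles", "score", "profiles_score_idx")

# The query is served by the secondary index rather than a full scan.
query = client.query("test", "profiles")
query.where(p.between("score", 90, 100))
for _key, _meta, bins in query.results():
    print(bins)

client.close()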
The new announcement also starts to chart some paths ahead for the company. They want to expand the database to tackle more event-driven data streams for time-series data, often from log files. They also intend to evaluate adding graph functionality to help make smarter decisions about networked data sets.
At the end of the briefing, Bill Odell, Aerospike’s chief marketing officer, pointed out that the new version is ready to run and currently available.
"
|
3499 | 2022 |
"Breaking out of the app store: New monetization opportunities for mobile game developers | VentureBeat"
|
"https://venturebeat.com/business/breaking-out-of-the-app-store-new-monetization-opportunities-for-mobile-game-developers"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Event Breaking out of the app store: New monetization opportunities for mobile game developers Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Studies show that more than 260 million mobile game players are out of reach of the app stores. But with Apple and Epic’s lawsuit impacting the mobile game industry and monetization in a big way, new opportunities to monetize outside the app store are opening up.
To talk about how mobile developers can capitalize on these new avenues, GamesBeat Summit gave the floor to Anthony Mendoza, Director of Business Development, Global, and Hannah Zhang, Business Development Manager, USA, both at Xsolla.
Mendoza opened up by talking about three major components to this new freedom from the app store. First is that now players can use online payment methods to purchase in-game items and currency, and top up their accounts. The second key opportunity is that developers can now market directly to players, in order to drive traffic to a branded web shop for purchases. The third major component is the branded web shop itself, which represents a new way for developers to own the entire user experience. And it looks like it may end up being a global trend.
“Apple may eventually lift the restrictions and allow developers to offer alternative payment options for their users in more territories,” Zhang said. “Developers are more aware of their options around contacting their users to encourage them to pay directly and avoid Apple and Google’s platform fees.” Reaching 260 million new players The lift in restrictions also means developers now have a way to reach new, untapped markets — those 260 million players who can’t use the app stores because they have no way to pay for their purchases (credit cards are not the standard form of payment in their countries). When game developers can offer alternative, local methods of payment, through companies like Xsolla, they can increase conversion and drive new revenue, and create a worldwide community.
Xsolla offers a wide range of local and alternative payment methods around the world — 99-plus in the United States (a 10% increase in available payment methods), 300-plus in Europe (+40%), and 200-plus in China and Asia (an increase of 60% in South Korea, about 90% in China).
In Asia, including China and South Korea, Apple mostly offers credit card payment methods for users, but in China, 54% of payment transactions actually happen through methods like Alipay and WeChat Pay. In South Korea, many users rely on local credit card payments like KakaoPay, Toss, and Payco.
In Central and Latin America, Mexico has about 50 million players, and Brazil has about 75 million — and 50% of those players use local credit card brands. Boleto Flash accounts make up about 15% of the market, while an up-and-coming instant payment method called PIX is gaining traction. In the U.S., much of the untapped market comes from things like game store gift cards.
Owning your own community “One of the biggest draws to all of this is being able to own your own community,” Mendoza said, which is huge for marketing purposes.
For example, a game developer or publisher can collect email addresses through confirmed purchases or initiated purchases in the web shop, then target campaigns and ads to those email addresses to maximize profit.
“You own the data. You own your ecosystem. You can have direct marketing and direct engagement to these players that you would not otherwise be able to do on the Apple and Google platforms,” he added. “There’s the big draw with marketing campaigns, being able to communicate with your users and your gamers on a more intimate level.” Beyond the marketing campaigns, the web shop also offers other benefits and value propositions, Zhang added. For instance, Xsolla’s web shop is a white label digital store for developers and publishers to sell subscriptions, in-game items, and currency directly to their users from their own customizable branded website. The storefront becomes a part of the community because it offers a way to distribute games as well as update features, news, and leaderboards.
Independent webstores also get around Apple’s $99.99 restriction on the price of user packages. Xsolla partner Playstudios offers a VIP page to engage whales with exclusive mobile item content. Similarly, Scopely has created a web store outside of their mobile games to sell special bundles and packages to their high-LTV cohort and whales.
“You make your users and your community feel special by having this alternate ecosystem to allow them to talk to one another and purchase things,” Mendoza said. “That’s where you create trust in your community and your gamers. That’s where you’ll create loyalty, and loyalty in the mobile game industry does mean money.” Catch up with the whole session on demand right here.
"
|
3500 | 2022 |
"3 new-collar jobs to check out | VentureBeat"
|
"https://venturebeat.com/business/3-new-collar-jobs-to-check-out"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Jobs 3 new-collar jobs to check out Share on Facebook Share on X Share on LinkedIn Coined in 2016, the term “new collar job” has joined the lexicon along with white collar and blue collar jobs, and has come to mean a role that’s done by someone in the tech industry, and often someone who may not have come in by a traditional educational path — such as cybersecurity analysts, application developers, and cloud computing specialists.
Blue collar roles are on the rise, too: A recent study established that there are 32 million Americans without college degrees, but who have the skills to transition to higher income jobs. To go along with that, Google search interest in the term “learn coding” was 292% higher in January 2022 than it was for the previous 10 years.
If you’re ready to make a career move now, then we have three appealing roles to check out below — as well as lots more to browse over on our Job Board.
Senior Software Engineer (front-end heavy), Indeed Located in Boston, Indeed, the world’s number one job site, is looking for a Senior Software Engineer , who will join the company’s First Conversations Team. This is a working group that enables those crucial initial contacts between employers and job seekers all through the Indeed Platform. Your scope of work will be full stack, but will lean towards front-end engineering. Technical skill and experience with modern Javascript frameworks are ideally what Indeed is looking for.
This is a company on the up, actively recruiting and rewarding its employees: to join, you’ll need a Bachelor’s degree or above in computer science, electrical engineering, computer engineering, mathematics, or an equivalent field, as well as five-plus years of experience programming with at least one of the following languages: JavaScript, Typescript, Java, or Python. Experience building applications and new features using modern Javascript Frameworks (React, Angular, and Vue for example), Typescript, GraphQL, and Java-based API end-points is also a must.
If that sounds like a fit for your skill set, more information on the Senior Software Engineer role is available and lots more open roles at Indeed are on our Jobs Board.
Senior Software Engineer — SysOps, Intuit Specializing in financial software, Intuit ranks number 11 on Fortune’s list of 100 great places to work, recognizing it as a company focused on creating a great space for all of its employees. If that sounds good to you, then you may be interested in the Senior Software Engineer — SysOps role , available at the company’s Texas city location.
Around 70-85% of your time will be spent coding in this role, and you’ll also be gathering functional requirements, developing technical specifications, and doing project and test planning. Designing / developing web, software, mobile apps, prototypes, or proofs of concepts will be key, as will acting in a technical leadership capacity by mentoring junior engineers, new team members, and applying technical expertise to challenging programming and design problems.
You’ll be expected to have more than eight years of experience developing web applications and web services and you’ll have a BS/MS in computer science, its equivalent, or equivalent work experience. Knowledge of Javascript ES6, ESLint, HTML5, CSS3, Webpack, Babel and SASS is a must.
More information on the Senior Software Engineer — SysOps role is available, and to discover Intuit’s other open positions, visit our Job Board.
Senior Camera and Imaging Architect, NVIDIA NVIDIA designs and sells graphics processing units (GPUs) for gaming, cryptocurrency mining, and professional applications, as well as chip systems for use in vehicles, robotics, and other tools. In 2021, it was rated as the second-best place to work in the U.S. according to a ranking released by Glassdoor.
The company is growing fast in some of the hottest state-of-the-art fields such as deep learning, AI, and autonomous vehicles. If you’re a creative and autonomous architect with a real passion for imaging and ISP, you could be a great fit for the Senior Camera and Imaging Architect role , located in Santa Clara.
You’ll be responsible for working on the development of new architectures and algorithms for NVIDIA’s Image Signal Processor (ISP) and other camera and imaging hardware. You will collaborate with the other architects, ASIC Design Engineers, and Software Engineers to study, analyze, and implement the camera strategy for NVIDIA’s Tegra products.
To apply, you’ll need a Master’s or PhD in computer science, computer architecture, digital signal processing, mathematics, or related discipline (or equivalent experience), ideally focused on image processing as well as eight-plus years of experience with camera/imaging architecture and/or algorithms.
Interested? More on the Senior Camera and Imaging Architect role is available , and there are plenty more open roles at NVIDIA on our Job Board too.
Are the above roles not a fit for you? For hundreds of other open positions across all sorts of new-collar roles, check out our Job Board.
"
|
3501 | 2022 |
"Meta AI announces long-term study on human brain and language processing | VentureBeat"
|
"https://venturebeat.com/ai/meta-ai-announces-long-term-study-on-human-brain-and-language-processing"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Meta AI announces long-term study on human brain and language processing Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The human brain has long been, and continues to be, a conundrum — how it developed, how it continues to evolve, its tapped and untapped capabilities.
The same goes for artificial intelligence (AI) and machine learning (ML) models.
And just as the human brain created AI and ML models that grow increasingly sophisticated by the day, these systems are now being applied to study the human brain itself. Specifically, such studies are seeking to enhance the capabilities of AI systems and more closely model them after brain functions so that they can operate in increasingly autonomous ways.
Researchers at Meta AI have embarked on one such initiative. The research arm of Facebook’s parent company today announced a long-term study to better understand how the human brain processes language. Researchers are looking at how the brain and AI language models respond to the same spoken or written sentences.
“We’re trying to compare AI systems to the brain, quite literally,” said Jean-Rémi King, senior research scientist at Meta AI.
Spoken language, he noted, makes humans wholly unique and understanding how the brain works is still a challenge and an ongoing process. The underlying question is: “What makes humans so much more powerful or so much more efficient than these machines? We want to identify not just the similarities, but pinpoint the remaining differences.” Brain imaging and human-level AI Meta AI is working with NeuroSpin (CEA), a Paris-based research center for innovation in brain imaging and the French National Institute for Research in Digital Science (INRIA). The work is part of Meta AI’s broader focus on human-level AI that can learn with little to no human supervision.
By better understanding how the human brain processes language, the researchers hypothesize that they can glean insights that will help guide development of AI that can learn and process speech as efficiently as people do.
“It is becoming increasingly easy to develop and train and use special learning algorithms to perform a wide variety of tasks,” King said. “But these AI systems remain far away from how efficient the human brain is. What’s clear is that there is something missing from these systems to be able to understand and learn language much more efficiently, at least as efficiently as humans do.
This is obviously the million-dollar question.” In deep learning, multiple layers of neural networks work in tandem to learn. This approach has been applied in the Meta AI researchers’ work to highlight when and where perceptual representations of words and sentences are generated in the brain as a volunteer reads or listens to a story.
Over the past two years, researchers have applied deep learning techniques to public neuroimaging datasets culled from images of brain activity in magnetic resonance imaging (MRI) and computerized tomography (CT) scans of volunteers. These were collected and shared by several academic institutions, including Princeton University and the Max Planck Institute for Psycholinguistics.
The team modeled thousands of these brain scans while also applying a magnetoencephalography (MEG) scanner to capture images every millisecond. Working with INRIA, they compared a variety of language models to the brain responses of 345 volunteers that were recorded with functional magnetic resonance imaging (fMRI) as they listened to complex narratives.
The same narratives that were read or presented to human subjects were then presented to AI systems. “We can compare these two sets of data to see when and where they match or mismatch,” King said.
What researchers have found so far Researchers have already pulled out valuable insights. Notably, language models that most closely resemble brain activity are those that best predict the next word from context (such as “on a dark and stormy night…” or “once upon a time…”), King explained. Such prediction based on partially observable inputs is at the core of AI self-supervised learning (SSL).
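As a rough illustration of that next-word objective (using an off-the-shelf GPT-2 model from the Hugging Face transformers library, not Meta AI’s research setup), the following sketch prints a model’s most likely continuations of a prompt:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "On a dark and stormy"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the single next token, as in standard causal LM training.
next_token_probs = logits[0, -1].softmax(dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: p={prob.item():.3f}")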
Still, specific regions of the brain anticipate words and ideas far ahead in time; language models, by contrast, are typically trained to predict only the very next word. They are limited in their ability to anticipate complex ideas, plots and narratives.
“(Humans) systematically predict what is going to come next,” King said. “But it’s not just prediction at a word level, it is at a more abstract level.” In further contrasts, the human brain can learn with a few million sentences and can continuously adapt and store information between its trillions of synapses.
AI language models, meanwhile, are trained on billions of sentences and can contain up to 175 billion parameters, or artificial synapses.
King pointed to the fact that infants are exposed to sentences in the thousands and can understand language quickly. For instance, from just a few examples, children learn that “orange” can refer to both a fruit and a color. But modern AI systems have trouble with this task.
“It is very clear that the AI system of today, no matter how good or impressive they are, are extremely inefficient as well,” King said. While AI models are performing increasingly complex tasks, “it is becoming very clear that in many ways they do not understand things broadly.” To further hone their study, Meta AI researchers and NeuroSpin are now creating an original neuroimaging dataset. This, along with code, deep learning models and research papers will be open sourced to help further discovery in AI and neuroscience fields. “The idea is to provide a series of tools that will be used and capitalized on by our colleagues in academia and other areas,” King said.
By studying long-range forecasting capability more in depth, researchers can help improve modern AI language models, he said. Enhancing algorithms with long-range forecasts can help them become more correlated with the brain.
King emphasized that, “What is clear now is that these systems can be compared to the human brain, which was not the case just a few years ago.” He added that scientific progress requires the bringing together of the disciplines of neuroscience and AI. With time, they will evolve much more closely and collaboratively.
“This exchange between neuroscience and AI is not just a metaphorical exchange with abstract ideas,” King said. “It’s becoming extremely concrete. We’re trying to understand what are the architectures, what are the learning principles in the brain? And we’re trying to implement these architectures and these principles into our models.”
"
|
3502 | 2022 |
"How AI and video are redefining talent recruitment | VentureBeat"
|
"https://venturebeat.com/ai/how-ai-and-video-are-redefining-talent-recruitment"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How AI and video are redefining talent recruitment Share on Facebook Share on X Share on LinkedIn Hiring or not? Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
While there’s a vast amount of data available for HR and talent analytics today, most organizations are still not reaping the benefits of their analytics investments. Gartner reports that just 21% of HR leaders use data to “shape talent acquisition and recruiting strategies, improve employee engagement and inform other business decisions.” As the report notes, more data doesn’t necessarily mean more action.
However, Myinterview — an Israel-based company that aims to enable hiring managers to leverage video for pre-screening candidates at scale — says it has developed a platform that captures, plays and interprets videos generated from candidates who respond to pre-determined questions.
Myinterview was cofounded in 2016 by Benjamin Gillman, the CEO, and Guy Abelsohn, the CPO, who both believe the traditional CV is fast becoming obsolete. The company uses artificial intelligence (AI) and machine learning (ML) models to ensure fast and effective hiring processes for HR teams across the enterprise and mass market.
Gillman told VentureBeat that Myinterview helps employers to save up to 70% of their time to hire, providing them with the insights they need to hire the best fit for their open roles. He noted that Myinterview’s video interview platform allows employers and jobseekers to meet much sooner in the hiring funnel. Myinterview is keen on empowering organizations to meet people and not resumés.
Fast-tracking the recruitment funnel with AI Gillman said most hiring funnels today are drawn-out, uncompetitive processes. By contrast, he noted, Myinterview’s mission is to create short, efficient hiring funnels because it’s better for everyone.
The company uses ML techniques for interpreting video, as well as for video capture, compression and delivery across varying internet connection quality. Gillman said the AI component of Myinterview’s platform extracts information from videos by using these two models: An NLP/text analysis model that identifies keywords and phrases that align with various personality leanings.
An ML-trained database of tens of thousands of interviews, with each interview rated by a team of four different behavioral psychologists based on specific criteria.
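Myinterview has not published its models, but as a purely illustrative sketch of the kind of keyword and phrase extraction such text analysis could build on, the snippet below scores terms in short transcripts with TF-IDF using scikit-learn; the transcripts are made up:
from sklearn.feature_extraction.text import TfidfVectorizer

transcripts = [
    "I love collaborating with cross-functional teams and mentoring junior colleagues.",
    "I prefer working independently and diving deep into difficult data problems.",
]

vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
tfidf = vectorizer.fit_transform(transcripts)
terms = vectorizer.get_feature_names_out()

# Print the three highest-weighted terms for each transcript.
for row, text in zip(tfidf.toarray(), transcripts):
    top_terms = [term for _, term in sorted(zip(row, terms), reverse=True)[:3]]
    print(text[:45], "->", top_terms)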
Gillman noted that it’s essential to use these models because there’s an enormous volume of data along the hiring funnel that organizations often miss by focusing only on candidates’ CVs. There’s more to the candidate than the CV, and our platform helps recruiters gather data that they would ordinarily not get on paper, he said.
He noted that companies can get a lot more data from someone speaking casually in a video than from texts and other data forms. According to Gillman, videos enable better expression, rather than standard items listed on a CV.
“With our platform, jobseekers are able to apply for roles in record time. The platform also allows employers to get answers to candidates much quicker. More people are considered and less time is wasted. Recruiters also have more time to build relationships and are safe from administrative bottlenecks with follow-ups and no-shows,” said Gillman.
Myinterview analyzes each video interview to check for soft skills, personality traits and keywords that enable organizations to select the candidates that align with their culture, vibe and goals — all while reducing the risk of bias or human sentiment. The company’s training data currently sits around 30,000 curated interviews, according to Gillman.
Competitive edge Gillman said one of Myinterview’s advantages over other players in the industry is in its position to serve not only enterprises, but also the mass market, such as large online job boards.
Other differentiators, according to Gillman, include the company’s proprietary filtering and search model, collaboration tools and one of the largest integration marketplaces. Myinterview’s automated shortlisting capabilities enable recruiters to uncover talents that otherwise could have been overlooked — a feature which Gillman said no one currently offers in the market.
Software comparison and product review platform G2 shows MyInterview currently has strong competition in Spark Hire, Jobvite, TestGorilla, CodeSignal and Workable.
Business model and expectations Myinterview is a subscription-based business-to-business (B2B) software-as-a-service (SaaS) platform, with plans ranging from free to $329 per month. Plans vary by interview volume and feature availability. Myinterview’s customers include Marriott Hotels, Meta, Billable, Agoda, McDonald’s and more.
Since inception, Myinterview has raised a total of $6.2 million from investors like Aleph, Entrée Capital and SeedIL, among others. Having just welcomed a new COO in Amalia Bercot, who founded SendinBlue, Myinterview is looking to further build its team as the company continues to integrate with other solutions across the industry.
“Myinterview had a 120% year-over-year growth last year and is on track to grow even further this year,” Gillman said.
"
|
3503 | 2022 |
"Meta AI’s open-source system attempts to right gender bias in Wikipedia biographies | VentureBeat"
|
"https://venturebeat.com/technology/meta-ais-open-source-system-attempts-to-right-gender-bias-in-wikipedia-biographies"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Meta AI’s open-source system attempts to right gender bias in Wikipedia biographies Share on Facebook Share on X Share on LinkedIn female Model / painting with Lights Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
By this point, it’s become reflexive: When searching for something on Google, Wikipedia is the de facto go-to first page. The website is consistently among the top 10 most-visited websites in the world.
Yet, not all changemakers and historical figures are equally represented on the dominant web encyclopedia. Just 20% of Wikipedia biographies are about women. That percentage goes down even more when it comes to women from intersectional groups — those in male dominated industries like sciences, for example, or from historically underrepresented ethnic backgrounds.
This is indicative of the fact that “there’s a lot of societal bias on the internet in general,” said Meta AI researcher Angela Fan, who set out to explore this imbalance for her Ph.D. project as a computer science student at the Université de Lorraine, CNRS, in France. “AI models don’t cover everyone in the world equally.” In addressing this, Fan teamed with her Ph.D. advisor, author and computer science researcher Claire Gardent, to build an open source AI system that sources and writes first drafts of Wikipedia-style biographies. Today, they released their findings and methodologies in the paper, “Generating Full-Length Wikipedia Biographies: The Impact of Gender Bias on the Retrieval-Based Generation of Women Biographies.” Meta AI has also open-sourced the model and corresponding dataset. These relate not only to women broadly, but to women in science and women located in Asia and Africa. The hope, Fan said, is that the open, reproducible science can complement existing efforts and provide a starting point for researchers to bring more representation to the web.
NLP battles gender bias As Fan pointed out, the natural language processing (NLP) community has focused on combating gender bias in co-reference resolution, dialogue, detection of abusive language, machine translation and word embeddings. These studies have presented a variety of strategies, including data augmentation, additional data collection efforts, modified generation and fair evaluation.
In the case of Wikipedia, while efforts by such groups as the Wikimedia Foundation, WikiProject Women, and Women in Red – a Wikipedia editor community – have focused on de-biasing existing content, they haven’t addressed systemic challenges around the initial gathering of content and the factors that introduce bias in the first place, Fan said.
Meanwhile, factuality is one of the major problems in text generation and NLP. The process raises three key challenges, Fan said: How to gather relevant evidence, how to structure that information into well-formed text, and how to ensure that the generated text is factually correct.
The study’s model and dataset uses AI to generate full biographies, instead of focusing on fixing or adding bits and pieces of content to existing profiles. The model writes a full biography by first predicting text around an intro paragraph, then the subject’s early life, then their career. Each section follows three steps: a retrieval module that selects relevant information from the web to write each section; a generation module to write the next section’s text and predict which section to write next; and a citation module that lists relative citations.
Fan and Gardent’s query consisted of three parts: The name of the person for which the biography is generated; their occupation(s), and a section heading. They curated a dataset of 1,500 biographies about women, then analyzed that generated text to understand how differences in available web evidence data affect generation. They evaluated the factuality, fluency, and quality of generated texts using both automatic metrics and human evaluation looking at content and factuality.
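The released code implements this far more elaborately, but a schematic sketch of the section-by-section loop the paper describes might look like the following; retrieve_evidence, write_section and attach_citations are hypothetical placeholders rather than Meta AI’s actual functions:
from typing import List, Tuple

def retrieve_evidence(query: str) -> List[str]:
    # Placeholder for the retrieval module, which selects relevant web evidence.
    return [f"evidence snippet for: {query}"]

def write_section(heading: str, evidence: List[str]) -> Tuple[str, str]:
    # Placeholder for the generation module: drafts the section and predicts the next heading.
    draft = f"[{heading}] drafted from {len(evidence)} evidence snippet(s)."
    next_heading = {"Introduction": "Early life", "Early life": "Career"}.get(heading, "")
    return draft, next_heading

def attach_citations(draft: str, evidence: List[str]) -> str:
    # Placeholder for the citation module, which lists the sources used.
    return draft + " " + " ".join(f"[{i + 1}]" for i in range(len(evidence)))

def generate_biography(name: str, occupation: str) -> str:
    heading, sections = "Introduction", []
    while heading:
        query = f"{name} {occupation} {heading}"  # name + occupation(s) + section heading
        evidence = retrieve_evidence(query)
        draft, heading = write_section(heading, evidence)
        sections.append(attach_citations(draft, evidence))
    return "\n".join(sections)

print(generate_biography("Jane Doe", "astrophysicist"))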
The limitations of AI As Fan explained, existing AI can write individual sentences fairly well, but producing full, grammatically correct paragraphs can be difficult, and producing an entire long-form document or article is even more difficult.
“The key challenge is generating long text,” said Gardent, who authored the book, “Deep Learning Approaches to Text Production,” and is affiliated with the Lorraine Research Laboratory in Computer Science, the French National Centre for Scientific Research, and the University of Lorraine. “That sounds very natural. But if you look at it in detail, it’s full of contradictions and redundancies, and factually it can be very wrong.” This is because there often aren’t enough secondary sources to fact-check against. Concurrent with that are challenges with multilingual NLP. Wikipedia supports 309 languages, but English is dominant, followed by French and German. From there, it significantly drops off because many languages – such as those spoken in Africa – are low-resource. “It’s important to measure not just the representation of one group, but how that interacts with other groups,” Fan said.
The goal is to have “language agnostic representation,” Gardent agreed. If numerous languages can be processed, they can be used to derive maximum information.
In tackling factuality, the study also used what’s known as Natural Language Entailment, a high-level quantification proxy. If two sentences entail each other in both directions, then they are semantically equivalent, Fan explained.
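As a hedged sketch of that idea (not the study’s exact metric), the snippet below uses an off-the-shelf natural language inference model from Hugging Face to test whether two sentences entail each other in both directions:
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

def entails(premise: str, hypothesis: str) -> bool:
    inputs = tokenizer(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return model.config.id2label[int(logits.argmax())].upper() == "ENTAILMENT"

a = "She was born in Nairobi in 1954."
b = "Nairobi, 1954, is where she was born."
print(entails(a, b) and entails(b, a))  # bidirectional entailment ~ semantic equivalence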
Ultimately, she emphasized that the model and dataset are just one small step in the process of righting long-standing, inherent bias.
“Our model addresses just one piece of a multifaceted problem,” Fan said, “so there are additional areas where new techniques should be explored.”
"
|
3504 | 2022 |
"Innodisk debuts industrial-grade DDR5 memory | VentureBeat"
|
"https://venturebeat.com/technology/innodisk-debuts-industrial-grade-ddr5-memory"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Innodisk debuts industrial-grade DDR5 memory Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Innodisk has announced its industrial-grade DDR5 memory for workstations. The new memory promises to push speed, capacity and reliability forward in professional environments.
Innodisk touts that it is enabling the early adoption of DDR5 for workstations, bringing server-grade performance to PC form factors. DDR5 brings new features such as double bank groups, same bank refresh, on-die ECC (error correction code) and dual subchannels. This enables higher bandwidth and capacity, lower power consumption, and improved data protection due to the on-die ECC. The first CPU with DDR5 support was Intel’s 12th Gen Alder Lake, which launched in late 2021. More CPUs, including for the datacenter, are expected to launch in 2022.
Innodisk said that its new product, a 4800MT/s 32GB DDR5 UDIMM, is already making its way to the market, citing a design win with a company that designs workstations for engineering and other related applications. Nevertheless, the memory could also be used for other applications, such as server and embedded applications in surveillance and healthcare.
Innodisk’s full DDR5 portfolio includes form factors such as UDIMM, SODIMM and DRIMM, available with or without ECC. Innodisk further says that it is sampling a wide-temperature UDIMM that reduces the minimum operating temperature from 0°C to -40°C. All DRAM modules are available in 16GB to 32GB capacities.
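For context, a back-of-the-envelope calculation of the peak theoretical bandwidth of a single DDR5-4800 module (4,800 million transfers per second across a 64-bit data bus) looks like this:
transfers_per_second = 4800 * 10**6   # DDR5-4800: 4,800 MT/s
bus_width_bytes = 8                   # 64-bit data bus per DIMM (two 32-bit subchannels)
peak_gb_per_s = transfers_per_second * bus_width_bytes / 1e9
print(f"Peak theoretical bandwidth: {peak_gb_per_s:.1f} GB/s")  # ~38.4 GB/s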
"
|
3505 | 2022 |
"How the C12-CEA partnership can help drive quantum computing | VentureBeat"
|
"https://venturebeat.com/technology/how-the-c12-cea-partnership-can-help-drive-quantum-computing"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How the C12-CEA partnership can help drive quantum computing Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
French research institute CEA and startup C12 Quantum Electronics have announced a partnership to produce multi-qubit chips at wafer scale using carbon nanotubes. C12 and CEA also claim to be the world’s first to manufacture components to control qubits using standard manufacturing processes.
CEA says it manufactures quantum chips on 200mm silicon wafers by relying on standard CMOS processes, while C12 is pursuing carbon nanotubes as a novel approach to build qubits, which are the fundamental building blocks of quantum computers. Combining these two approaches would result in a scalable and ultra-coherent platform for quantum computing.
Coherency refers to the lifetime of a qubit: the amount of time it can retain its information before the quantum state is destroyed by noise. A full prototype is expected in 2024.
Addressing qubits using standard processes Additionally, CEA and C12 claim to have demonstrated for the first time ever the capability to manufacture in volume core components to calibrate, control and read qubits using standard (presumably CMOS) processes. The nanotubes are assembled mechanically by C12 onto the semiconductor chip that has been fabricated by CEA. This allows C12 to design electronic circuits with almost arbitrary complexity, while protecting the qubit from contamination until the last manufacturing step, when the qubits and the silicon chip are combined.
In this approach, the ultra-pure carbon nanotube serves as the qubit, while the silicon chip is used as a quantum communication bus. The design also keeps the qubits isolated. The carbon atoms are isotopically purified (meaning that all atoms have the same number of neutrons in the nucleus), which would minimize decoherence by reducing noise.
“This partnership is a key milestone for our company to transfer an academic fab process to an industrial-grade semiconductor fab process, which was a major challenge,” said Pierre Desjardins, CEO and co-founder of C12. “Thanks to CEA-Leti, we will benefit from better quality and higher volume as well as will prepare for industrialization of our devices.” Finally, the two companies announced that they have started to manufacture chips for C12’s “quantum accelerators,” which are meant for integration into classical supercomputers. This represents the startup’s first product milestone.
C12 closed a $10 million seed round in June 2021.
More on quantum computing Quantum computing represents a fundamentally new way to do computation, as it makes use of quantum mechanical properties such as entanglement and superposition. Whereas Moore’s Law is based on periodically doubling the number of transistors on a chip, in quantum computing the theoretical capabilities grow exponentially with each new qubit.
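A quick way to see that exponential scaling is to count the amplitudes needed to describe an n-qubit state, which is also why classically simulating even modest qubit counts becomes expensive:
import numpy as np

for n in (10, 20, 30):
    amplitudes = 2 ** n                                  # an n-qubit state has 2**n amplitudes
    mem_gb = amplitudes * np.complex128().nbytes / 1e9   # 16 bytes per complex128 amplitude
    print(f"{n:2d} qubits -> {amplitudes:>13,} amplitudes (~{mem_gb:.2f} GB as complex128)")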
Research has been progressing over the last decade across multiple companies and startups. Various techniques have been invented to make qubits. One of the earliest technologies that saw traction was the superconducting qubit.
In contrast, C12’s qubits are created by trapping a single electron in a quantum nanotube. Gate electrodes are used to form a double quantum dot within the nanotube, and to entangle the electronic spin with the double quantum dot. The spin qubit is then addressed through a resonator via microwave pulses. Multi-qubit gates are performed via spin-spin coupling between the qubits.
Besides CEA, the main other company working on a CMOS-based quantum computer is Intel, which uses its 300mm fabs compared to CEA’s 200mm wafers. Intel is working on both silicon and silicon-germanium spin qubits, instead of carbon nanotubes, where a single electron is trapped inside a structure that resembles a classical transistor.
Quantum applications It is generally believed that it will require thousands or even millions of qubits for quantum computing to gain relevance for practical applications, which could go from chemistry to cryptography, materials science and optimization problems. With the industry currently stuck at around 100 qubits or even less, this means quantum computing still has some way to go before it could become a mainstream technology.
Nevertheless, some are already jumping on the bandwagon by offering the technology to the wider public for early adoption, for instance through simulators or cloud access. One such example is Azure Quantum.
"
|
3506 | 2022 |
"Why remote browser isolation is core to zero-trust security | VentureBeat"
|
"https://venturebeat.com/security/why-remote-browser-isolation-is-core-to-zero-trust-security"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Why remote browser isolation is core to zero-trust security Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Providing internet access to users while protecting against web attacks is the most persistent security challenge organizations face. Unfortunately, the web has become cybercriminals’ attack surface of choice. It takes minutes for cybercriminals to create fraudulent landing pages and websites to drive phishing , malware, credential theft and ransomware attacks. In addition, cybercriminals are always sharpening their social engineering skills, making phishing and spoofing attempts difficult to spot.
Web is the attack surface of choice Google’s security team saw a large jump in Chrome browser exploits last year and says the trend continues in 2022. A Google Security blog provides a detailed look at how security teams track exploits and identify zero-day attacks.
The increase is driven by Chrome’s global popularity and Google’s improved visibility into exploitation techniques. In addition, the company is seeing more zero-day exploits in the wild and has set up Project Zero, an internal team, to track attempted zero-day exploits.
Zero-day vulnerabilities are those not known to the public or Google at detection. Google’s Project Zero Team recently released their findings of zero-day bugs by technology.
Malware, ransomware and phishing/social engineering attacks grew significantly in 2021 and continue to grow this year. All three attack methods are getting past current antivirus, email security and anti-malware applications. Ransomware will cost victims approximately $265 billion by 2031 , with a new attack occurring on average every two seconds.
Cybersecurity Ventures finds that cybercriminals are progressively refining their ransom demands and extortion techniques, contributing to a predicted 30% year-over-year growth in damage costs through 2031.
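For scale, the sketch below simply compounds an assumed 2021 baseline of roughly $20 billion in annual ransomware damages (a figure Cybersecurity Ventures has cited elsewhere; treat both the baseline and the flat 30% rate as assumptions) to show how the projection above hangs together.

```python
# Rough sanity check on the projection above.
# Assumption: ~$20B in annual ransomware damages in 2021 as the baseline.
baseline_2021_billion = 20.0
growth_rate = 0.30  # assumed 30% year-over-year growth in damage costs

damages = baseline_2021_billion
for year in range(2022, 2032):
    damages *= 1 + growth_rate
    print(f"{year}: ~${damages:,.0f}B")

# Compounding 30% for a decade multiplies the baseline by about 13.8x,
# which lands in the same ballpark as the ~$265B-by-2031 figure cited above.
```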
Phishing attacks continue to grow as cybercriminals look to exploit weak and sometimes nonexistent web access security at the browser level. For example, Proofpoint’s latest State of the Phish found that 15 million phishing messages with malware payloads were directly linked to later-stage ransomware. Hackers rely on Dridex, The Trick, Emotet, Qbot and Bazaloader malware variants most often. Additionally, 86% of organizations surveyed experienced a bulk phishing attack last year, and 77% faced business email compromise (BEC) attacks.
Why CISOs are turning to remote browser isolation for zero trust Reducing the size of the attack surface by isolating every user’s internet activity from enterprise networks and systems is the goal of remote browser isolation (RBI). CISOs tell VentureBeat that the most compelling aspect of RBI is how well it integrates into their zero-trust strategies and complements their security tech stacks. Zero trust looks to eliminate trusted relationships across an enterprise’s tech stack because any trust gap is a major liability. RBI takes a zero-trust approach to browsing by assuming no web content is safe.
When a user accesses a site, the RBI system opens it in a virtual browser running in a remote, isolated container in the cloud, ensuring that only safe rendering data is sent to the browser on the user’s device. The isolated container is destroyed when the active browsing session ends, along with all website content and any malware, ransomware or weaponized downloads from websites or emails. To prevent data loss, policies restrict what users can copy, paste and save through the browser on sites such as social media or cloud storage services. No data from SaaS sites remains in browser caches, so there’s no risk of data loss via the browser if a device is stolen or lost.
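To make the isolation flow concrete, here is a minimal, vendor-neutral sketch of the pattern just described. The class and field names are hypothetical, policy enforcement is elided, and nothing here reflects any particular vendor's API.

```python
# Minimal, vendor-neutral sketch of the remote browser isolation (RBI) flow.
# All names are hypothetical illustrations, not a real product's API.
from dataclasses import dataclass, field

@dataclass
class BrowsingPolicy:
    allow_copy_paste: bool = False       # block clipboard between isolated site and device
    allow_downloads: bool = False        # block weaponized downloads from reaching the endpoint
    read_only_risky_sites: bool = True   # open uncategorized or risky sites read-only

@dataclass
class IsolatedContainer:
    """Disposable cloud container that runs the actual browser session."""
    session_id: str
    cached_content: list = field(default_factory=list)

    def render(self, url: str) -> str:
        # The site loads and executes here, remotely; only safe rendering data goes back.
        self.cached_content.append(url)
        return f"safe-render-stream for {url}"

    def destroy(self) -> None:
        # Ending the session discards all site content, malware included.
        self.cached_content.clear()

def browse(url: str, policy: BrowsingPolicy) -> str:
    # Policy flags would gate clipboard, downloads and read-only rendering;
    # that enforcement is elided in this sketch.
    container = IsolatedContainer(session_id="demo-session")
    try:
        return container.render(url)   # the endpoint only ever receives this stream
    finally:
        container.destroy()            # nothing persists in the endpoint's browser cache

print(browse("https://example.com", BrowsingPolicy()))
```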
Considered a leader in providing a zero-trust-based approach to RBI, Ericom concentrates on maintaining native-quality performance and user experience while hardening security and extending web and cloud application support. For example, its RBI isolates websites opened from email links in the cloud, so malware can’t enter endpoints via browsers, halting phishing attempts. It also identifies and opens risky sites in read-only mode to prevent credential theft.
Additionally, Ericom has developed a unique RBI capability called Virtual Meeting Isolation that seamlessly isolates virtual meetings such as Zoom, Microsoft Teams and Google Meet to prevent malware and the exfiltration of confidential data via the meeting. Ericom’s RBI can also secure endpoints from malware in encrypted sites, even IMs like WhatsApp. Every RBI vendor takes a slightly different approach to delivering secure browsing, with varying user experience, performance and security levels evident across solutions. Additional RBI vendors include Cloudflare, Menlo Security, McAfee, Zscaler, Symantec and others.
CISOs interviewed for this article also told VentureBeat via email that RBI secures endpoints by separating end users’ internet browsing sessions from their endpoints and networks. RBI assumes all websites might contain malicious code and isolates all content away from endpoints so no malware, ransomware or malicious scripts can impact a company’s systems. One CISO says his organization uses four core criteria to evaluate RBI. The first is a seamless user experience, a core requirement for any RBI solution deployed company-wide. The second is how consistently the system delivers that experience. The third is how hardened the security and policy features are. The fourth is how deep the functionality and application support is. These four criteria guide how CISOs select RBI solution providers today.
The future of RBI Web access is necessary for every business to stay competitive and grow, making it the most popular attack surface with hackers and cybercriminals.
As a result, CISOs want zero trust at the browser and session level with no degradation in user experience or performance. RBI’s rapid advances in secured containers, hardened security and a wider variety of functions deliver what CISOs need. The goal is to provide an air gap between a user’s browser sessions and enterprise systems. Leaders in providing RBI systems ensure their solutions complement and scale with security tech stacks as they move toward zero trust.
"
|
3,507 | 2,022 |
"Spring4Shell vulnerability likely to affect real-world apps, analyst says | VentureBeat"
|
"https://venturebeat.com/security/spring4shell-vulnerability-likely-to-affect-real-world-apps-analyst-says"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Spring4Shell vulnerability likely to affect real-world apps, analyst says Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
More answers are emerging about the potential risks associated with a newly disclosed remote code execution (RCE) vulnerability in Spring Core, known as Spring4Shell — with new evidence pointing to a possible impact on real-world applications.
While researchers have noted that comparisons between Spring4Shell and the critical Log4Shell vulnerability are likely inflated , analysts Colin Cowie and Will Dormann separately posted confirmations Wednesday, showing that they were able to get an exploit for the Spring4Shell vulnerability to work against sample code supplied by Spring.
“If the sample code is vulnerable, then I suspect there are indeed real-world apps out there that are vulnerable to RCE,” Dormann said in a tweet.
Still, as of this writing, it’s not clear how broad the impact of the vulnerability might be, or which specific applications might be vulnerable.
That alone would appear to suggest that the risk associated with Spring4Shell is not comparable to that of Log4Shell, a high-severity RCE vulnerability that was disclosed in December. The vulnerability affected the widely used Apache Log4j logging library, and was believed to have impacted most organizations.
Still to-be-determined about Spring4Shell, Dormann said on Twitter, is the question of “what actual real-world applications are vulnerable to this issue?” “Or is it likely to affect mostly just custom-built software that uses Spring and meets the list of requirements to be vulnerable,” he said in a tweet.
Spring is a popular framework used in the development of Java web applications.
Vulnerability details Researchers at several cybersecurity firms have analyzed and published details on the Spring4Shell vulnerability, which was disclosed on Tuesday. At the time of this writing, patches are not yet available.
Security engineers at Praetorian said Wednesday that the vulnerability affects Spring Core on JDK (Java Development Kit) 9 and above. The RCE vulnerability stems from a bypass of CVE-2010-1622 , the Praetorian engineers said.
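For teams triaging their own estates, a rough precondition check along these lines can help prioritize review. The sketch below is illustrative only; the /opt/apps path and the jar-naming pattern are assumptions, and meeting the preconditions does not mean an application is exploitable.

```python
# Illustrative triage helper for the preconditions described above: JDK 9 or later
# plus Spring Core (spring-beans) bundled with an application. Meeting these
# preconditions does not prove an app is exploitable; it only flags it for review.
import re
import subprocess
from pathlib import Path

def java_major_version() -> int | None:
    # `java -version` writes to stderr, e.g. 'openjdk version "17.0.2"' or '"1.8.0_312"'.
    try:
        out = subprocess.run(["java", "-version"], capture_output=True, text=True).stderr
    except FileNotFoundError:
        return None
    match = re.search(r'version "(\d+)(?:\.(\d+))?', out)
    if not match:
        return None
    major = int(match.group(1))
    # Legacy "1.x" version strings mean Java x (e.g. "1.8" is Java 8).
    return int(match.group(2)) if major == 1 and match.group(2) else major

def find_spring_beans_jars(root: str) -> list[Path]:
    # Assumed packaging convention: spring-beans-<version>.jar somewhere under the app root.
    return list(Path(root).rglob("spring-beans-*.jar"))

if __name__ == "__main__":
    jdk = java_major_version()
    jars = find_spring_beans_jars("/opt/apps")  # hypothetical deployment directory
    if jdk is not None and jdk >= 9 and jars:
        print(f"JDK {jdk} and {len(jars)} spring-beans jar(s) found: review against the advisory")
    else:
        print("Spring4Shell preconditions not detected (or undetermined)")
```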
The Praetorian engineers said they have developed a working exploit for the RCE vulnerability. “We have disclosed full details of our exploit to the Spring security team, and are holding off on publishing more information until a patch is in place,” they said in a blog post.
(Importantly, the Spring4Shell vulnerability is different from the Spring Cloud vulnerability that is tracked at CVE-2022-22963 and that, confusingly, was disclosed at around the same time as Spring4Shell.) The bottom line with Spring4Shell is that while it shouldn’t be ignored, “this vulnerability is NOT as bad” as the Log4Shell vulnerability , cybersecurity firm LunaSec said in a blog post.
All attack scenarios with Spring4Shell, LunaSec said, “are more complex and have more mitigating factors than Log4Shell did.”
"
|
3,508 | 2,022 |
"Nvidia is bringing zero trust security into data centers | VentureBeat"
|
"https://venturebeat.com/security/nvidia-bringing-zero-trust-security-into-data-centers"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nvidia is bringing zero trust security into data centers Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Follow along with VentureBeat’s coverage from Nvidia’s GTC 2022 event >> Nvidia’s latest product strategy updates announced last week at the March 2022 GPU Technology Conference (GTC) reflect the high priority their devops and engineering teams are placing on closing the growing gaps in data center cybersecurity. Gaps in cybersecurity tech stacks are purportedly growing because the platforms supporting them typically were not designed for a zero-trust world.
The lack of platform and tech stack support makes implementing least-privileged access across data centers, down to the server level, financially unattainable for many IT budgets. Additionally, accomplishing microsegmentation across legacy servers and integrating identity access management (IAM) takes longer on legacy tech stacks. Likewise, implementing privileged access management (PAM) across a legacy infrastructure environment often requires integration workarounds.
Top-down approaches to equipping legacy tech stacks with the technology needed to support zero trust can be hard to do well. Nvidia’s product and solution strategies, unveiled at GTC 2022, seem to underscore that the company understands this and is taking aim at the opportunity to solve complex tech stack challenges and grow its total available market simultaneously.
Nvidia’s rapid platform progress on cybersecurity VMware’s Project Monterey, which is supported by Nvidia’s BlueField-2 DPU (currently in beta), reflects how ingrained the design goal of augmenting enterprise tech stacks is in Nvidia’s product strategy. For example, the Nvidia BlueField-3 DPU, a programmable data center infrastructure-on-a-chip, has a public key accelerator (PKA), root-of-trust, secure firmware updates, flash encryption and Cerberus compliance designed into its silicon and network platforms, features that work together to enhance security. Specifically, the Monterey LaunchPad beta is flexible enough in design to support microsegmentation across a data center, a core requirement for implementing a zero-trust framework.
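Microsegmentation itself boils down to replacing flat east-west trust with explicit allow-lists between workload segments. The sketch below is a purely conceptual illustration of that default-deny policy model, with made-up segment names; it is not Nvidia's or VMware's implementation.

```python
# Conceptual illustration of microsegmentation as explicit east-west allow-lists.
# Segment names and rules are made up; this is not Nvidia's or VMware's implementation.
ALLOWED_FLOWS = {
    ("web-tier", "app-tier"): {443},
    ("app-tier", "db-tier"): {5432},
}

def is_flow_allowed(src_segment: str, dst_segment: str, port: int) -> bool:
    # Default deny: any flow not explicitly allowed between segments is dropped.
    return port in ALLOWED_FLOWS.get((src_segment, dst_segment), set())

assert is_flow_allowed("web-tier", "app-tier", 443)
assert not is_flow_allowed("web-tier", "db-tier", 5432)  # no direct web-to-database path
```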
Also announced at last week’s conference, Nvidia’s Hopper GPU architecture and new H100 GPU , has confidential computing support designed to secure data and models. The H100 GPU also reflects company-wide design goals focused on enabling greater zero-trust across all products. Its confidential computing capabilities are designed to protect AI models and customer data while in process.
Confidential computing isolates data in an encrypted area during processing. The contents of the encrypted area, including data being processed, are accessible only to authorized programming code and are invisible to anyone else.
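As a purely conceptual illustration of that pattern (not an actual enclave and not Nvidia's API), the sketch below keeps data encrypted everywhere except inside a stand-in "enclave" function; it assumes the third-party cryptography package and a key that, in real hardware, would never leave the enclave.

```python
# Purely conceptual sketch of the confidential-computing pattern: data stays
# encrypted outside the trust boundary and is only decrypted by attested code
# "inside" it. Not a real enclave and not Nvidia's API; requires the third-party
# cryptography package, and the key stands in for hardware-managed enclave keys.
from cryptography.fernet import Fernet

enclave_key = Fernet.generate_key()     # in real hardware this key never leaves the enclave
enclave = Fernet(enclave_key)
data_owner = Fernet(enclave_key)        # the data owner releases the key only after attestation

ciphertext = data_owner.encrypt(b"customer record")   # all the host/cloud ever handles

def enclave_process(blob: bytes) -> bytes:
    # Only code inside the attested boundary can decrypt and compute on the data.
    plaintext = enclave.decrypt(blob)
    result = plaintext.upper()                         # stand-in for inference on protected data
    return enclave.encrypt(result)

print(enclave_process(ciphertext))                     # everything outside remains ciphertext
```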
The Nvidia AI platform also proves pivotal in enabling enterprises to close gaps in their cybersecurity tech stacks. It’s used by over 25,000 companies worldwide. Nvidia AI Enterprise 2.0 is a cloud-native suite of AI and data analytics tools and frameworks, optimized and certified by the company and supported across every major data center and cloud platform.
“We updated 60 SDKs (software development kits) at this GTC,” said Jensen Huang, Nvidia’s CEO. “For our 3 million developers, scientists and AI researchers and tens of thousands of startups and enterprises, the same Nvidia systems you run just got faster.” Given how ingrained cybersecurity and zero trust are within Nvidia’s devops design goals, the company provides the tools customers need to close gaps in their tech stacks that put them at risk.
National standards aim to create benchmarks for zero trust architecture Nearly every CISO and CIO has preferred approaches for benchmarking and assessing how much a given vendor’s solution reduces risk and secures the business. Organizations ideally should benchmark how effective Nvidia is in helping them reach their zero-trust goals. Currently, a growing base of new benchmarks and frameworks is being created for CISOs, CIOs and their teams in this area.
One of the primary catalysts driving the development of these essential benchmarks is the National Security Telecommunications Advisory Committee’s (NSTAC) report, Zero Trust and Trusted Identity Management.
President Biden’s Executive Order 14028: Improving the Nation’s Cybersecurity defines zero trust architecture as the cybersecurity standard across all government agencies. It relies on on the latest National Institute of Standards and Technology (NIST) zero trust architecture standard ( NIST 800-207: Zero Trust Architecture ).
As a supplement to the above, the president’s office of management and budget’s Federal Zero Trust Strategy has pragmatic, useful insights any organization can use for planning their zero trust initiatives.The Department of Defense (DoD) Zero Trust Reference Architecture also provides a useful taxonomy for organizing each area of a zero-trust security strategy.
Of the many maturity models created since EO 14028 was signed, one of the most valuable is from the Cybersecurity & Infrastructure Security Agency (CISA).
Unlike many vendor-based models that could be biased towards a given technology or deployment methodology, CISA has strived to create an impartial, fair model that can span an enterprise’s five core security dimensions. The CISA Zero Trust Maturity Model provides insights into traditional, advanced and optimal levels of zero trust maturity. It’s a useful framework for CISOs and CIOs to communicate roadmap goals from a long-term or strategic standpoint.
Filling gaps in the tech stack Nvidia excels at finding gaps in tech stacks, then engineering new solutions, from silicon to SDKs, to solve them. The company’s rapid advances in zero-trust security are a case in point. Last week at GTC 2022, Nvidia DOCA 1.3 was launched along with updates to 60 different SDKs to streamline the development efforts of partners, startups and enterprises standardizing on the Nvidia AI platform. In addition, Nvidia Morpheus, the company’s continuously learning cybersecurity framework, continues to gain adoption across data centers.
Technologies like these, which Nvidia continues to push to the forefront, will help enterprise leaders and security teams implement the national guidelines laid out by government entities.
"
|
3,509 | 2,022 |
"Google Cloud security survey is 'aggressive' move vs. Microsoft | VentureBeat"
|
"https://venturebeat.com/security/google-cloud-security-survey-is-aggressive-move-vs-microsoft"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google Cloud security survey is ‘aggressive’ move vs. Microsoft Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
A new survey commissioned by Google Cloud brings pointed criticism against Microsoft over the security of its platforms for government workers — suggesting that the battle for customers in cybersecurity is heating up between the two cloud giants, security industry executives told VentureBeat.
This line of argument — that Microsoft is a fundamental part of the cybersecurity problem, rather than the solution — has been made in the past by Microsoft security rivals such as CrowdStrike.
But the survey appears to be the most outspoken critique of this kind against Microsoft by Google Cloud so far.
The results of the survey were released Thursday in a blog post by Jeanette Manfra, senior director for global risk and compliance at Google Cloud. The post’s headline — “Government workers say Microsoft tech makes them less secure” — makes it abundantly clear what Google Cloud is aiming to convey, industry executives said in comments via email on Thursday.
“The poll itself is a transparent attempt to create a marketing message against Microsoft,” said John Bambenek, principal threat hunter at IT and security operations firm Netenrich. “While that means taking its conclusions with a grain of salt, it also means they are taking an aggressive approach to displace Microsoft using techniques more often seen in political campaigns.”
The language of the post seems tailored to a government audience, as it is “very much at home in Washington, D.C.,” Bambenek said.
‘More vulnerable’ The survey’s key finding related to Microsoft: 60% of government employees who responded said they believe that “the federal government’s reliance on products and services from Microsoft makes it more vulnerable to hacking or a cyberattack.” The poll was conducted by Public Opinion Strategies, and surveyed 338 workers employed by the federal, state or local government around the U.S.
Based on these findings, “it’s clear that there’s an overreliance on legacy solutions [in government], despite a track record of cybersecurity vulnerabilities and poor user perception,” Manfra said in the blog post.
With this survey, it’s fair to conclude that Google is “taking a direct shot at Microsoft,” said Amit Yoran, chairman and CEO of cybersecurity firm Tenable.
That’s evident given that Google, much like Microsoft, makes its moves very deliberately and precisely — particularly when it comes to its public comments, Yoran said.
Ultimately, this “doesn’t seem like a random survey, especially considering Google’s acquisition of Mandiant,” Yoran said, referring to Google’s agreement disclosed this month to acquire prominent cyber firm Mandiant for $5.4 billion. Earlier, Microsoft had reportedly looked at acquiring Mandiant, before the talks fell through and Google stepped in.
Casey Bisson, head of product and developer relations at code security solutions firm BluBracket, said he agreed that this survey is part of an attempt by Google to challenge Microsoft’s market position. Along with being a dominant provider of productivity applications and now a major security vendor in its own right, Microsoft Azure also ranks as the second-largest public cloud platform by market share (21%) — behind AWS (33%) but ahead of Google Cloud (10%), according to Synergy Research Group.
With this survey tactic, Google is taking on Microsoft in security by “leveraging their legacy against them,” Bisson said. “Google is following the same playbook Apple used against Microsoft in the consumer space two decades ago.” Microsoft’s response In a statement, Frank Shaw, corporate vice president for communications at Microsoft, called the Google Cloud survey “disappointing but not surprising” — given a report today about a lobbying campaign funded in part by Google, which Shaw claims has been “misrepresenting small businesses.” “It is also unhelpful to create divisions in the security community at a time when we should all be working together on heightened alert,” Shaw said in the statement. “We will continue to collaborate across the industry to jointly defend our customers and government agencies, and we will continue to support the U.S. government with our best software and security services.” Google Cloud declined to comment Thursday on Microsoft’s statement or the comments by cybersecurity industry executives.
The new survey — which polled a total of 2,600 American workers, including the 338 government employees — builds on a previous Google Cloud-commissioned survey about the U.S. public sector’s use of software. The previous survey found 85% market share for Microsoft in office productivity software used in the U.S. public sector. The Google Workspace productivity suite competes with the Microsoft 365 suite of productivity apps.
Due to a number of factors, including the near-ubiquity of its platforms, Microsoft “will always be an easy target for rivals when it comes to security,” said Aaron Turner, vice president for SaaS posture at Vectra.
And while it’s true that Microsoft has suffered from “significant security problems lately due to the intensifying attacks on Azure Active Directory,” Turner said, Google Cloud has yet to prove itself as a comparable competitor in the security space.
Big security investments Google appears to be working hard on it, though: Besides the planned Mandiant acquisition, the company made a flurry of other investments recently including the acquisition of SOAR (security orchestration, automation and response) firm Siemplify in January and a series of expansions to its Chronicle security platform.
In a recent interview with VentureBeat, Sunil Potti, vice president and general manager for Google Cloud’s security business, said the contrast between Google Cloud and Microsoft’s approaches to security should be obvious.
“Microsoft has been very clear that they want to compete in security against all the partners, and everybody,” Potti said. Google, on the other hand, has chosen “a few markets we believe a cloud provider alone should drive,” and is offering first-party products just in those spaces, he said.
“But around each of those first-party products, we’ll create an ecosystem that leverages partners,” he said. That, again, is “unlike Microsoft, who wants to touch everything,” Potti said.
Industry analysts said that Google most definitely had Microsoft in its sights with the deal to acquire Mandiant. “Microsoft has been dominating the security industry for the past several years, and this string of acquisitions by Google shows its interest in playing a bigger role in the industry,” Forrester analyst Allie Mellen previously told VentureBeat.
Poor security practices to blame? In the larger scheme of things, though, Google’s core argument about Microsoft doesn’t entirely hold up, said Phil Neray, vice president of cyber defense strategy at cyber firm CardinalOps.
“The reality is that most high-profile attacks are the result of poor security practices rather than vulnerabilities in office productivity suites,” Neray said.
He pointed to past incidents such as the federal Office of Personnel Management breach in 2015, attributed to having “insufficient security monitoring to detect unusual activity in the network after attackers stole credentials from a government contractor.” Meanwhile, the Equifax breach in 2017 “was the result of poor web server patching practices. The SolarWinds breach occurred after attackers infected software updates for an IT application that’s widely used in both government and civilian organizations. The DNC breach was the result of a phishing attack,” Neray said. “And in the case of the Colonial Pipeline ransomware incident, the attackers exploited the fact that the company had a high number of open remote access ports accessible from the internet.”
"
|
3,510 | 2,022 |
"Gartner lists seven cybersecurity trends for 2022 | VentureBeat"
|
"https://venturebeat.com/security/gartner-lists-seven-cybersecurity-trends-for-2022"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Gartner lists seven cybersecurity trends for 2022 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
CISOs’ roles need to transition from technologists who prevent breaches to corporate strategists managing cyber risks. Unfortunately, slowing down CISOs’ career growth are security tech stacks that aren’t designed for new digital transformation, virtualization and hybrid cloud initiatives in their companies. Gartner’s recently published top security and risk management trends for 2022 report explains where the most vulnerable security stack gaps are.
The seven trends also help to explain the many challenges CISOs face when transitioning their careers and cybersecurity spending away from tactics and into strategic roles. Implicit in these trends is the urgent need to treat cybersecurity as a business decision. Taken together from the standpoint of enterprises focused on new digital initiatives, the seven trends show clearly that cybersecurity needs to be a business enabler first. The two trending proof points of cybersecurity’s business value are decentralized decision-making and faster response times to business challenges.
How Gartner’s trends define a cybersecurity roadmap Responding to threats is what enterprises and their CISOs need the most help with today. As a result, Gartner chose to organize their trends and assign most of them to threat response. That’s a clear indication that their enterprise clients are focused on this area and looking for guidance. Attack Surface Expansion, Identity Threat Detection and Response and Digital Supply Chain Risk are the three trends Gartner sees as most important for threat response.
Rethinking Technology is the second strategic trend, including Vendor Consolidation and Cybersecurity Mesh. The third strategic trend is Reframing The Cybersecurity Practice. Gartner adds Distribution Decisions and Beyond Awareness to this group.
Taken together, Gartner’s trends create a high-level cybersecurity roadmap that any enterprise can follow. Best of all, it starts out closing the gaps in existing security tech stacks at their most vulnerable breakpoints. These include identity access management (IAM), privileged access management (PAM) and reducing threats to digital supply chains.
Translating the seven trends into a strategic roadmap yields the following:
Roadmap phase 1, responding to threats: attack surface expansion, identity threat detection and response, digital supply chain risk.
Roadmap phase 2, rethinking technology: vendor consolidation, cybersecurity mesh.
Roadmap phase 3, reframing practice: distributing decisions, beyond awareness.
What the trends mean for CISOs The more adept a security stack becomes at managing risk and supporting new business, the greater the potential career growth for CISOs. Unfortunately, legacy systems don’t just hold enterprises back from growing; they hold careers back too. Today, speed and time-to-market are getting compressed across all digital business initiatives and new ventures. That’s the catalyst driving the urgency behind the seven trends.
The trends mean the following to CISOs today: Decentralized cybersecurity is an asset.
Getting away from centralized cybersecurity and adopting a more decentralized organization and supporting tech stack increases an organization’s speed, responsiveness and adaptability to new business ventures. Centralized cybersecurity is a bottleneck that limits the progress of new initiatives and limits the careers of those managing them, most often CISOs.
Cybersecurity needs extreme ownership.
The hardest part of any CISO’s job is getting the thousands of employees in their organizations to follow cybersecurity hygiene. Authoritarian approaches and continual virtual learning programs are limited in effectiveness, evidenced by the record ransomware breaches in 2021 and continuing this year. CISOs need to take on change management to create extreme ownership of outcomes by employees. Finding new ways to reward ownership for cybersecurity and good security hygiene are key. The best-selling book, Extreme Ownership , is an excellent read and one that CISOs and their teams need to consider reading this year when it comes to leadership and change management.
Attack surfaces are just getting started.
It’s a safe bet that the number, complexity and challenges of managing multiple threat surfaces are only going to grow. CISOs and their teams need to anticipate it and secure their digital supply chains, especially in their core DevOps process areas. Getting IAM and PAM right is also essential, as the trend Identity Threat Detection and Response explains.
CISOs: find new ways to add value Getting bogged down in security tactics puts enterprises and careers at risk. Instead, concentrate on treating cyber-risk as a business and organizational risk first. Only then can CISOs make their organization an enabler and accelerator of new products rather than a roadblock to new revenue. Most important is for CISOs to look at the trends through the lens of how they can build stronger relationships outside of IT, starting with other C-level executives and board members, with a specific focus on the CRO and CMO. The two executives most responsible for revenue also make the riskiest decisions for an enterprise. Showing how cybersecurity can manage that risk is a great way to grow a business and a career.
"
|
3,511 | 2,022 |
"Don't ignore Spring4Shell. But there's still no sign it's widespread | VentureBeat"
|
"https://venturebeat.com/security/dont-ignore-spring4shell-but-theres-still-no-sign-its-widespread"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Don’t ignore Spring4Shell. But there’s still no sign it’s widespread Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Patches are now available for the Spring4Shell vulnerability , and security teams are continuing to assess the potential for the remote code execution (RCE) flaw to affect applications. But as of this writing, there continues to be little evidence of widespread risk from the recently disclosed Spring Core vulnerability.
Organizations are encouraged to assess the situation for themselves to determine their level of risk exposure, according to security professionals including Chris Partridge, who has compiled details about the Spring4Shell vulnerability on GitHub.
However, “thus far nobody’s found evidence that this is widespread,” Partridge said on the GitHub page.
“This is a severe vulnerability, sure, but it only impacts non-default usage of Spring Core with no proven widespread viability. It’s categorically not log4shell-like.” In a message to VentureBeat, Partridge said that “it’s great that Spring is taking this fix seriously. Hopefully, no bypasses are found.”
Spring is a popular framework used in the development of Java web applications.
Patches available On Thursday, Spring published a blog post with details about patches, exploit requirements and suggested workarounds for Spring4Shell. The RCE vulnerability, which is being tracked at CVE-2022-22965 , affects JDK 9 or higher and has several additional requirements for it to be exploited, the Spring blog post says.
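Because the fix is a framework upgrade, much of the triage reduces to comparing the Spring Framework version in use against the patched releases. The sketch below assumes the fixed versions are 5.3.18 and 5.2.20, as listed in Spring's advisory at the time, and takes version strings from whatever dependency report you already have.

```python
# Sketch: flag Spring Framework versions below the patched releases for CVE-2022-22965.
# Assumes the fixed versions are 5.3.18 and 5.2.20 (per Spring's advisory at the time)
# and that version strings come from an existing dependency report (e.g. `mvn dependency:list`).
PATCHED = {(5, 3): (5, 3, 18), (5, 2): (5, 2, 20)}

def parse(version: str) -> tuple[int, int, int]:
    major, minor, patch = (int(p) for p in version.split(".")[:3])
    return major, minor, patch

def needs_upgrade(spring_version: str) -> bool:
    v = parse(spring_version)
    fixed = PATCHED.get(v[:2])
    # Branches without a listed patch (e.g. out-of-support lines) are flagged for review.
    return fixed is None or v < fixed

for version in ["5.3.17", "5.3.18", "5.2.19.RELEASE", "5.1.20"]:
    print(version, "->", "upgrade/review" if needs_upgrade(version) else "patched")
```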
Among other things, the blog post confirms that the Spring4Shell vulnerability is not Log4Shell 2.0, said Ian McShane, vice president of strategy at Arctic Wolf.
“It’s an RCE, so it’s a high-priority risk. But the fact that it needs a non-default implementation should limit the scope, especially compared with Log4Shell,” McShane said in an email.
The Apache Log4j logging software — which was impacted by the Log4Shell vulnerability disclosed in December — was embedded in countless applications and services and was vulnerable by default, he noted.
Spring4Shell, by contrast, “doesn’t seem to be a comparable risk. But that doesn’t mean organizations can ignore it,” McShane said. “As with all application vulnerabilities, especially ones that are internet-facing by design, you need to find out if you are at risk before you discount it.” Despite the similar naming to Log4Shell, it’s now clear that Spring4Shell is “definitely not as big,” said Satnam Narang, staff research engineer at Tenable.
“That said, we’re still in the early phases of figuring out what applications out there might be vulnerable, and we’re basing this on what’s known,” Narang said in an email. “There are still some question marks around if there are other ways to exploit this flaw.” More-accurate picture If anything, though, the blog post from Spring only narrows the range of vulnerable instances, said Mike Parkin, senior technical engineer at Vulcan Cyber.
And by clarifying the exploitable conditions, the update gives the security community a more-accurate picture of potential risk, Parkin said.
“However, attackers may find creative ways to leverage this vulnerability beyond the identified target range,” he said in an email. At the moment, though, there are no reports of the vulnerability being exploited in the wild, Parkin noted.
John Bambenek, principal threat hunter at Netenrich, agreed that the vulnerability appears to affect fewer machines as compared to Log4Shell.
There are some specific environments that Spring4Shell may apply to, “but the more dangerous case of embedded or vendor-provided machines are less likely to see this vulnerability,” Bambenek said.
More info still needed In an update to its blog post on the RCE vulnerability, Flashpoint and its Risk Based Security unit said that because Spring Core is a library, “The exploit methodology will likely change from user to user.” “More information is needed to assess how many devices run on the needed configurations,” the updated Flashpoint blog post says.
Colin Cowie, a threat analyst at Sophos, and vulnerability analyst Will Dormann separately posted confirmations Wednesday, showing that they were able to get an exploit for the Spring4Shell vulnerability to work against sample code supplied by Spring.
“If the sample code is vulnerable, then I suspect there are indeed real-world apps out there that are vulnerable to RCE,” Dormann said in a tweet.
Still, as of this writing, it’s not clear which specific applications might be vulnerable.
The bottom line is that Spring4Shell is “definitely cause for concern — but seems to be quite a bit more difficult to successfully exploit than Log4j,” said Casey Ellis, founder and CTO at Bugcrowd, in an email.
In any case, given the large volume of research and discussion around Spring4Shell, defenders would be well-advised to mitigate and/or patch as soon as possible, Ellis said.
It’s also likely that new flavors of this vulnerability could emerge in the near future, said Yaniv Balmas, vice president of research at Salt Security. “These could impact other web servers and platforms and widen the reach and potential impact of this vulnerability,” Balmas said in an email.
"
|
3,512 | 2,022 |
"3 inspiring jobs to apply for right now | VentureBeat"
|
"https://venturebeat.com/programming-development/3-inspiring-jobs-to-apply-for-right-now"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Jobs 3 inspiring jobs to apply for right now Share on Facebook Share on X Share on LinkedIn On the hunt for an exciting new job? You’ve come to the right place. There are countless companies currently advertising for exciting tech roles on our job board, and we wanted to share them with you! Check them out now, and get ready to start your new adventure.
Senior Penetration Tester, Robinhood Robinhood was founded on a simple idea: that financial markets should be accessible to all. With customers at the heart of decisions, Robinhood is lowering barriers and providing greater access to financial information. They are building products and services that help create a financial system everyone can participate in.
Robinhood is looking for a Penetration Tester who is passionate about breaking and fixing applications, services, and processes to join the Robinhood pentest team. The pentest team is part of the larger Offensive Security team and is a core pillar of Security & Privacy Engineering. The pentest team works with teams across Robinhood to ensure that products, services, and processes are secure through threat modeling, automated and manual penetration testing, and tracking remediations of identified vulnerabilities.
They’re looking for more growth-minded and collaborative people to be a part of their journey in democratizing finance for all. If you’re ready to give 100% in helping them achieve their mission—they want you to apply even if you feel unsure about whether you meet every single requirement in this posting. At Robinhood, they’re looking for people invigorated by their mission, values, and drive to change the world, not just those who simply check off all the boxes.
Lead/Staff Production Engineer, Shopify Shopify’s mission is to make commerce better for everyone. From building a new product feature for a commerce platform, to helping a merchant troubleshoot an issue over the phone, they want to empower an ecosystem through their work. Having a unified vision, a north star, is vitally important to ensure that they are all headed in the same direction. No matter the size or experience, they want to power every merchant’s experience.
Shopify is now permanently remote, and they are working towards a future that is digital by design. At Shopify, Lead Production Engineers (also referred to as Staff Engineers) use their expertise and passion to multiply the overall output of their development team. As a technical leader, you’ll help drive your team’s vision to its implementation. You and the team will design and build technically innovative solutions that empower all teams at Shopify to build powerful and resilient distributed cloud software. Merchants that depend on Shopify for a highly scalable, performant, and reliable platform benefit directly from the work you do. You will maintain a high bar for quality and lead and mentor other engineers. And of course, you’ll be hands-on in the code and contribute technically.
As an experienced infrastructure technical leader, they need your help to both start new teams and expand and grow the technology of their existing teams. There are multiple positions available on a variety of teams and Shopify will work with you as part of the interview process to identify which team best fits your interests, needs and experience.
Software Engineer II, Indeed This Software Engineer II role will join the jobs management team, which falls within Indeed’s SMB Growth operation. This team has complete ownership of the main landing page used by employers when they post job advertisements. In this role, your immediate impact will be joining the effort to revamp both the front-end and back-end to include modern open-source tech stacks (React (Hooks), Typescript, Microservices Architecture, etc.). Future work will entail new feature development in an on-going effort to keep the site user-friendly and effective for employers and job seekers.
This is a full stack role but leans heavier on the front end. Technical experience with React, Javascript, Typescript, GraphQL, and Microservices Architecture is ideal. As a Software Engineer in the Jobs Management Team within SMB Growth, you will build software that guides employers to post high-quality jobs which achieve good performance in Indeed’s search results. You will build UIs and APIs to offer helpful guidance to employers and highlight optimization opportunities and actions with jobs that help employers to make timely quality hires.
The Small to Medium Businesses (SMB) organization at Indeed develops products centered around the hiring needs of SMB businesses, making the hiring process simpler, faster and more effective so they always find the talent that is right for their business, when they need it. The solutions the team provides are focused on three main pillars: ‘Building for the Long-Term’, ‘Delivering the Hire’ and ‘Driving Performance’. The impact of this work helps grow companies and communities around the world.
For even more amazing tech opportunities, head over to our job board now!
"
|